In mathematics, more specifically functional analysis and operator theory, the notion of unbounded operator provides an abstract framework for dealing with differential operators, unbounded observables in quantum mechanics, and other cases. The term "unbounded operator" can be misleading, since "unbounded" should be understood as "not necessarily bounded", and "operator" as "linear operator" defined on a linear subspace rather than on the whole space. In contrast to bounded operators, unbounded operators on a given space do not form an algebra, nor even a linear space, because each one is defined on its own domain. The term "operator" often means "bounded linear operator", but in the context of this article it means "unbounded operator", with the reservations made above. The theory of unbounded operators developed in the late 1920s and early 1930s as part of developing a rigorous mathematical framework for quantum mechanics.[1] The theory's development is due to John von Neumann[2] and Marshall Stone.[3] Von Neumann introduced the use of graphs to analyze unbounded operators in 1932.[4]

Let X, Y be Banach spaces. An unbounded operator (or simply operator) T : D(T) → Y is a linear map T from a linear subspace D(T) ⊆ X, the domain of T, to the space Y.[5] Contrary to the usual convention, T may not be defined on the whole space X. An operator T is said to be closed if its graph Γ(T) is a closed set.[6] (Here, the graph Γ(T) is a linear subspace of the direct sum X ⊕ Y, defined as the set of all pairs (x, Tx), where x runs over the domain of T.) Explicitly, this means that for every sequence {xₙ} of points from the domain of T such that xₙ → x and Txₙ → y, it holds that x belongs to the domain of T and Tx = y.[6] The closedness can also be formulated in terms of the graph norm: an operator T is closed if and only if its domain D(T) is a complete space with respect to the norm ‖x‖_T = (‖x‖² + ‖Tx‖²)^{1/2}.[7]

An operator T is said to be densely defined if its domain is dense in X.[5] This also includes operators defined on the entire space X, since the whole space is dense in itself. The denseness of the domain is necessary and sufficient for the existence of the adjoint (if X and Y are Hilbert spaces) and the transpose; see the sections below. If T : D(T) → Y is closed, densely defined and continuous on its domain, then its domain is all of X.[nb 1] A densely defined symmetric operator T on a Hilbert space H is called bounded from below if T + a is a positive operator for some real number a. That is, ⟨Tx | x⟩ ≥ −a‖x‖² for all x in the domain of T (or alternatively ⟨Tx | x⟩ ≥ a‖x‖², since a is arbitrary).[8] If both T and −T are bounded from below, then T is bounded.[8]

Let C([0, 1]) denote the space of continuous functions on the unit interval, and let C¹([0, 1]) denote the space of continuously differentiable functions. We equip C([0, 1]) with the supremum norm ‖·‖∞, making it a Banach space. Define the classical differentiation operator d/dx : C¹([0, 1]) → C([0, 1]) by the usual formula (d/dx f)(x) = lim_{h→0} (f(x + h) − f(x))/h. Every differentiable function is continuous, so C¹([0, 1]) ⊆ C([0, 1]). We claim that d/dx : C([0, 1]) → C([0, 1]) is a well-defined unbounded operator with domain C¹([0, 1]). For this, we need to show that d/dx is linear and then, for example, exhibit some {fₙ} ⊂ C¹([0, 1]) such that ‖fₙ‖∞ = 1 and supₙ ‖(d/dx)fₙ‖∞ = +∞. The operator is linear, since a linear combination af + bg of two continuously differentiable functions f, g is also continuously differentiable, and (d/dx)(af + bg) = a(df/dx) + b(dg/dx). The operator is not bounded: for example, the functions fₙ(x) = sin(2πnx) satisfy ‖fₙ‖∞ = 1, but ‖(d/dx)fₙ‖∞ = 2πn → ∞ as n → ∞. The operator is densely defined and closed.
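A quick numerical illustration (not a proof) of the unboundedness just shown, sketched in Python with NumPy; the grid resolution is an arbitrary choice, and fₙ(x) = sin(2πnx) follows the example above.

    import numpy as np

    # On C([0, 1]) with the sup norm, f_n(x) = sin(2*pi*n*x) has
    # ||f_n||_inf = 1 while ||f_n'||_inf = 2*pi*n, so no constant K
    # can satisfy ||f'||_inf <= K * ||f||_inf for all f in C^1([0, 1]).
    x = np.linspace(0.0, 1.0, 100_001)
    for n in (1, 10, 100):
        f = np.sin(2 * np.pi * n * x)
        df = np.gradient(f, x)  # finite-difference approximation to f'
        print(f"n={n:>3}: sup|f| ~ {np.max(np.abs(f)):.3f}, "
              f"sup|f'| ~ {np.max(np.abs(df)):.1f} (exact {2 * np.pi * n:.1f})")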
The same operator can be treated as an operator Z → Z for many choices of Banach space Z and fail to be bounded between any of them. At the same time, it can be bounded as an operator X → Y for other pairs of Banach spaces X, Y, and also as an operator Z → Z for some topological vector spaces Z. As an example, let I ⊂ R be a bounded open interval. On its closure Ī, the differentiation operator d/dx : (C¹(Ī), ‖·‖_{C¹}) → (C(Ī), ‖·‖∞), where ‖f‖_{C¹} = ‖f‖∞ + ‖f′‖∞, is bounded, while the same operator from (C¹(Ī), ‖·‖∞) to (C(Ī), ‖·‖∞) is unbounded, as shown above.

The adjoint of an unbounded operator can be defined in two equivalent ways. Let T : D(T) ⊆ H₁ → H₂ be an unbounded operator between Hilbert spaces. First, it can be defined in a way analogous to how one defines the adjoint of a bounded operator. Namely, the adjoint T* : D(T*) ⊆ H₂ → H₁ of T is defined as an operator with the property ⟨Tx | y⟩₂ = ⟨x | T*y⟩₁ for all x ∈ D(T). More precisely, T*y is defined in the following way. If y ∈ H₂ is such that x ↦ ⟨Tx | y⟩ is a continuous linear functional on the domain of T, then y is declared to be an element of D(T*), and after extending the linear functional to the whole space via the Hahn–Banach theorem, it is possible to find some z in H₁ such that ⟨Tx | y⟩₂ = ⟨x | z⟩₁ for all x ∈ D(T), since the Riesz representation theorem allows the continuous dual of the Hilbert space H₁ to be identified with the set of linear functionals given by the inner product. The vector z is uniquely determined by y if and only if the linear functional x ↦ ⟨Tx | y⟩ is densely defined, or equivalently, if T is densely defined. Finally, letting T*y = z completes the construction of T*, which is necessarily a linear map. The adjoint T* exists if and only if T is densely defined.

By definition, the domain of T* consists of the elements y in H₂ such that x ↦ ⟨Tx | y⟩ is continuous on the domain of T. Consequently, the domain of T* could be anything; it could be trivial (that is, contain only zero).[9] It may happen that the domain of T* is a closed hyperplane and T* vanishes everywhere on that domain.[10][11] Thus, boundedness of T* on its domain does not imply boundedness of T. On the other hand, if T* is defined on the whole space, then T is bounded on its domain and therefore can be extended by continuity to a bounded operator on the whole space.[nb 2] If the domain of T* is dense, then it has its own adjoint T**.[12] A closed densely defined operator T is bounded if and only if T* is bounded.[nb 3]

The other equivalent definition of the adjoint can be obtained by noticing a general fact. Define a linear operator J as follows:[12] J : H₁ ⊕ H₂ → H₂ ⊕ H₁, J(x ⊕ y) = −y ⊕ x. Since J is an isometric surjection, it is unitary.
Hence J(Γ(T))⊥ is the graph of some operator S if and only if T is densely defined.[13] A simple calculation shows that this S satisfies ⟨Tx | y⟩₂ = ⟨x | Sy⟩₁ for every x in the domain of T. Thus S is the adjoint of T. It follows immediately from the above definition that the adjoint T* is closed.[12] In particular, a self-adjoint operator (meaning T = T*) is closed. An operator T is closed and densely defined if and only if T** = T.[nb 4]

Some well-known properties of bounded operators generalize to closed densely defined operators. The kernel of a closed operator is closed. Moreover, the kernel of a closed densely defined operator T : H₁ → H₂ coincides with the orthogonal complement of the range of the adjoint; that is,[14] ker(T) = ran(T*)⊥. Von Neumann's theorem states that T*T and TT* are self-adjoint, and that I + T*T and I + TT* both have bounded inverses.[15] If T* has trivial kernel, T has dense range (by the above identity). In contrast to the bounded case, it is not necessary that (TS)* = S*T*, since, for example, it is even possible that (TS)* does not exist. This is, however, the case if, for example, T is bounded.[16]

A densely defined, closed operator T is called normal if T*T = TT*; equivalently, if the domain of T equals the domain of T* and ‖Tx‖ = ‖T*x‖ for every x in this common domain.[17] Every self-adjoint operator is normal.

Let T : B₁ → B₂ be an operator between Banach spaces. Then the transpose (or dual) ᵗT : B₂* → B₁* of T is the linear operator satisfying ⟨Tx, y′⟩ = ⟨x, (ᵗT)y′⟩ for all x ∈ B₁ and y′ ∈ B₂*. Here, we used the notation ⟨x, x′⟩ = x′(x).[18] The necessary and sufficient condition for the transpose of T to exist is that T is densely defined, for essentially the same reason as for adjoints, as discussed above. For any Hilbert space H, there is the anti-linear isomorphism J : H* → H given by Jf = y, where f(x) = ⟨x | y⟩_H for x ∈ H. Through this isomorphism, the transpose ᵗT relates to the adjoint T* in the following way:[19] T* = J₁(ᵗT)J₂⁻¹, where Jⱼ : Hⱼ* → Hⱼ. (For the finite-dimensional case, this corresponds to the fact that the adjoint of a matrix is its conjugate transpose.) Note that this gives the definition of the adjoint in terms of a transpose.

Closed linear operators are a class of linear operators on Banach spaces. They are more general than bounded operators, and therefore not necessarily continuous, but they still retain nice enough properties that one can define the spectrum and (with certain assumptions) a functional calculus for such operators.
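The graph characterization can be sanity-checked in finite dimensions, where every operator is bounded and the adjoint of a real matrix is its transpose. The following sketch (dimensions chosen arbitrarily) verifies that J(Γ(T)) is orthogonal to Γ(T*); since the two dimensions add up to that of the whole space, the orthogonal complement of J(Γ(T)) is exactly the graph of T*.

    import numpy as np

    # Real matrix T : R^4 -> R^3, so T* is the transpose T^T.
    rng = np.random.default_rng(0)
    m, n = 3, 4
    T = rng.standard_normal((m, n))

    # Columns spanning J(Gamma(T)) in H2 (+) H1, with J(x (+) y) = -y (+) x:
    J_graph = np.vstack([-T, np.eye(n)])
    # Columns spanning Gamma(T*) = {(y, T^T y)} in H2 (+) H1:
    graph_Tstar = np.vstack([np.eye(m), T.T])

    # <(-Tx, x), (y, T^T y)> = -<Tx, y> + <x, T^T y> = 0 for all x, y:
    print(np.allclose(J_graph.T @ graph_Tstar, 0.0))  # True
    # dim J(Gamma(T)) = 4 and dim Gamma(T*) = 3 sum to 7 = dim(H2 (+) H1),
    # so Gamma(T*) is the full orthogonal complement of J(Gamma(T)).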
Many important linear operators which fail to be bounded turn out to be closed, such as the derivative and a large class of differential operators. Let X, Y be two Banach spaces. A linear operator A : D(A) ⊆ X → Y is closed if for every sequence {xₙ} in D(A) converging to x in X such that Axₙ → y ∈ Y as n → ∞ one has x ∈ D(A) and Ax = y. Equivalently, A is closed if its graph is closed in the direct sum X ⊕ Y. Given a linear operator A, not necessarily closed, if the closure of its graph in X ⊕ Y happens to be the graph of some operator, that operator is called the closure of A, and we say that A is closable. Denote the closure of A by A̅. It follows that A is the restriction of A̅ to D(A). A core (or essential domain) of a closable operator is a subset C of D(A) such that the closure of the restriction of A to C is A̅.

Consider the derivative operator A = d/dx, where X = Y = C([a, b]) is the Banach space of all continuous functions on an interval [a, b]. If one takes its domain D(A) to be C¹([a, b]), then A is a closed operator which is not bounded.[20] On the other hand, if D(A) = C∞([a, b]), then A will no longer be closed, but it will be closable, with the closure being its extension defined on C¹([a, b]).

An operator T on a Hilbert space is symmetric if and only if for each x and y in the domain of T we have ⟨Tx | y⟩ = ⟨x | Ty⟩. A densely defined operator T is symmetric if and only if it agrees with its adjoint T* restricted to the domain of T, in other words when T* is an extension of T.[21] In general, if T is densely defined and symmetric, the domain of the adjoint T* need not equal the domain of T. If T is symmetric and the domain of T and the domain of the adjoint coincide, then we say that T is self-adjoint.[22] Note that, when T is self-adjoint, the existence of the adjoint implies that T is densely defined, and since T* is necessarily closed, T is closed.

A densely defined operator T is symmetric if the subspace Γ(T) (defined in a previous section) is orthogonal to its image J(Γ(T)) under J (where J(x, y) := (y, −x)).[nb 6] Equivalently, an operator T is self-adjoint if it is densely defined, closed, symmetric, and satisfies the fourth condition: both operators T − i and T + i are surjective, that is, map the domain of T onto the whole space H. In other words: for every x in H there exist y and z in the domain of T such that Ty − iy = x and Tz + iz = x.[23] An operator T is self-adjoint if the two subspaces Γ(T) and J(Γ(T)) are orthogonal and their sum is the whole space H ⊕ H.[12] This approach does not cover non-densely defined closed operators. Non-densely defined symmetric operators can be defined directly or via graphs, but not via adjoint operators.

A symmetric operator is often studied via its Cayley transform. An operator T on a complex Hilbert space is symmetric if and only if the number ⟨Tx | x⟩ is real for all x in the domain of T.[21] A densely defined closed symmetric operator T is self-adjoint if and only if T* is symmetric.[24] It may happen that it is not.[25][26] A densely defined operator T is called positive[8] (or nonnegative[27]) if its quadratic form is nonnegative, that is, ⟨Tx | x⟩ ≥ 0 for all x in the domain of T. Such an operator is necessarily symmetric. The operator T*T is self-adjoint[28] and positive[8] for every densely defined, closed T.
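In finite dimensions the domain subtleties disappear and "symmetric" coincides with "self-adjoint", but the defining identity ⟨Tx | y⟩ = ⟨x | Ty⟩ and positivity can still be illustrated concretely. A small sketch using the discrete 1-D Laplacian, a standard symmetric positive matrix (the size is arbitrary):

    import numpy as np

    # Discrete 1-D Laplacian with Dirichlet boundaries: a finite-dimensional
    # stand-in for the differential operator -d^2/dx^2.
    N = 6
    T = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(N), rng.standard_normal(N)

    print(np.isclose((T @ x) @ y, x @ (T @ y)))  # <Tx|y> == <x|Ty>: True
    print(np.all(np.linalg.eigvalsh(T) > 0))     # <Tx|x> > 0 for x != 0: True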
The spectral theorem applies to self-adjoint operators[29] and, moreover, to normal operators,[30][31] but not to densely defined, closed operators in general, since in this case the spectrum can be empty.[32][33] A symmetric operator defined everywhere is closed, therefore bounded,[6] which is the Hellinger–Toeplitz theorem.[34]

By definition, an operator T is an extension of an operator S if Γ(S) ⊆ Γ(T).[35] An equivalent direct definition: for every x in the domain of S, x belongs to the domain of T and Sx = Tx.[5][35] Note that an everywhere defined extension exists for every operator, which is a purely algebraic fact explained at Discontinuous linear map § General existence theorem and based on the axiom of choice. If the given operator is not bounded, then the extension is a discontinuous linear map. It is of little use, since it cannot preserve important properties of the given operator (see below), and is usually highly non-unique.

An operator T is called closable if it satisfies the following equivalent conditions:[6][35][36] T has a closed extension; the closure of the graph of T is the graph of some operator; for every sequence {xₙ} in the domain of T with xₙ → 0 and Txₙ → y, it holds that y = 0. Not all operators are closable.[37] A closable operator T has a least closed extension T̅, called the closure of T. The closure of the graph of T is equal to the graph of T̅.[6][35] Other, non-minimal closed extensions may exist.[25][26] A densely defined operator T is closable if and only if T* is densely defined. In this case T̅ = T** and (T̅)* = T*.[12][38] If S is densely defined and T is an extension of S, then S* is an extension of T*.[39] Every symmetric operator is closable.[40]

A symmetric operator is called maximal symmetric if it has no symmetric extensions except for itself.[21] Every self-adjoint operator is maximal symmetric.[21] The converse is false.[41] An operator is called essentially self-adjoint if its closure is self-adjoint.[40] An operator is essentially self-adjoint if and only if it has one and only one self-adjoint extension.[24] A symmetric operator may have more than one self-adjoint extension, and even a continuum of them.[26] A densely defined, symmetric operator T is essentially self-adjoint if and only if both operators T − i and T + i have dense range.[42] Let T be a densely defined operator. Denoting the relation "T is an extension of S" by S ⊂ T (a conventional abbreviation for Γ(S) ⊆ Γ(T)), one has the following.[43] If T is symmetric, then T ⊂ T** ⊂ T*. If T is closed and symmetric, then T = T** ⊂ T*. If T is self-adjoint, then T = T** = T*. If T is essentially self-adjoint, then T ⊂ T** = T*.

The class of self-adjoint operators is especially important in mathematical physics. Every self-adjoint operator is densely defined, closed and symmetric. The converse holds for bounded operators but fails in general. Self-adjointness is substantially more restricting than these three properties. The famous spectral theorem holds for self-adjoint operators. In combination with Stone's theorem on one-parameter unitary groups, it shows that self-adjoint operators are precisely the infinitesimal generators of strongly continuous one-parameter unitary groups; see Self-adjoint operator § Self-adjoint extensions in quantum mechanics. Such unitary groups are especially important for describing time evolution in classical and quantum mechanics. This article incorporates material from Closed operator on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Unbounded_operator
An apostolic nuncio (Latin: nuntius apostolicus; also known as a papal nuncio or simply as a nuncio) is an ecclesiastical diplomat, serving as an envoy or a permanent diplomatic representative of the Holy See to a state or to an international organization. A nuncio is appointed by and represents the Holy See, and is the head of the diplomatic mission, called an apostolic nunciature, which is the equivalent of an embassy. The Holy See is legally distinct from the Vatican City and the Catholic Church. In modern times, a nuncio is usually an archbishop.

An apostolic nuncio is generally equivalent in rank to an ambassador extraordinary and plenipotentiary, although in Catholic countries the nuncio often ranks above ambassadors in diplomatic protocol. A nuncio performs the same functions as an ambassador and has the same diplomatic privileges. Under the 1961 Vienna Convention on Diplomatic Relations, to which the Holy See is a party, a nuncio is an ambassador like those from any other country. The Vienna Convention allows the host state to grant seniority of precedence to the nuncio over others of ambassadorial rank accredited to the same country, and may grant the deanship of that country's diplomatic corps to the nuncio regardless of seniority.[1] The representative of the Holy See in some situations is called a delegate or, in the case of the United Nations, a permanent observer. In the Holy See hierarchy, these usually rank equally with a nuncio, but they do not have formal diplomatic status, though in some countries they have some diplomatic privileges. In addition, the nuncio serves as the liaison between the Holy See and the Church in that particular nation, supervising the diocesan episcopate (usually a national or multinational conference of bishops, which has its own chairman, elected by its members). The nuncio has an important role in the selection of bishops.

The name "nuncio" is derived from the ancient Latin word nuntius, meaning "envoy" or "messenger". Since such envoys are accredited to the Holy See as such and not to the State of Vatican City, the term "nuncio" (versus "ambassador") emphasizes the unique nature of the diplomatic mission.[2] The 1983 Code of Canon Law claims the "innate right" to send and receive delegates independent of interference from non-ecclesiastical civil power. Canon law only recognizes international-law limitations on this right.[2]

Article 16 of the Vienna Convention on Diplomatic Relations expressly preserves any practice of the receiving state that gives precedence to the representative of the Holy See. In accordance with this article, many states (even not predominantly Catholic ones, such as Germany and Switzerland, and including the great majority in central and western Europe and in the Americas) give precedence to the nuncio over other diplomatic representatives, according him the position of Dean of the Diplomatic Corps, reserved in other countries for the longest-serving resident ambassador. Holy See representatives called permanent observers are accredited to several international organisations, including offices or agencies of the United Nations and other organizations, whether specialized in their mission, regional, or both. A permanent observer of the Holy See is always a cleric, often a titular archbishop with the rank of nuncio, but there has been considerable variation between offices and over time.
https://en.wikipedia.org/wiki/Nuncio
In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another. Tunnels can, for example, allow private network communications to be sent across a public network (such as the Internet), or allow one network protocol to be carried over an incompatible network, through a process called encapsulation. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel.

Tunneling protocols work by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol. A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as running IPv6 over IPv4. Another important use is to provide services that are impractical or unsafe to offer using only the underlying network services, such as providing a corporate network address to a remote user whose physical network address is not part of the corporate network.

Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall would normally block, but "wrapped" inside a protocol that the firewall does not block, such as HTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can function to get around the intended firewall policy (or any set of interlocked firewall policies). Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP connection to a particular server:port, and relays data between that server:port and the client connection.[1] Because this creates a security hole, CONNECT-capable HTTP proxies commonly restrict access to the CONNECT method. The proxy allows connections only to specific ports, such as 443 for HTTPS.[2] Other tunneling methods able to bypass network firewalls make use of different protocols, such as DNS,[3] MQTT,[4] and SMS.[5]

As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a protocol running over IP (IP protocol number 47), often serves to carry IP packets with RFC 1918 private addresses over the Internet, using delivery packets with public IP addresses. In this case, the delivery and payload protocols are the same, but the payload addresses are incompatible with those of the delivery network. It is also possible to establish a connection using the data link layer. The Layer 2 Tunneling Protocol (L2TP) allows the transmission of frames between two nodes. A tunnel is not encrypted by default: the TCP/IP protocol chosen determines the level of security. SSH uses port 22 to enable data encryption of payloads being transmitted over a public network (such as the Internet) connection, thereby providing VPN functionality. IPsec has an end-to-end Transport Mode, but can also operate in a tunneling mode through a trusted security gateway. To understand a particular protocol stack imposed by tunneling, network engineers must understand both the payload and delivery protocol sets.
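A minimal sketch of the CONNECT flow using Python's standard library; the proxy host, port, and target below are placeholders. http.client's set_tunnel() sends the CONNECT request to the proxy before issuing the actual request through the established tunnel.

    import http.client

    PROXY_HOST, PROXY_PORT = "proxy.example.com", 8080  # hypothetical proxy

    # set_tunnel() makes the client first send "CONNECT www.example.com:443"
    # to the proxy; once the proxy answers 200, the TLS handshake and the
    # HTTP request are relayed opaquely over the proxy's TCP connection.
    conn = http.client.HTTPSConnection(PROXY_HOST, PROXY_PORT, timeout=10)
    conn.set_tunnel("www.example.com", 443)
    conn.request("GET", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    conn.close()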
Tunneling a TCP-encapsulating payload (such as PPP) over a TCP-based connection (such as SSH's port forwarding) is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance, known as the TCP meltdown problem,[6][7] which is why virtual private network (VPN) software may instead use a protocol simpler than TCP for the tunnel connection. TCP meltdown occurs when a TCP connection is stacked on top of another. The underlying layer may detect a problem and attempt to compensate, and the layer above it then overcompensates because of that; this overcompensation causes the delays and degraded transmission performance.

A Secure Shell (SSH) tunnel consists of an encrypted tunnel created through an SSH protocol connection. Users may set up SSH tunnels to transfer unencrypted traffic over a network through an encrypted channel. It is a software-based approach to network security and the result is transparent encryption.[8] For example, Microsoft Windows machines can share files using the Server Message Block (SMB) protocol, a non-encrypted protocol. If one were to mount a Microsoft Windows file system remotely through the Internet, someone snooping on the connection could see transferred files. To mount the Windows file system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote fileserver through an encrypted channel. Even though the SMB protocol itself contains no encryption, the encrypted SSH channel through which it travels offers security.

Once an SSH connection has been established, the tunnel starts with SSH listening to a port on the remote or local host. Any connections to it are forwarded to the specified address and port, originating from the opposing (remote or local, as previously) host. The TCP meltdown problem is often not a problem when using OpenSSH's port forwarding, because many use cases do not entail TCP-over-TCP tunneling; the meltdown is avoided because the OpenSSH client processes the local, client-side TCP connection in order to get to the actual payload that is being sent, and then sends that payload directly through the tunnel's own TCP connection to the server side, where the OpenSSH server similarly "unwraps" the payload in order to "wrap" it up again for routing to its final destination.[9] Naturally, this wrapping and unwrapping also occurs in the reverse direction of the bidirectional tunnel.

SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services, so long as a site allows outgoing connections. For example, an organization may prohibit a user from accessing Internet web pages (port 80) directly without passing through the organization's proxy filter (which provides the organization with a means of monitoring and controlling what the user sees through the web). But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To access the remote web server, users would point their browser to the local port at http://localhost/

Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5 proxy. In this case users can configure their applications to use their local SOCKS proxy server. This gives more flexibility than creating an SSH tunnel to a single port as previously described.
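The listen-and-relay pattern at the heart of local port forwarding can be sketched in a few lines of Python. This is not SSH itself (there is no encryption or authentication here); it is only the plain-TCP forwarding step that an SSH client performs on top of its encrypted channel, with the port and target chosen arbitrarily.

    import socket
    import threading

    LISTEN_PORT = 8080                             # assumed-free local port
    TARGET_HOST, TARGET_PORT = "example.com", 80   # placeholder destination

    def pump(src, dst):
        # Copy bytes one way until the source side closes.
        try:
            while (data := src.recv(4096)):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    with socket.create_server(("127.0.0.1", LISTEN_PORT)) as srv:
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((TARGET_HOST, TARGET_PORT))
            # One thread per direction gives a full-duplex relay.
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()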
SOCKS can free the user from the limitations of connecting only to a predefined remote port and server. If an application does not support SOCKS, a proxifier can be used to redirect the application to the local SOCKS proxy server. Some proxifiers, such as Proxycap, support SSH directly, thus avoiding the need for an SSH client. In recent versions of OpenSSH it is even possible to create layer-2 or layer-3 tunnels if both ends have enabled such tunneling capabilities. This creates tun (layer 3, default) or tap (layer 2) virtual interfaces on both ends of the connection, which allows normal network management and routing to be used; when used on routers, the traffic for an entire subnetwork can be tunneled. A pair of tap virtual interfaces functions like an Ethernet cable connecting both ends of the connection and can join kernel bridges. Over the years, tunneling and data encapsulation in general have frequently been adopted for malicious reasons, in order to communicate covertly out of a protected network. In this context, known tunnels involve protocols such as HTTP,[10] SSH,[11] DNS,[12][13] and MQTT.[14]
https://en.wikipedia.org/wiki/Secure_Shell_tunneling
Ensemble forecasting is a method used in or within numerical weather prediction. Instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere. Ensemble forecasting is a form of Monte Carlo analysis. The multiple simulations are conducted to account for the two usual sources of uncertainty in forecast models: (1) the errors introduced by the use of imperfect initial conditions, amplified by the chaotic nature of the equations of the atmosphere, which is often referred to as sensitive dependence on initial conditions; and (2) errors introduced because of imperfections in the model formulation, such as the approximate mathematical methods used to solve the equations. Ideally, the verified future atmospheric state should fall within the predicted ensemble spread, and the amount of spread should be related to the uncertainty (error) of the forecast. In general, this approach can be used to make probabilistic forecasts of any dynamical system, not just for weather prediction.

Today ensemble predictions are commonly made at most of the major operational weather prediction facilities worldwide. Experimental ensemble forecasts are made at a number of universities, such as the University of Washington, and ensemble forecasts in the US are also generated by the US Navy and Air Force. There are various ways of viewing the data, such as spaghetti plots, ensemble means or postage stamps, where a number of different results from the model runs can be compared.

As proposed by Edward Lorenz in 1963, it is impossible for long-range forecasts (those made more than two weeks in advance) to predict the state of the atmosphere with any degree of skill, owing to the chaotic nature of the fluid dynamics equations involved.[1] Furthermore, existing observation networks have limited spatial and temporal resolution (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real time, even with the use of supercomputers.[2] The practical importance of ensemble forecasts derives from the fact that in a chaotic and hence nonlinear system, the rate of growth of forecast error is dependent on starting conditions. An ensemble forecast therefore provides a prior estimate of state-dependent predictability, i.e. an estimate of the types of weather that might occur, given inevitable uncertainties in the forecast initial conditions and in the accuracy of the computational representation of the equations. These uncertainties limit forecast model accuracy to about six days into the future.[3] The first operational ensemble forecasts were produced for sub-seasonal timescales in 1985.[4] However, it was realised that the philosophy underpinning such forecasts was also relevant on shorter timescales, where predictions had previously been made by purely deterministic means.
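The core idea, many runs from slightly different initial states, fits in a short script. A toy illustration (not an operational system) using the Lorenz-63 equations, the classic chaotic model; the perturbation size, member count, and integration length are arbitrary choices:

    import numpy as np

    def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def integrate(s, dt=0.01, steps=1500):
        # Fixed-step fourth-order Runge-Kutta integration.
        for _ in range(steps):
            k1 = lorenz63(s)
            k2 = lorenz63(s + 0.5 * dt * k1)
            k3 = lorenz63(s + 0.5 * dt * k2)
            k4 = lorenz63(s + dt * k3)
            s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        return s

    rng = np.random.default_rng(0)
    analysis = np.array([1.0, 1.0, 1.0])
    # 20-member ensemble: perturb the "analysis" by ~1e-4 per component.
    ens = np.array([integrate(analysis + 1e-4 * rng.standard_normal(3))
                    for _ in range(20)])
    print("ensemble mean:  ", ens.mean(axis=0))
    print("ensemble spread:", ens.std(axis=0))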
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed a stochastic dynamic model that produced means and variances for the state of the atmosphere.[5] Although these Monte Carlo simulations showed skill, in 1974 Cecil Leith revealed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.[6] It was not until 1992 that ensemble forecasts began being prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP).

There are two main sources of uncertainty that must be accounted for when making an ensemble weather forecast: initial condition uncertainty and model uncertainty.[7] Initial condition uncertainty arises due to errors in the estimate of the starting conditions for the forecast, both because of limited observations of the atmosphere and because of uncertainties involved in using indirect measurements, such as satellite data, to measure the state of atmospheric variables. Initial condition uncertainty is represented by perturbing the starting conditions between the different ensemble members. This explores the range of starting conditions consistent with our knowledge of the current state of the atmosphere, together with its past evolution. There are a number of ways to generate these initial condition perturbations. The ECMWF model, the Ensemble Prediction System (EPS),[8] uses a combination of singular vectors and an ensemble of data assimilations (EDA) to simulate the initial probability density.[9] The singular-vector perturbations are more active in the extra-tropics, while the EDA perturbations are more active in the tropics. The NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding[10][11] (a minimal sketch appears below). Perturbing an initial state derived from satellite measurements in such a way that the perturbed states remain physical is a difficult task; deep learning has also produced techniques that perturb the complex initial state in a nearly physical way using flow matching.[12]

Model uncertainty arises due to the limitations of the forecast model. The process of representing the atmosphere in a computer model involves many simplifications, such as the development of parametrisation schemes, which introduce errors into the forecast. Several techniques to represent model uncertainty have been proposed. When developing a parametrisation scheme, many new parameters are introduced to represent simplified physical processes. These parameters may be very uncertain. For example, the "entrainment coefficient" represents the turbulent mixing of dry environmental air into a convective cloud, and so represents a complex physical process using a single number. In a perturbed-parameter approach, uncertain parameters in the model's parametrisation schemes are identified and their values changed between ensemble members.
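A schematic of the breeding cycle on the same toy model (forward Euler is used to keep this sketch self-contained; the amplitude and cycle length are arbitrary):

    import numpy as np

    def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def run(s, dt=0.005, steps=200):
        for _ in range(steps):
            s = s + dt * lorenz63(s)  # forward Euler, adequate for a sketch
        return s

    def breed(base, amp=1e-3, cycles=10):
        # Breeding cycle: run a control and a perturbed forecast, rescale the
        # difference back to the initial amplitude, and repeat. The surviving
        # perturbation aligns with the fastest-growing error directions.
        pert = amp * np.random.default_rng(1).standard_normal(3)
        for _ in range(cycles):
            control = run(base)
            perturbed = run(base + pert)
            diff = perturbed - control
            pert = amp * diff / np.linalg.norm(diff)  # rescale
            base = control                            # next cycle starts here
        return pert

    bv = breed(np.array([1.0, 1.0, 1.0]))
    print("bred-vector direction:", bv / np.linalg.norm(bv))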
While in probabilistic climate modelling, such as climateprediction.net, these parameters are often held constant globally and throughout the integration,[13] in modern numerical weather prediction it is more common to stochastically vary the value of the parameters in time and space.[14] The degree of parameter perturbation can be guided using expert judgement,[15] or by directly estimating the degree of parameter uncertainty for a given model.[16]

A traditional parametrisation scheme seeks to represent the average effect of the sub-grid-scale motion (e.g. convective clouds) on the resolved-scale state (e.g. the large-scale temperature and wind fields). A stochastic parametrisation scheme recognises that there may be many sub-grid-scale states consistent with a particular resolved-scale state. Instead of predicting the most likely sub-grid-scale motion, a stochastic parametrisation scheme represents one possible realisation of the sub-grid. It does this by including random numbers in the equations of motion, sampling from the probability distribution assigned to uncertain processes (see the sketch below). Stochastic parametrisations have significantly improved the skill of weather forecasting models and are now used in operational forecasting centres worldwide.[17] Stochastic parametrisations were first developed at the European Centre for Medium-Range Weather Forecasts.[18]

When many different forecast models are used to generate a forecast, the approach is termed multi-model ensemble forecasting. This method of forecasting can improve forecasts when compared to a single model-based approach.[19] When the models within a multi-model ensemble are adjusted for their various biases, this process is known as "superensemble forecasting". This type of forecast significantly reduces errors in model output.[20] When models of different physical processes are combined, such as combinations of atmospheric, ocean and wave models, the multi-model ensemble is called a hyper-ensemble.[21]

The ensemble forecast is usually evaluated by comparing the ensemble average of the individual forecasts for one forecast variable to the observed value of that variable (the "error"). This is combined with consideration of the degree of agreement between the various forecasts within the ensemble system, as represented by their overall standard deviation or "spread". Ensemble spread can be visualised through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small, such that the observed atmospheric state falls outside of the ensemble forecast. This can lead the forecaster to be overconfident in their forecast.[22] This problem becomes particularly severe for forecasts of the weather about 10 days in advance,[23] particularly if model uncertainty is not accounted for in the forecast. The spread of the ensemble forecast indicates how confident the forecaster can be in his or her prediction. When the ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the forecast in general.[22] When the spread is large, this indicates more uncertainty in the prediction.
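How a stochastically varied parameter differs from a fixed one can be shown with a deliberately trivial one-variable "model"; the decay coefficient stands in for something like an entrainment coefficient, and the perturbation amplitude is arbitrary:

    import numpy as np

    rng = np.random.default_rng(2)

    def forecast(coeff0, steps=100, dt=0.1, stochastic=False):
        # Toy scalar model: exponential relaxation with an uncertain rate.
        state = 1.0
        for _ in range(steps):
            coeff = coeff0
            if stochastic:
                # Stochastic parametrisation: redraw a multiplicative
                # perturbation of the parameter at every time step.
                coeff *= np.exp(0.3 * rng.standard_normal())
            state += dt * (-coeff * state)
        return state

    members = [forecast(0.5, stochastic=True) for _ in range(20)]
    print("stochastic-physics ensemble mean/std:",
          np.mean(members), np.std(members))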
Ideally, a spread-skill relationship should exist, whereby the spread of the ensemble is a good predictor of the expected error in the ensemble mean. If the forecast is reliable, the observed state will behave as if it is drawn from the forecast probability distribution. Reliability (or calibration) can be evaluated by comparing the standard deviation of the error in the ensemble mean with the forecast spread: for a reliable forecast, the two should match, both at different forecast lead times and for different locations.[24]

The reliability of forecasts of a specific weather event can also be assessed. For example, if 30 of 50 members indicated greater than 1 cm rainfall during the next 24 h, the probability of exceeding 1 cm could be estimated to be 60%. The forecast would be considered reliable if, considering all the past situations when a 60% probability was forecast, the rainfall actually exceeded 1 cm on 60% of those occasions. In practice, the probabilities generated from operational weather ensemble forecasts are not highly reliable, though with a set of past forecasts (reforecasts or hindcasts) and observations, the probability estimates from the ensemble can be adjusted to ensure greater reliability.

Another desirable property of ensemble forecasts is resolution. This is an indication of how much the forecast deviates from the climatological event frequency; provided that the ensemble is reliable, increasing this deviation will increase the usefulness of the forecast. This forecast quality can also be considered in terms of sharpness, or how small the spread of the forecast is. The key aim of a forecaster should be to maximise sharpness while maintaining reliability.[25] Forecasts at long leads will inevitably not be particularly sharp (have particularly high resolution), for the inevitable (albeit usually small) errors in the initial condition will grow with increasing forecast lead until the expected difference between two model states is as large as the difference between two random states from the forecast model's climatology.

If ensemble forecasts are to be used for predicting probabilities of observed weather variables, they typically need calibration in order to create unbiased and reliable forecasts. For forecasts of temperature, one simple and effective method of calibration is linear regression, often known in this context as model output statistics (a sketch follows below). The linear regression model takes the ensemble mean as a predictor for the real temperature, ignores the distribution of ensemble members around the mean, and predicts probabilities using the distribution of residuals from the regression. In this calibration setup, the value of the ensemble in improving the forecast is that the ensemble mean typically gives a better forecast than any single ensemble member would, and not that any information is contained in the width or shape of the distribution of the members around the mean. However, in 2004 a generalisation of linear regression (now known as nonhomogeneous Gaussian regression) was introduced[26] that uses a linear transformation of the ensemble spread to give the width of the predictive distribution, and it was shown that this can lead to forecasts with higher skill than those based on linear regression alone. This proved for the first time that information in the shape of the distribution of the members of an ensemble around the mean, in this case summarized by the ensemble spread, can be used to improve forecasts relative to linear regression.
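A minimal sketch of this regression-based calibration on synthetic data (the bias, noise level, and sample size are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic training set: 500 past forecast/observation pairs, with the
    # ensemble mean acting as a biased, noisy predictor of the truth.
    truth = rng.normal(15.0, 5.0, size=500)                   # observed temps
    ens_mean = 0.8 * truth + 2.0 + rng.normal(0.0, 1.5, 500)  # ensemble means

    # Model output statistics: regress observations on the ensemble mean.
    slope, intercept = np.polyfit(ens_mean, truth, deg=1)
    residual_std = np.std(truth - (slope * ens_mean + intercept))

    # Calibrated forecast for a new ensemble mean of 18 degrees: a Gaussian
    # centred on the regression prediction, with the residual spread.
    mu = slope * 18.0 + intercept
    print(f"calibrated forecast: {mu:.1f} +/- {residual_std:.1f} (1 sigma)")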
Whether or not linear regression can be beaten by using the ensemble spread in this way varies, depending on the forecast system, forecast variable and lead time. In addition to being used to improve predictions of uncertainty, the ensemble spread can also be used as a predictor for the likely size of changes in the mean forecast from one forecast to the next.[27] This works because, in some ensemble forecast systems, narrow ensembles tend to precede small changes in the mean, while wide ensembles tend to precede larger changes in the mean. This has applications in the trading industries, for whom understanding the likely sizes of future forecast changes can be important.

The Observing System Research and Predictability Experiment (THORPEX) is a 10-year international research and development programme to accelerate improvements in the accuracy of one-day to two-week high-impact weather forecasts for the benefit of society, the economy and the environment. It establishes an organizational framework that addresses weather research and forecast problems whose solutions will be accelerated through international collaboration among academic institutions, operational forecast centres and users of forecast products. One of its key components is the THORPEX Interactive Grand Global Ensemble (TIGGE), a World Weather Research Programme project to accelerate improvements in the accuracy of one-day to two-week high-impact weather forecasts for the benefit of humanity. Centralized archives of ensemble model forecast data from many international centres are used to enable extensive data sharing and research.
https://en.wikipedia.org/wiki/Ensemble_forecasting
Essentially contested concept refers to abstract terms or phrases that carry value judgements which can be contested. The term essentially contested concept was proposed to facilitate an understanding of the different interpretations of abstractions that have qualitative and evaluative notions,[1] such as "art", "philanthropy",[2] "power",[3] and "social justice". The notion of the essentially contested concept was proposed in 1956 by Walter Bryce Gallie.[4][5]

Essentially contested concepts involve agreed-on abstract concepts or phrases whose usage and interpretation can nevertheless be disputed (e.g. "social justice", "This picture is a work of art").[4][6] They are abstract concepts the "proper use of which inevitably involves endless disputes about their proper uses on the part of their users",[7] and these disputes "cannot be settled by appeal to empirical evidence, linguistic usage, or the canons of logic alone".[8] Usually, essentially contested concepts are found in the social sciences, where confusion arises due to experts using terminology inconsistently and often failing to specify the relationship between an abstract term and the meaning of that term.[9] For example, in historical studies it has been observed that there are no particular standards for historical topics such as religion, art, science, democracy, and social justice, as these are by their nature "essentially contested" fields, such that they require diverse tools particular to each field in order to interpret topics from those subjects. When scholars talk about "religion", "art", "science", "democracy", etc., there is no one generally accepted definition of such terms, and thus they are essentially contested by default among scholars themselves.[10]

Although Gallie's term is widely used to denote imprecise use of technical terminology, it has a far more specific application. Although the notion could be misleadingly and evasively used to justify "agreeing to disagree",[11] the term offers something more valuable: the disputes that attend an essentially contested concept are driven by substantive disagreements over a range of different, entirely reasonable (although perhaps mistaken) interpretations of a mutually agreed-upon archetypical notion, such as the legal precept "treat like cases alike; and treat different cases differently", with "each party [continuing] to defend its case with what it claims to be convincing arguments, evidence and other forms of justification".[13]

Gallie speaks of how "This picture is painted in oils" can be successfully contested if the work is actually painted in tempera,[14] while "This picture is a work of art" may meet strong opposition due to disputes over what "work of art" denotes. He suggests three avenues whereby one might resolve such disputes; failing these, the dispute probably centres on polysemy,[15] and a number of critical questions must then be asked. Barry Clarke suggested that, in order to determine whether a particular dispute is a consequence of true polysemy or of inadvertent homonymy, one should seek to "locate the source of the dispute"; in doing so, one might find that the source is "within the concept itself", or "[within] some underlying non-conceptual disagreement between the contestants".[17] Clarke also drew attention to the substantial differences between the expressions "essentially contested" and "essentially contestable", which were being used extensively within the literature as if they were interchangeable.
Clarke argued that to state that a concept is merely "contested" is to "attribute significance to the contest rather than to the concept". Yet to state that a concept is "contestable" (rather than "merely contested") is to "attribute some part of any contest to the concept"; namely, "to claim that some feature or property of the concept makes it polysemantic, and that [from this] the concept contains some internal conflict of ideas"; and it is this state of affairs that provides the "essentially contestable concept" with its "inherent potential [for] generating disputes".[18]

In 1956 Gallie proposed a set of seven conditions for the existence of an essentially contested concept.[19] Gallie was very specific about the limits of his enterprise: it dealt exclusively with abstract, qualitative notions, such as art, religion, science, democracy, and social justice[20] (and, if Gallie's choices are contrasted with negatively regarded concepts such as evil, disease, superstition, etc., it is clear that the concepts he chose were exclusively positively regarded). Freeden remarks that "not all essentially contested concepts signify valued achievements; they may equally signify disapproved and denigrated phenomena",[21] and Gerring[22] asks us to imagine just how difficult it would be to "[try] to craft definitions of slavery, fascism, terrorism, or genocide without recourse to 'pejorative' attributes." These features distinguish Gallie's "essentially contested concepts" from others "which can be shown, as a result of analysis or experiment, to be radically confused";[23] or, as Gray[24] would have it, they are the features that relate to the task of distinguishing the "general words, which really denote an essentially contested concept" from those other "general words, whose uses conceal a diversity of distinguishable concepts." Gallie's original seven features have since been extended by various scholars from across multiple disciplines.

Scholars such as H. L. A. Hart, John Rawls, Ronald Dworkin, and Steven Lukes have variously embellished Gallie's proposal by arguing that certain of the difficulties encountered with Gallie's proposition may be due to the unintended conflation of two separate domains associated with the term concept. In essence, Hart (1961), Rawls (1971), Dworkin (1972), and Lukes (1974) distinguished between the "unity" of a notion and the "multiplicity" of its possible instantiations. From their work it is easy to understand the issue as one of determining whether there is a single notion that has a number of different instantiations, or whether there is more than one notion, each of which is reflected in a different usage.
In a section of his 1972 article in The New York Review of Books, Dworkin used the example of "fairness" to isolate and elaborate the difference between a concept (suum cuique) and its conception (various instantiations, for example utilitarian ethics).[37] He supposes that he has instructed his children not to treat others "unfairly" and asks us to recognize that, whilst he would undoubtedly have had particular "examples" (of the sorts of conduct he was intending to discourage) in mind at the time he spoke to his children, whatever it was that he meant when he issued such instructions was not confined to those "examples" alone, for two reasons: he would expect his children to apply the instruction to situations he had not and could not have contemplated, and he stands ready to concede that some particular act he had thought fair when he spoke might later be shown to have been unfair, or vice versa. Dworkin argues that this admission of error would not entail any "change" to his original instructions, because the true meaning of his instructions was that "[he] meant the family to be guided by the concept of fairness, not by any specific conception of fairness [that he] might have had in mind". Therefore, he argues, his instructions do, in fact, "cover" this new case.

Exploring what he considers to be the "crucial distinction" between the overall concept of "fairness" and some particular and specific conception of "fairness", he asks us to imagine a group whose members share the view that certain acts are unfair.[38] The members of this group "agree on a great number of standard cases of unfairness and use these as benchmarks against which to test other, more controversial cases". In these circumstances, says Dworkin, "the group has a concept of unfairness, and its members may appeal to that concept in moral instruction or argument."[39] However, the members may still disagree over many of these "controversial cases"; and differences of this sort indicate that members have, or act upon, entirely different theories of why and how each of the "standard cases" are, in fact, genuine acts of "unfairness". And, because each considers that certain principles "[which] must be relied upon to show that a particular division or attribution is unfair" are more "fundamental" than certain other principles, it can be said that members of the group have different conceptions of "fairness".

Consequently, those responsible for giving "instructions", and those responsible for setting "standards" of "fairness", in this community may be doing one of two things: appealing to the concept of fairness, or laying down a particular conception of fairness. It is important to recognize that this is not just a case of delivering two different instructions; it is a case of delivering two different kinds of instruction. As a consequence, according to Dworkin, whenever an appeal is made to "fairness", a moral issue is raised; and whenever a conception of "fairness" is laid down, an attempt is being made to answer that moral issue.
Whilst Gallie's expression "essentially contested concepts" precisely denotes those "essentially questionable and corrigible concepts" which "are permanently and essentially subject to revision and question",[42] close examination of the wide, varied, and imprecise applications of Gallie's term since 1956, by those who have ascribed their own literal meaning to it without ever consulting Gallie's work, has led many philosophers to conclude that "essentially disputed concepts" would have been a far better choice for Gallie's meaning. Jeremy Waldron's research has revealed that Gallie's notion has "run wild" in the law review literature over the ensuing 60 years and is now widely used to denote something like "very hotly contested, with no resolution in sight",[46] owing to an entirely mistaken view[47] that the "essential" in Gallie's term is an intensifier, when, in fact, "[Gallie's] term 'essential' refers to the location of the disagreement or indeterminacy; it is contestation at the core, not just at the borderlines or penumbra of a concept".[48]
https://en.wikipedia.org/wiki/Essentially_contested_concept
Kasami sequences are binary sequences of length 2^N − 1, where N is an even integer. Kasami sequences have good cross-correlation values, approaching the Welch lower bound. There are two classes of Kasami sequences: the small set and the large set. The process of generating a Kasami sequence starts by generating a maximum-length sequence a(n), where n = 1 … 2^N − 1. Maximum-length sequences are periodic sequences with a period of exactly 2^N − 1. Next, a secondary sequence is derived from the initial sequence via cyclic decimation sampling as b(n) = a(q·n), where q = 2^{N/2} + 1. Modified sequences are then formed by adding a(n) and cyclically time-shifted versions of b(n) using modulo-two arithmetic, also termed the exclusive-or (XOR) operation. Computing modified sequences from all distinct time shifts of b(n), together with a(n) itself, forms the small Kasami set of 2^{N/2} code sequences.
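The construction is short enough to show end to end. A sketch of the small set for N = 4, using a Fibonacci LFSR for the m-sequence; the primitive polynomial x^4 + x + 1 and the seed are standard but arbitrary choices:

    def mseq(taps, nbits):
        # Fibonacci LFSR over GF(2); taps lists polynomial exponents,
        # e.g. [4, 1] encodes x^4 + x + 1. Returns one full period.
        state = [1] + [0] * (nbits - 1)
        out = []
        for _ in range(2**nbits - 1):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return out

    N = 4                                   # must be even
    a = mseq([4, 1], N)                     # m-sequence, length 2^4 - 1 = 15
    L = len(a)
    q = 2**(N // 2) + 1                     # decimation factor, here 5
    b = [a[(q * n) % L] for n in range(L)]  # b(n) = a(qn), period 2^(N/2) - 1

    # Small Kasami set: a(n) itself plus a(n) XOR (each cyclic shift of b),
    # giving 2^(N/2) = 4 sequences of length 15.
    kasami = [a] + [[a[n] ^ b[(n + m) % L] for n in range(L)]
                    for m in range(2**(N // 2) - 1)]
    for seq in kasami:
        print("".join(map(str, seq)))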
https://en.wikipedia.org/wiki/Kasami_sequence
In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible (in theory) to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley. The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of information theory.

Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof. The first rigorous proof for the discrete case is given in (Feinstein 1954).

The Shannon theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C. The converse is also important. If R > C, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal. The channel capacity C can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, using the Shannon–Hartley theorem.

Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error. Advanced techniques such as Reed–Solomon codes and, more recently, low-density parity-check (LDPC) codes and turbo codes come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with the computing power in today's digital signal processors, it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045 dB of the Shannon limit (for binary additive white Gaussian noise (AWGN) channels, with very long block lengths).[1]

The basic mathematical model for a communication system is the following: a message W is transmitted through a noisy channel by using encoding and decoding functions. An encoder maps W into a pre-defined sequence of channel symbols of length n. In its most basic model, the channel distorts each of these symbols independently of the others.
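The capacity of the binary symmetric channel and the "send it three times" scheme mentioned above make a compact numerical illustration; the crossover probability p = 0.1 is an arbitrary choice.

    import math
    import random

    def bsc_capacity(p):
        # Binary symmetric channel: C = 1 - H2(p) bits per channel use.
        if p in (0.0, 1.0):
            return 1.0
        return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

    p = 0.1
    print(f"BSC(p={p}) capacity: {bsc_capacity(p):.4f} bits/use")

    # Triple repetition with majority vote: the rate drops to 1/3, and the
    # residual error 3p^2(1-p) + p^3 is reduced but fixed, so the error
    # cannot be driven arbitrarily low at this rate, unlike capacity-
    # approaching codes at rates below C.
    random.seed(0)
    trials = 100_000
    errors = sum(
        1 for _ in range(trials)
        if sum(random.random() < p for _ in range(3)) >= 2  # >= 2 bits flipped
    )
    print(f"repetition-code error rate: {errors / trials:.4f} "
          f"(theory {3 * p**2 * (1 - p) + p**3:.4f})")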
The output of the channel – the received sequence – is fed into a decoder which maps the sequence into an estimate of the message. In this setting, the probability of error is defined as: Theorem (Shannon, 1948): (MacKay (2003), p. 162; cf. Gallager (1968), ch. 5; Cover and Thomas (1991), p. 198; Shannon (1948) thm. 11). As with several other major results in information theory, the proof of the noisy-channel coding theorem includes an achievability result and a matching converse result. These two components serve to bound the set of possible rates at which one can communicate over a noisy channel, and the matching serves to show that these bounds are tight. The following outlines are only one of many different styles available for study in information theory texts. This particular proof of achievability follows the style of proofs that make use of the asymptotic equipartition property (AEP). Another style can be found in information theory texts using error exponents. Both types of proofs make use of a random coding argument where the codebook used across a channel is randomly constructed; this serves to make the analysis simpler while still proving the existence of a code satisfying a desired low probability of error at any data rate below the channel capacity. By an AEP-related argument, given a channel, length-n strings of source symbols X_1^n, and length-n strings of channel outputs Y_1^n, we can define a jointly typical set by the following: we say that two sequences X_1^n and Y_1^n are jointly typical if they lie in the jointly typical set defined above. Steps: The probability of error of this scheme is divided into two parts. Define E_i = {(X_1^n(i), Y_1^n) ∈ A_ε^(n)}, i = 1, 2, …, 2^(nR), as the event that message i is jointly typical with the sequence received when message 1 is sent. We can observe that as n goes to infinity, if R < I(X; Y) for the channel, the probability of error will go to 0. Finally, given that the average codebook is shown to be "good", we know that there exists a codebook whose performance is better than the average, and so satisfies our need for an arbitrarily low error probability when communicating across the noisy channel. For the converse, consider a code of 2^(nR) codewords. Let W be drawn uniformly over this set as an index. Let X^n and Y^n be the transmitted codewords and received codewords, respectively. The result of these steps is that P_e^(n) ≥ 1 − 1/(nR) − C/R. As the block length n goes to infinity, P_e^(n) is bounded away from 0 if R is greater than C: we can get arbitrarily low probabilities of error only if R is less than C. A strong converse theorem, proven by Wolfowitz in 1957,[3] states that, for some finite positive constant A. While the weak converse states that the error probability is bounded away from zero as n goes to infinity, the strong converse states that the error goes to 1. Thus, C is a sharp threshold between perfectly reliable and completely unreliable communication. For the time-varying case, we assume that the channel is memoryless, but its transition probabilities change with time, in a fashion known at the transmitter as well as the receiver.
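The converse bound quoted above, P_e^(n) ≥ 1 − 1/(nR) − C/R, is easy to evaluate numerically. The following sketch (illustrative names, invented channel values) shows how the bound approaches 1 − C/R as the block length grows when R > C.

    def weak_converse_lower_bound(n: int, R: float, C: float) -> float:
        """Fano-style lower bound on block error probability from the outline above:
        P_e^(n) >= 1 - 1/(nR) - C/R. Meaningful when R > C."""
        return 1.0 - 1.0 / (n * R) - C / R

    # A channel with capacity C = 0.5 bit/use, operated above capacity at R = 0.75:
    for n in (10, 100, 1000):
        print(n, weak_converse_lower_bound(n, 0.75, 0.5))
    # The bound tends to 1 - C/R = 1/3 as n grows: the error probability
    # stays bounded away from zero for every code at this rate.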
Then the channel capacity is given by C = lim inf (1/n) Σ_{i=1}^{n} C_i, where C_i is the capacity of the i-th channel; the maximum is attained at the capacity-achieving distribution for each respective channel. The proof runs through in almost the same way as that of the channel coding theorem. Achievability follows from random coding with each symbol chosen randomly from the capacity-achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources defined in the asymptotic equipartition property article. The technicality of lim inf comes into play when (1/n) Σ_{i=1}^{n} C_i does not converge.
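As a rough illustration of why lim inf is needed, the following sketch (purely illustrative; the alternating-capacity channel is invented for the example) numerically approximates the lim inf of the running averages (1/n)·Σ C_i for a sequence whose averages oscillate and never converge.

    import itertools

    def liminf_running_average(capacities, horizon=10_000):
        """Numerically approximate lim inf of the running averages (1/n) * sum C_i
        by taking the smallest tail value of the running-average sequence."""
        total, averages = 0.0, []
        for n, c in enumerate(itertools.islice(capacities, horizon), start=1):
            total += c
            averages.append(total / n)
        return min(averages[horizon // 2:])

    def blocky():
        """Per-use capacities alternating in ever-longer blocks of 0.2 and 1.0,
        so the running average oscillates and does not converge."""
        c, block = 0.2, 1
        while True:
            for _ in range(block):
                yield c
            c, block = (1.2 - c), block * 2

    print(liminf_running_average(blocky()))  # noticeably below the lim sup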
https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem
English determiners (also known as determinatives)[1]: 354 are words – such as the, a, each, some, which, this, and numerals such as six – that are most commonly used with nouns to specify their referents. The determiners form a closed lexical category in English.[2] The syntactic role characteristically performed by determiners is known as the determinative function (see § Terminology).[3] A determinative combines with a noun (or, more formally, a nominal; see English nouns § Internal structure) to form a noun phrase (NP). This function typically comes before any modifiers in the NP (e.g., some very pretty wool sweaters, not *very pretty some wool sweaters[a]). The determinative function is typically obligatory in a singular, countable, common noun phrase (compare I have a new cat to *I have new cat). Semantically, determiners are usually definite or indefinite (e.g., the cat versus a cat),[4] and they often agree with the number of the head noun (e.g., a new cat but not *many new cat). Morphologically, they are usually simple and do not inflect. The most common of these are the definite and indefinite articles, the and a(n). Other determiners in English include the demonstratives this and that, and the quantifiers (e.g., all, many, and none) as well as the numerals.[1]: 373 Determiners also occasionally function as modifiers in noun phrases (e.g., the many changes), determiner phrases (e.g., many more), or in adjective or adverb phrases (e.g., not that big).[1]: 565 They may appear on their own without a noun, similar to pronouns (e.g., I'll have some), but they are distinct from pronouns.[1]: 412 Words and phrases can be categorized by both their syntactic category[b] and their syntactic function. In the clause the dog bit the man, for example, the dog belongs to the syntactic category of noun phrase and performs the syntactic function of subject. The distinction between category and function is at the heart of a terminological issue surrounding the word determiner: various grammars have used the word to describe a category, a function, or both. Some sources, such as A Comprehensive Grammar of the English Language, use determiner as a term for a category as defined above and determinative for the function that determiners and possessives typically perform in a noun phrase (see § Functions).[5]: 74 Others, such as The Cambridge Grammar of the English Language (CGEL), make the opposite terminological choice.[1]: 354 And still others (e.g., The Grammar Book[6]) use determiner for both the category and the function. This article uses determiner for the category and determinative for the function in the noun phrase. The lexical category determiner is the class of words described in this article. They head determiner phrases, which can realize the functions determinative, predeterminative, and modifier. The syntactic function determinative is a function that specifies a noun phrase. That is, determinatives add abstract meanings to the noun phrase, such as definiteness, proximity, number, and the like.[7]: 115 While the determinative function is typically realized by determiner phrases, it may also be realized by noun phrases and prepositional phrases. This article is about determiners as a lexical category. Traditional grammar has no concept to match determiners, which are instead classified as adjectives, articles, or pronouns.[5]: 70 The articles and demonstratives have sometimes been seen as forming their own category, but are often classified as adjectives.
Linguist and historian Peter Matthews observes that the assumption that determiners are distinct from adjectives is relatively new, "an innovation of … the early 1960s."[5]: 70 In 1892, prior to the emergence of the determiner category in English grammars, Leon Kellner, and later Jespersen,[8] discussed the idea of "determination" of a noun: In Old English the possessive pronoun, or, as the French say, "pronominal adjective," expresses only the conception of belonging and possession; it is a real adjective, and does not convey, as at present, the idea of determination. If, therefore, Old English authors want to make nouns preceded by possessive pronouns determinative, they add the definite article.[9] By 1924, Harold Palmer had proposed a part of speech called "Pronouns and Determinatives", effectively "group[ing] with the pronouns all determinative adjectives (e.g., article-like, demonstratives, possessives, numerals, etc.), [and] shortening the term to determinatives (the "déterminatifs" of the French grammarians)."[10]: 24 Palmer separated this category from more prototypical adjectives (what he calls "qualificative adjectives") because, unlike prototypical adjectives, words in this category are not used predicatively, tend not to inflect for comparison, and tend not to be modified.[10]: 45 In 1933, Leonard Bloomfield introduced the term determiner used in this article, which appears to define a syntactic function performed by "limiting adjectives":[11] Our limiting adjectives fall into two sub-classes of determiners and numeratives … The determiners are defined by the fact that certain types of noun expressions (such as house or big house) are always accompanied by a determiner (as, this house, a big house).[12]: 203 Matthews argues that the next important contribution was by Ralph B. Long in 1961, though Matthews notes that Long's contribution is largely ignored in the bibliographies of later prominent grammars, including A Comprehensive Grammar of the English Language and CGEL. Matthews illustrates Long's analysis with the noun phrase this boy: "this is no longer, in [Long's] account, an adjective. It is instead a pronoun, of a class he called 'determinative', and it has the function of a 'determinative modifier'."[5]: 71 This analysis was developed in a 1962 grammar by Barbara M. H. Strang[5]: 73 and in 1972 by Randolph Quirk and colleagues.[5]: 74 In 1985, A Comprehensive Grammar of the English Language appears to have been the first work to explicitly conceive of determiner as a distinct lexical category.[5]: 74 Until the late 1980s, linguists assumed that, in a phrase like the red ball, the head was the noun ball and that the was a dependent. But in his 1987 PhD dissertation on English noun phrases (NPs), Steven Paul Abney, then a student at MIT, proposed that the head was not the noun ball but the determiner the, so that the red ball is a determiner phrase (DP).[13] This has come to be known as the DP analysis or the DP hypothesis (see Determiner phrase), and as of 2008 it is the majority view in generative grammar,[14]: 93 though it is rejected in other perspectives.[15] The main similarity between adjectives and determiners is that they can both appear immediately before nouns (e.g., many/happy people). The key difference between adjectives and determiners in English is that adjectives cannot function as determinatives. The determinative function is an element in NPs that is obligatory in most singular countable NPs and typically occurs before any modifiers (see § Functions).
For example, *I live in small house is ungrammatical because small house is a singular countable NP lacking a determinative. The adjective small is a modifier, not a determinative. In contrast, if the adjective is replaced or preceded by a possessive NP (I live in my house) or a determiner (I live in that house), the clause becomes grammatical, because possessive NPs and determiners function as determinatives.[1]: 538 There are a variety of other differences between the categories. Determiners appear in partitive constructions, while adjectives do not (e.g., some of the people but not *happy of the people).[1]: 356 Adjectives can function as a predicative complement in a verb phrase (e.g., that was lovely), but determiners typically cannot (e.g., *that was every).[1]: 253 Adjectives are not typically definite or indefinite, while determiners are.[1]: 54 Adjectives as modifiers in a noun phrase do not need to agree in number with a head noun (e.g., old book, old books), while some determiners do (e.g., this book, these books).[1]: 56 Morphologically, adjectives often inflect for grade (e.g., big, bigger, biggest), while few determiners do.[1]: 356 Finally, adjectives can typically form adverbs by adding -ly (e.g., cheap → cheaply), while determiners cannot.[1]: 766 The boundary between determiner and adjective is not always clear, however. In the case of the word many, for example, the distinction between determiner and adjective is fuzzy, and different linguists and grammarians have placed this term into different categories. The CGEL categorizes many as a determiner because it can appear in partitive constructions, as in many of them.[1]: 539 Alternatively, Bas Aarts offers three reasons to support the analysis of many as an adjective. First, it can be modified by very (as in his very many sins), which is a characteristic typical of certain adjectives but not of determiners. Second, it can occur as a predicative complement: his sins are many. Third, many has a comparative and superlative form (more and most, respectively).[16]: 126 There is disagreement about whether possessive words such as my and your are determiners or not. For example, Collins COBUILD Grammar[17]: 61 classifies them as determiners, while CGEL classifies them as pronouns[1]: 357 and A Comprehensive Grammar of the English Language has them dually classified as determiners[18]: 253 and as pronouns in determinative function.[18]: 361 The main reason for classifying these possessive words as determiners is that, like determiners, they usually function as determinative in an NP (e.g., my/the cat).[1]: 357 Reasons for calling them pronouns and not determiners include that the pronouns typically inflect (e.g., I, me, my, mine, myself),[1]: 455 while determiners typically allow no morphological change.[1]: 356 Determiners also appear in partitive constructions, while pronouns do not (e.g., some of the people but not *my of the people).[1]: 356 Also, some determiners can be modified by adverbs (e.g., very many), but this is not possible for pronouns.[1]: 57 The words you and we share features commonly associated with both determiners and pronouns in constructions such as we teachers do not get paid enough. On the one hand, the phrase-initial position of these words is a characteristic they share with determiners (compare the teachers).
Furthermore, they cannot combine with more prototypical determiners (*the we teachers), which suggests that they fill the same role.[16]: 125 These characteristics have led linguists and grammarians like Ray Jackendoff and Steven Paul Abney to categorize such uses of we and you as determiners.[19][13][1]: 374 On the other hand, these words can show case contrast (e.g., us teachers), a feature that, in Modern English, is typical of pronouns but not of determiners.[16]: 125 Thus, Evelyne Delorme and Ray C. Dougherty treat words like us as pronouns in apposition with the noun phrases that follow them, an analysis that Merriam–Webster's Dictionary of English Usage also follows.[20][21] Richard Hudson and Mariangela Spinillo also categorize these words as pronouns, but without assuming an appositive relationship between the pronoun and the rest of the noun phrase.[22][23] There is disagreement about whether that is a determiner or a degree adverb in clauses like it is not that unusual. For example, A Comprehensive Grammar of the English Language categorizes this use of that as an adverb. This analysis is supported by the fact that other pre-head modifiers of adjectives that "intensify" their meaning tend to be adverbs, such as awfully in awfully sorry and too in too bright.[18]: 445–447 On the other hand, Aarts categorizes this word as a determiner, a categorization also used in CGEL.[7]: 137[1]: 549 This analysis can be supported by expanding the determiner phrase: it is not all that unusual. All can function as a premodifier of determiners (e.g., all that cake) but not of adjectives (e.g., *all unusual), which leads Aarts to suggest that that is a determiner.[16]: 127 Expressions with similar quantificational meanings such as a lot of, lots of, plenty of, a great deal of, tons of, etc. are sometimes said to be determiners,[18]: 263 while other grammars argue that they are not words or even syntactic constituents at all. On the non-determiner analysis, they simply form the first part of a noun phrase.[1]: 349 For example, a lot of work is a noun phrase with lot as its head, which takes a preposition phrase complement beginning with the preposition of. In this view, they could be considered lexical units, but they are not syntactic constituents. For the sake of this section, Abney's DP hypothesis (see § History) is set aside. In other words, here a DP is taken to be a dependent in a noun phrase (NP) and not the other way around. A determiner phrase (DP) is headed by a determiner and optionally takes dependents. DPs can take modifiers, which are usually adverb phrases (e.g., [almost no] people) or determiner phrases (e.g., [many more] people).[1]: 431 Comparative determiners like fewer or more can take than prepositional phrase (PP) complements (e.g., it weighs [less than five] grams).[1]: 443 The following tree diagram in the style of CGEL shows the DP far fewer than twenty, with the adverb far as a modifier and the PP than twenty as a complement. As stated above, there is some terminological confusion about the terms "determiner" and "determinative". In this article, "determiner" is a lexical category, while "determinative" is the function most typically performed by determiner phrases (in the same way that "adjective" denotes a category of words while "modifier" denotes the most typical function of adjective phrases). DPs are not the only phrases that can function as determinative, but they are the most common.[1]: 330 A determinative is a function found only in noun phrases.
It is usually the leftmost constituent in the phrase, appearing before any modifiers.[24] A noun phrase may have many modifiers, but only one determinative is possible.[1] In most cases, a singular, countable, common noun requires a determinative to form a noun phrase; plurals and uncountables do not.[1] The determinative is underlined in the following examples: The most common function of a DP is determinative in an NP. This is shown in the following syntax tree in the style of CGEL. It features two determiner phrases, all in predeterminer modifier function (see § Predeterminative), and the in determinative function (labeled Det:DP). If noun phrases can only contain one determinative, the following noun phrases present challenges: the determiner phrase the functions as the determinative in all the time, and those functions as the determinative in both those cars. But all and both also have specifying roles rather than modifying roles in the noun phrase, much like the determinatives do. To account for noun phrases like these, A Comprehensive Grammar of the English Language also recognizes the function of predeterminative (or predeterminer).[18]: 257 Some linguists and grammarians offer different accounts of these constructions. CGEL, for instance, classifies them as a kind of modifier in noun phrases.[1]: 433 Predeterminatives are typically realized by determiner phrases (e.g., all in all the time). However, they can also be realized by noun phrases (e.g., one-fifth the size) and adverb phrases (e.g., thrice the rate).[7]: 119–120 Determiner phrases can function as pre-head modifiers in noun phrases, such as the determiner phrase two in these two images. In this example, these functions as the determinative of the noun phrase, and two functions as a modifier of the head images.[7]: 126 And they can function as pre-head modifiers in adjective phrases—[AdjP [DP the] more], [AdjP [DP the] merrier]—and adverb phrases—[AdvP [DP the] longer] this dish cooks, [AdvP [DP the] better] it tastes.[1]: 549[7]: 137, 162 Determiner phrases can also function as post-head modifiers in these phrases. For example, the determiners each, enough, less, and more can function as post-head modifiers of noun phrases, as in the determiner phrase each in two seats each.[7]: 132 Enough can fill the same role in adjective phrases (e.g., clear enough) and in adverb phrases (e.g., funnily enough).[1]: 549[7]: 138, 163 DPs also function as modifiers in DPs (e.g., [not that many] people).[1]: 330 Determiners may bear two functions at one time. Usually this is a fusion of determinative and head in an NP where no head noun exists. In the clause many would disagree, the determiner many is the fused determinative-head in the NP that functions as the subject.[1]: 332 In many grammars, both traditional and modern, and in almost all dictionaries, such words are considered to be pronouns rather than determiners. Several words can belong to the same part of speech but still differ from each other to various extents, with similar words forming subclasses of the part of speech. For example, the articles a and the have more in common with each other than with the demonstratives this or that, but both groups belong to the class of determiner and thus share more characteristics with each other than with words from other parts of speech. Article and demonstrative, then, can be considered subclasses or types of determiners.
Most determiners are very basic in their morphology, but some are compounds.[1]: 391 A large group of these is formed with the words any, every, no, and some together with body, one, thing, or where (e.g., anybody, somewhere).[1]: 411 The morphological phenomenon started in Old English, when thing was combined with some, any, and no. In Middle English, it would combine with every.[25]: 165 The cardinal numbers greater than 99 are also compound determiners.[1]: 356 This group also includes a few and a little,[1]: 391 and Payne, Huddleston, and Pullum argue that once, twice, and thrice also belong here, and not in the adverb category.[26] Although most determiners do not inflect, the following determiners participate in the system of grade.[1]: 393 The following types of determiners are organized, first, syntactically according to their typical position in a noun phrase in relation to each other and, then, according to their semantic contributions to the noun phrase. This first division, based on categorization from A Comprehensive Grammar of the English Language, includes three categories. The secondary divisions are based on the semantic contributions of the determiner to a noun phrase. The subclasses are named according to the labels assigned in CGEL and the Oxford Modern English Grammar, which use essentially the same labels. According to CGEL, articles serve as "the most basic expression of definiteness and indefiniteness."[1]: 368 That is, while other determiners express definiteness and other kinds of meaning, articles serve primarily as markers of definiteness. The articles are generally considered to be:[27] Other articles have been posited, including unstressed some, a zero article (indefinite, with mass and plural nouns) and a null article (definite, with singular proper nouns).[28] The two main demonstrative determiners are this and that. Their respective plural forms are these and those.[27] The demonstrative determiners mark noun phrases as definite. They also add meaning related to spatial deixis; that is, they indicate where the thing referenced by the noun is in relation to the speaker. The proximal this signals that the thing is relatively close to the speaker, while the distal that signals that the thing is relatively far.[1]: 373 CGEL classifies the archaic and dialectal yonder (as in the noun phrase yonder hills) as a marginal demonstrative determiner.[1]: 615 Yonder signals that the thing referenced by the noun is far from the speaker, typically farther than what that would signal. Thus, we would expect yonder hills to be farther from the speaker than those hills. Unlike the main demonstrative determiners, yonder does not inflect for number (compare yonder hill). The following are the distributive determiners:[27] The distributive determiners mark noun phrases as indefinite.[29] They also add distributive meaning; that is, "they pick out the members of a set singly, rather than considering them in mass."[18]: 382 Because they signal this distributive meaning, these determiners select singular noun heads when functioning as determinatives in noun phrases (e.g., each student).[1]: 378 The following are the existential determiners:[27] Existential determiners mark a noun phrase as indefinite. They also convey existential quantification, meaning that they assert the existence of a thing in a quantity greater than zero.[1]: 380 The following are the disjunctive determiners:[27] Disjunctive determiners mark a noun phrase as definite.
They also imply a single selection from a set of exactly two.[1]: 387 Because they signal a single selection, disjunctive determiners select singular nouns when functioning as determinatives in noun phrases (e.g., either side). A Comprehensive Grammar of the English Language does not recognize this category and instead labels either an "assertive determiner" and neither a "negative determiner."[18]: 257 The negative determiner is no, with its independent form none.[27] Distinct dependent and independent forms are otherwise found only in possessive pronouns, where the dependent form occurs only with a subsequent noun and the independent form without one (e.g., my way and no way are dependent, while mine and none are independent). No signifies that not one member of a set or sub-quantity of a quantity under consideration has a particular property. Neither also conveys this kind of meaning but is only used when selecting from a set of exactly two, which is why neither is typically classified as disjunctive rather than negative.[1]: 389–390 The additive determiner is another.[27] Another was formed from the compounding of the indefinite article an and the adjective other; thus, it marks a noun phrase as indefinite. It also conveys additive meaning. For example, another banana signals an additional banana in addition to some first banana. Another can also mark an alternative. For example, another banana can also signal a different banana, perhaps one that is riper. Because it can also convey this alternative meaning, another is sometimes labeled an alternative-additive determiner.[1]: 391 The following are the sufficiency determiners:[27] These determiners convey inexact quantification that is framed in terms of some minimum quantity needed. For instance, enough money for a taxi implies that a minimum amount of money is necessary to pay for a taxi and that the amount of money in question is sufficient for the purpose. When functioning as determinatives in a noun phrase, sufficiency determiners select plural count nouns (e.g., sufficient reasons) or non-count nouns (e.g., enough money).[1]: 396 The following are the interrogative determiners:[27] These determiners can also be followed by -ever and -soever. Interrogative determiners are typically used in the formation of questions, as in what/which conductor do you like best? Using what marks a noun phrase as indefinite, while using which marks the noun phrase as definite, being used when the context implies a limited number of choices.[18]: 369 The following are the relative determiners:[27] These determiners can also be followed by -ever. Relative determiners typically function as determiners in noun phrases that introduce relative clauses, as in we can use whatever/whichever edition you want.[1]: 398 In grammars that consider them determiners rather than pronouns (see § Determiners versus other lexical categories), the personal determiners are the following:[27] Though these words are normally pronouns, in phrases like we teachers and you guys, they are sometimes classified as personal determiners. Personal determiners mark a noun phrase as definite.
They also add meaning related to personal deixis; that is, they indicate whether the thing referenced by the noun includes the speaker (we/us) or at least one addressee and not the speaker (you).[1]: 374 In some dialects, such as the Ozark dialect, this usage extends to them, as in them folks.[30] The following are the universal determiners:[27] Universal determiners convey universal quantification, meaning that they assert that no subset of a thing exists that lacks the property that is described. For example, saying "all the vegetables are ripe" is the same as saying "no vegetables are not ripe."[1]: 359 The primary difference between all and both is that both applies only to sets with exactly two members, while all lacks this limitation. But CGEL notes that, because of the possibility of using both instead, all "generally strongly implicates 'more than two.'"[1]: 374 Cardinal numerals (zero, one, two, thirty-four, etc.) can represent any number. Therefore, the members of this subclass of determiner are infinite in quantity and cannot be listed in full. Cardinal numerals are typically thought to express the exact number of the things represented by the noun, but this exactness is through implicature rather than necessity. In the clause five people complained, for example, the number of people complaining is usually thought to be exactly five. But technically, the proposition would still be true if additional people were complaining as well: if seven people were complaining, then it is also necessarily true that five people were complaining. General norms of cooperative conversation, however, make it such that cardinal numerals typically express the exact number (e.g., five = no more and no less than five) unless otherwise modified (e.g., at least five or at most five).[1]: 385–386 The following are the positive paucal determiners:[27] The positive paucal determiners convey a small, imprecise quantity, generally characterized as greater than two but smaller than whatever quantity is considered large. When functioning as determinatives in a noun phrase, most paucal determiners select plural count nouns (e.g., a few mistakes), but a little selects non-count nouns (e.g., a little money).[1]: 391–392 In grammars that consider them determiners rather than adjectives (see § Determiners versus other lexical categories), the degree determiners are the following:[27] Degree determiners mark a noun phrase as indefinite. They also convey imprecise quantification, with many and much expressing a large quantity and few and little expressing a small quantity. Degree determiners are unusual in that they inflect for grade, a feature typical of adjectives and adverbs but not of determiners. The comparative forms of few, little, many, and much are fewer, less, more, and more respectively. The superlative forms are fewest, least, most, and most respectively.[1]: 393 The plain forms can be modified with adverbs, especially very, too, and so (and not can also be added). Note that unmodified much is quite rarely used in affirmative statements in colloquial English. The main semantic contributions of determiners are quantification and definiteness. Many determiners express quantification.[31][1]: 358 From a semantic point of view, a definite NP is one that is identifiable and activated in the minds of the speaker (the first person) and the addressee. From a grammatical point of view in English, definiteness is typically marked by definite determiners, such as the, that, this, all, every, both, etc.
Linguists find it useful to make a distinction between the grammatical feature of definiteness and the cognitive feature of identifiability.[32]: 84 This accounts for cases of form–meaning mismatch, where a definite determiner results in an indefinite NP, as in the example I met this guy from Heidelberg on the train, where the underlined NP is grammatically definite but semantically indefinite.[32]: 82 The majority of determiners, however, are indefinite. These include the indefinite article a, but also most quantifiers, including the cardinal numerals. Choosing the definite article over no article in a pair like the Americans and Americans can have the pragmatic effect of depicting "the group as a monolith of which the speaker is not a part."[33] Relatedly, the choice between this and that may have an evaluative purpose, where this suggests a closeness, and therefore a more positive evaluation.[34]
https://en.wikipedia.org/wiki/English_determiners
Fluency Voice Technology was a company that developed and sold packaged speech recognition solutions for use in call centers. Fluency's speech recognition solutions were used by call centers worldwide to improve customer service and significantly reduce costs, and were available on-premises and hosted.

1998 – Fluency was created as a spin-off from the Voice Research & Development team of a company called netdecisions. This R&D operation was established in Cambridge, UK. The focus of the development was speech recognition systems based on the VXML standard.
2001 – Fluency became a separate entity in May 2001. Fluency began the creation of a software development platform specifically aimed at automating call center activities. This platform became Fluency's VoiceRunner.
2002 to 2004 – Fluency accomplished many successful deployments at customer sites such as National Express and Barclaycard.
2003 – Fluency expanded into the USA. Fluency also acquired Vocalis of Cambridge, UK, in August 2003.
2004 – Fluency received a £6 million investment from leading European venture capitalists, established a global OEM partnership with Avaya, and acquired SRC Telecom.
2008 – Fluency was acquired by Syntellect Ltd.

Call centers around the world used Fluency to improve service and reduce costs. They included Travelodge, Standard Life Bank, Sutton and East Surrey Water, Pizza Hut, CWT, Barclays, Powergen, First Choice, OutRight, J D Williams, Capital Blue Cross, Chelsea Building Society, EDF, bss, TV Licensing and Capita Software Services.
https://en.wikipedia.org/wiki/Fluency_Voice_Technology
A language exchange is a relationship between two or more people who have interactions around the exchange of language.[1] People typically join a language exchange to gain practice in a target language. Other reasons for joining might include cultural exchange or companionship.[2] Partners in a language exchange are usually native speakers of each other's target language. Meetings between language exchange partners can be held in person or via videoconferencing platforms. Potential challenges of language exchanges can involve differing motivations, cultural miscommunications, or scheduling conflicts. Language exchange is sometimes called tandem language learning.[3] In modern contexts, a language exchange most often refers to the mutual teaching of partners' first languages. Language exchanges are generally considered helpful for developing language proficiency, especially speaking fluency and listening comprehension. Language exchanges that take place through writing or text chats also improve reading comprehension and writing ability. The aim of language exchange is to develop and increase language knowledge and intercultural skills.[4] This is usually done through social interaction with the native speaker.[4] Given that language exchanges generally take place between native speakers of different languages, they may also improve participants' cross-cultural communication skills. This practice has long been used by individuals to exchange knowledge of foreign languages. For example, John Milton gave Roger Williams an opportunity to practise Hebrew, Greek, Latin, and French, while receiving lessons in Dutch in exchange.[5] Language exchange programs first came about in the early 1800s, when school-aged children in England were introduced to the newly established program.[6] Countries such as Belgium and Switzerland found language exchange programs very easy to run, as many languages were spoken within the one country.[6] French and German youth took up language exchange in 1968, a practice which then spread to Turkey and Madrid. American universities are increasingly experimenting with language exchanges as part of the language learning curriculum.[7] In this respect, language exchanges have a similar role to study abroad programs and language immersion programs in creating an environment where the language student must use the foreign language for genuine communication outside of a classroom setting. In such programs, international and American students can be paired up with one another so they may then freely organize meetings that permit opportunities for communication and intercultural exchange.[8] In other examples of university language exchange programs, students may join for practices like language tutoring, conversation groups, or social gatherings.[9] Most language exchanges are set up through language learning websites and applications with platforms that accommodate the search and selection of potential language partners. Many of these networks offer the opportunity to select language partners based not only on target language, but also on country of origin, gender, age, and language proficiency level of a potential partner.[10] Examples of these include HelloTalk, Tandem, and Conversation Exchange. Language learning social networks offer language students the opportunity to find language partners from around the world. Many such platforms allow language exchange partners to text, as well as speak to one another through voice or video calls.
Partners may also decide to communicate via instant messengers, voice-over-IP technologies, or other telecommunications platforms. Location and means permitting, connected partners may also later elect to meet in person. Advances in language learning social networks have provided an outlet for foreign language students who previously had difficulty locating opportunities to practice their target language. Language exchange platforms often offer a wealth of eligible partners, with some boasting as many as several million users.[11] The diversity among the countries of origin of potential partners can mean the opportunity to experience a myriad of linguistic and cultural exchanges.[12] Language exchanges have been viewed as a helpful tool to aid language learning at learning institutions and among individual learners. The benefit of most language exchanges is that they are often performed between native speakers. Practice with native speakers can provide not only more robust opportunities for feedback regarding linguistic elements such as pronunciation, grammar, and vocabulary, but also authentic listening practice.[13] Another major benefit of language exchange is the exposure to the native speaker's culture.[14] Learning about the culture of locations where one's target language is spoken not only enhances overall linguistic ability, it can also serve to broaden intercultural communication skills.[15] Language exchanges can provide a friendly and informal environment for new language learners. Both speakers are trying to learn and understand, and such an atmosphere can reduce pressure on either partner.[16] This also gives the learning environment a fun and productive atmosphere. An additional benefit is that people tend to learn faster when they have a one-on-one connection with the "teacher". Many people prefer to learn one-on-one but struggle to find a teacher; such learners are highly motivated to learn a new language, and the native speakers who help them may feel a new sense of motivation, since they are now responsible for teaching another person.[6][14] Because both partners of a language exchange are generally seeking help with their language skills, usually neither partner compensates the other for the assistance they receive. A setup whereby only one partner provides help or is compensated for their services would typically not be referred to as a language exchange. Online relationships can give rise to many of the same complications that may exist in real-life relationships. Remote language partners can sometimes have different motivations for joining a language exchange. It can be disappointing when a partner's goals for the relationship conflict with one's own; such disagreement of purpose can lead to the end of a language partnership.[17] Personality mismatches can be as prevalent in online relationships as in offline ones. Unresolved incompatibility issues can be even more difficult to overcome in remote relationships, however, and related factors can cause one or more participants of a language exchange to withdraw, either gradually or abruptly, from the relationship.[18] Miscommunications can occur in any type of relationship, but they can be even more common between people from different cultures.
Those who either anticipate or are otherwise prepared to deal with such misunderstandings may be better equipped to navigate online relationships with people of other cultures.[19] Scheduling difficulties can exist between language partners from different regions of the world. Meetings between people located in different time zones can be an inconvenient fact of some language exchanges. In such cases, partners may need to compromise to select a meeting time that is not too disruptive to either person's schedule.[20]
https://en.wikipedia.org/wiki/Language_exchange
Banking secrecy,[1][2] alternatively known as financial privacy, banking discretion, or bank safety,[3][4] is a conditional agreement between a bank and its clients that all foregoing activities remain secure, confidential, and private.[5] Most often associated with banking in Switzerland, banking secrecy is prevalent in Luxembourg, Monaco, Hong Kong, Singapore, Ireland, and Lebanon, among other off-shore banking institutions. Otherwise known as bank–client confidentiality or banker–client privilege,[6][7] the practice was started by Italian merchants during the 1600s near Northern Italy (a region that would become the Italian-speaking region of Switzerland).[8] Geneva bankers established secrecy socially and through civil law in the French-speaking region during the 1700s. Swiss banking secrecy was first codified with the Banking Act of 1934, making it a crime to disclose client information to third parties without a client's consent. The law, coupled with a stable Swiss currency and international neutrality, prompted large capital flight to private Swiss accounts. During the 1940s, numbered bank accounts were introduced, creating an enduring principle of bank secrecy that continues to be considered one of the main aspects of private banking globally. Advances in financial cryptography (via public-key cryptography) could make it possible to use anonymous electronic money and anonymous digital bearer certificates for financial privacy and anonymous Internet banking, given enabling institutions and secure computer systems.[9] While some banking institutions voluntarily impose banking secrecy institutionally, others operate in regions where the practice is legally mandated and protected (e.g., off-shore financial centers). Almost all banking secrecy standards prohibit the disclosure of client information to third parties without consent or an accepted criminal complaint. Additional privacy is provided to select clients via numbered bank accounts or underground bank vaults. Recent research has indicated that the use of offshore financial centers has been of concern because criminals get involved with them; it is argued that these financial centers enable the actions of criminals. However, there have been attempts by global institutions to regulate money laundering and other illegal activities.[10] Numbered bank accounts, used by Swiss banks and other offshore banks located in tax havens, have been accused by the international community of being a major instrument of the underground economy, facilitating tax evasion and money laundering.[11] After Al Capone's 1931 conviction for tax evasion, according to journalist Lucy Komisar, mobster Meyer Lansky took money from New Orleans slot machines and shifted it to accounts overseas; the Swiss secrecy law passed two years later assured him of "G-man-proof" banking.[11] Later, he bought a Swiss bank and for years deposited his Havana casino take in Miami accounts, then wired the funds to Switzerland via a network of shell and holding companies and offshore accounts.[11] Economist and Nobel Prize laureate Joseph Stiglitz told Komisar: You ask why, if there's an important role for a regulated banking system, do you allow a non-regulated banking system to continue? It's in the interest of some of the moneyed interests to allow this to occur. It's not an accident; it could have been shut down at any time. If you said the US, the UK, the major G7 banks will not deal with offshore bank centers that don't comply with G7 bank regulations, these banks could not exist.
They only exist because they engage in transactions with standard banks.[11] Further research in politics is needed to gain a better understanding of banking secrecy.[12] For instance, the role of economic interests, competition between financial centers, and the influence of political power on international organizations like the OECD would be good starting points.
https://en.wikipedia.org/wiki/Banking_secrecy
In cryptography, a zero-knowledge password proof (ZKPP) is a type of zero-knowledge proof that allows one party (the prover) to prove to another party (the verifier) that it knows a value of a password, without revealing anything other than the fact that it knows that password. The term is defined in IEEE P1363.2, in reference to one of the benefits of using a password-authenticated key exchange (PAKE) protocol that is secure against off-line dictionary attacks. A ZKPP prevents any party from verifying guesses for the password without interacting with a party that knows it and, in the optimal case, provides exactly one guess per interaction.[citation needed] A common use of a zero-knowledge password proof is in authentication systems where one party wants to prove its identity to a second party using a password but doesn't want the second party or anybody else to learn anything about the password. For example, apps can validate a password without processing it, and a payment app can check the balance of an account without touching or learning anything about the amount.[1] The first methods to demonstrate a ZKPP were the encrypted key exchange methods (EKE) described by Steven M. Bellovin and Michael Merritt in 1992.[2] A considerable number of refinements, alternatives, and variations in the growing class of password-authenticated key agreement methods were developed in subsequent years. Standards for these methods include IETF RFC 2945, IEEE P1363.2, and ISO-IEC 11770-4.[3]
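As an illustration of the general idea, and not of any of the standardized protocols named above, here is a toy Schnorr-style interactive proof in Python that the prover knows a password-derived exponent. Unlike a real ZKPP such as EKE, this sketch does not resist offline dictionary attacks (the public value y can be tested against guesses), and the group parameters are illustrative only.

    # Toy Schnorr-style proof that the prover knows x = H(password) mod (p-1),
    # without revealing x. Teaching sketch only: NOT a PAKE, NOT dictionary-
    # attack resistant, and the parameters are far too simple for real use.
    import hashlib, secrets

    p = 2**127 - 1   # a Mersenne prime, used here as a toy modulus
    g = 3            # fixed base

    def hash_to_int(data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    x = hash_to_int(b"correct horse battery staple") % (p - 1)  # prover's secret
    y = pow(g, x, p)                                            # public value

    # One round of the interactive protocol:
    r = secrets.randbelow(p - 1)   # prover picks a random nonce
    t = pow(g, r, p)               # prover sends commitment t
    c = secrets.randbelow(p - 1)   # verifier sends a random challenge c
    s = (r + c * x) % (p - 1)      # prover responds with s

    # Verifier checks g^s == t * y^c (mod p); a valid transcript reveals
    # nothing about x beyond the fact that the prover knows it.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("proof accepted")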
https://en.wikipedia.org/wiki/Zero-knowledge_password_proof
In mathematics, the orbit method (also known as the Kirillov theory, the method of coadjoint orbits, and by a few similar names) establishes a correspondence between irreducible unitary representations of a Lie group and its coadjoint orbits: orbits of the action of the group on the dual space of its Lie algebra. The theory was introduced by Kirillov (1961, 1962) for nilpotent groups and later extended by Bertram Kostant, Louis Auslander, Lajos Pukánszky and others to the case of solvable groups. Roger Howe found a version of the orbit method that applies to p-adic Lie groups.[1] David Vogan proposed that the orbit method should serve as a unifying principle in the description of the unitary duals of real reductive Lie groups.[2] One of the key observations of Kirillov was that the coadjoint orbits of a Lie group G have a natural structure of symplectic manifolds whose symplectic structure is invariant under G. If an orbit is the phase space of a G-invariant classical mechanical system, then the corresponding quantum mechanical system ought to be described via an irreducible unitary representation of G. Geometric invariants of the orbit translate into algebraic invariants of the corresponding representation. In this way the orbit method may be viewed as a precise mathematical manifestation of a vague physical principle of quantization. In the case of a nilpotent group G the correspondence involves all orbits, but for a general G additional restrictions on the orbit are necessary (polarizability, integrality, the Pukánszky condition). This point of view has been significantly advanced by Kostant in his theory of geometric quantization of coadjoint orbits. For a Lie group G, the Kirillov orbit method gives a heuristic method in representation theory. It connects the Fourier transforms of coadjoint orbits, which lie in the dual space of the Lie algebra of G, to the infinitesimal characters of the irreducible representations. The method got its name after the Russian mathematician Alexandre Kirillov. At its simplest, it states that a character of a Lie group may be given by the Fourier transform of the Dirac delta function supported on the coadjoint orbits, weighted by the square root of the Jacobian of the exponential map, denoted by j. It does not apply to all Lie groups, but works for a number of classes of connected Lie groups, including nilpotent, some semisimple, and compact groups. Let G be a connected, simply connected nilpotent Lie group. Kirillov proved that the equivalence classes of irreducible unitary representations of G are parametrized by the coadjoint orbits of G, that is, the orbits of the action of G on the dual space g* of its Lie algebra. The Kirillov character formula expresses the Harish-Chandra character of the representation as a certain integral over the corresponding orbit. Complex irreducible representations of compact Lie groups have been completely classified. They are always finite-dimensional, unitarizable (i.e. admit an invariant positive definite Hermitian form) and are parametrized by their highest weights, which are precisely the dominant integral weights for the group. If G is a compact semisimple Lie group with a Cartan subalgebra h, then its coadjoint orbits are closed and each of them intersects the positive Weyl chamber h*+ in a single point. An orbit is integral if this point belongs to the weight lattice of G.
The highest weight theory can be restated in the form of a bijection between the set of integral coadjoint orbits and the set of equivalence classes of irreducible unitary representations of G: the highest weight representation L(λ) with highest weight λ ∈ h*+ corresponds to the integral coadjoint orbit G·λ. The Kirillov character formula amounts to the character formula earlier proved by Harish-Chandra.
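For reference, the Kirillov character formula described above is commonly written in roughly the following form (a schematic statement in LaTeX; normalization conventions for the orbital measure and the Jacobian factor j vary between sources):

    % Kirillov character formula (schematic; conventions vary by source).
    % X ranges over a neighborhood of 0 in the Lie algebra, \pi is the
    % irreducible unitary representation attached to the coadjoint orbit
    % \mathcal{O}_\pi \subset \mathfrak{g}^*, and \mu is the canonical
    % (Liouville) measure on the orbit.
    j(X)^{1/2} \, \chi_\pi(\exp X)
      = \int_{\mathcal{O}_\pi} e^{\, i \langle \lambda, X \rangle} \, d\mu(\lambda),
    \qquad
    j(X) = \det\!\Bigl( \frac{1 - e^{-\operatorname{ad} X}}{\operatorname{ad} X} \Bigr).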
https://en.wikipedia.org/wiki/Kirillov_orbit_theory
Surveying or land surveying is the technique, profession, art, and science of determining the terrestrial two-dimensional or three-dimensional positions of points and the distances and angles between them. These points are usually on the surface of the Earth, and they are often used to establish maps and boundaries for ownership, locations such as the designated positions of structural components for construction or the surface location of subsurface features, or other purposes required by government or civil law, such as property sales.[1] A professional in land surveying is called a land surveyor. Surveyors work with elements of geodesy, geometry, trigonometry, regression analysis, physics, engineering, metrology, programming languages, and the law. They use equipment such as total stations, robotic total stations, theodolites, GNSS receivers, retroreflectors, 3D scanners, lidar sensors, radios, inclinometers, handheld tablets, optical and digital levels, subsurface locators, drones, GIS, and surveying software. Surveying has been an element in the development of the human environment since the beginning of recorded history. It is used in the planning and execution of most forms of construction. It is also used in transportation, communications, mapping, and the definition of legal boundaries for land ownership. It is an important tool for research in many other scientific disciplines. The International Federation of Surveyors defines the function of surveying as follows:[2] A surveyor is a professional person with the academic qualifications and technical expertise to conduct one, or more, of the following activities. Surveying has occurred since humans built the first large structures. In ancient Egypt, a rope stretcher would use simple geometry to re-establish boundaries after the annual floods of the Nile River. The almost perfect squareness and north–south orientation of the Great Pyramid of Giza, built c. 2700 BC, affirm the Egyptians' command of surveying. The groma instrument may have originated in Mesopotamia (early 1st millennium BC).[3] The prehistoric monument at Stonehenge (c. 2500 BC) was set out by prehistoric surveyors using peg-and-rope geometry.[4] The mathematician Liu Hui described ways of measuring distant objects in his work Haidao Suanjing or The Sea Island Mathematical Manual, published in 263 AD. The Romans recognized land surveying as a profession. They established the basic measurements under which the Roman Empire was divided, such as a tax register of conquered lands (300 AD).[5] Roman surveyors were known as Gromatici. In medieval Europe, beating the bounds maintained the boundaries of a village or parish. This was the practice of gathering a group of residents and walking around the parish or village to establish a communal memory of the boundaries. Young boys were included to ensure the memory lasted as long as possible. In England, William the Conqueror commissioned the Domesday Book in 1086. It recorded the names of all the landowners, the area of land they owned, the quality of the land, and specific information about the area's content and inhabitants. It did not include maps showing exact locations. Abel Foullon described a plane table in 1551, but it is thought that the instrument was in use earlier, as his description is of a developed instrument. Gunter's chain was introduced in 1620 by English mathematician Edmund Gunter. It enabled plots of land to be accurately surveyed and plotted for legal and commercial purposes.
Leonard Digges described a theodolite that measured horizontal angles in his book A geometric practice named Pantometria (1571). Joshua Habermel (Erasmus Habermehl) created a theodolite with a compass and tripod in 1576. Jonathan Sisson was the first to incorporate a telescope on a theodolite, in 1725.[6] In the 18th century, modern techniques and instruments for surveying began to be used. Jesse Ramsden introduced the first precision theodolite in 1787. It was an instrument for measuring angles in the horizontal and vertical planes. He created his great theodolite using an accurate dividing engine of his own design. Ramsden's theodolite represented a great step forward in the instrument's accuracy. William Gascoigne invented an instrument that used a telescope with an installed crosshair as a target device, in 1640. James Watt developed an optical meter for the measuring of distance in 1771; it measured the parallactic angle from which the distance to a point could be deduced. Dutch mathematician Willebrord Snellius (a.k.a. Snel van Royen) introduced the modern systematic use of triangulation. In 1615 he surveyed the distance from Alkmaar to Breda, approximately 72 miles (116 km). He underestimated this distance by 3.5%. The survey was a chain of quadrangles containing 33 triangles in all. Snell showed how planar formulae could be corrected to allow for the curvature of the Earth. He also showed how to resect, or calculate, the position of a point inside a triangle using the angles cast between the vertices at the unknown point. These could be measured more accurately than bearings of the vertices, which depended on a compass. His work established the idea of surveying a primary network of control points and locating subsidiary points inside the primary network later; a minimal sketch of the plane computation behind triangulation appears below. Between 1733 and 1740, Jacques Cassini and his son César undertook the first triangulation of France. They included a re-surveying of the meridian arc, leading to the publication in 1745 of the first map of France constructed on rigorous principles. By this time triangulation methods were well established for local map-making. It was only towards the end of the 18th century that detailed triangulation network surveys mapped whole countries. In 1784, a team from General William Roy's Ordnance Survey of Great Britain began the Principal Triangulation of Britain. The first Ramsden theodolite was built for this survey. The survey was finally completed in 1853. The Great Trigonometric Survey of India began in 1801. The Indian survey had an enormous scientific impact. It was responsible for one of the first accurate measurements of a section of an arc of longitude, and for measurements of the geodesic anomaly. It named and mapped Mount Everest and the other Himalayan peaks. Surveying became a professional occupation in high demand at the turn of the 19th century with the onset of the Industrial Revolution. The profession developed more accurate instruments to aid its work. Industrial infrastructure projects used surveyors to lay out canals, roads and rail. In the US, the Land Ordinance of 1785 created the Public Land Survey System. It formed the basis for dividing the western territories into sections to allow the sale of land. The PLSS divided states into township grids which were further divided into sections and fractions of sections.[1] Napoleon Bonaparte founded continental Europe's first cadastre in 1808. This gathered data on the number of parcels of land, their value, land usage, and names. This system soon spread around Europe.
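The "intersection" computation underlying plane triangulation, mentioned above, can be sketched in a few lines. In this illustrative Python sketch (local planar coordinates, no Earth-curvature correction, invented sample numbers), a point is located from a known baseline and the two angles measured at its ends, via the law of sines.

    import math

    def intersect_from_angles(ax, ay, bx, by, angle_a_deg, angle_b_deg):
        """Plane triangulation ('intersection'): locate point P from a known
        baseline A-B and the interior angles measured at A and B.
        Uses the law of sines; coordinates are local planar."""
        base = math.hypot(bx - ax, by - ay)
        alpha = math.radians(angle_a_deg)
        beta = math.radians(angle_b_deg)
        gamma = math.pi - alpha - beta                       # angle at P
        dist_ap = base * math.sin(beta) / math.sin(gamma)    # law of sines
        bearing_ab = math.atan2(by - ay, bx - ax)
        bearing_ap = bearing_ab + alpha                      # rotate ray at A
        return (ax + dist_ap * math.cos(bearing_ap),
                ay + dist_ap * math.sin(bearing_ap))

    # Baseline from (0, 0) to (1000, 0) m; 60 degrees measured at both ends
    # puts P at the apex of an equilateral triangle: (500, ~866).
    print(intersect_from_angles(0, 0, 1000, 0, 60, 60))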
Robert Torrens introduced the Torrens system in South Australia in 1858. Torrens intended to simplify land transactions and provide reliable titles via a centralized register of land. The Torrens system was adopted in several other nations of the English-speaking world. Surveying became increasingly important with the arrival of railroads in the 1800s. Surveying was necessary so that railroads could plan technologically and financially viable routes.

At the beginning of the century, surveyors had improved the older chains and ropes, but they still faced the problem of accurate measurement of long distances. Trevor Lloyd Wadley developed the Tellurometer during the 1950s. It measures long distances using two microwave transmitter/receivers.[7] During the late 1950s Geodimeter introduced electronic distance measurement (EDM) equipment.[8] EDM units use a multi-frequency phase shift of light waves to find a distance.[9] These instruments eliminated the need for days or weeks of chain measurement by measuring between points kilometers apart in one go. Advances in electronics allowed miniaturization of EDM. In the 1970s the first instruments combining angle and distance measurement appeared, becoming known as total stations. Manufacturers added more equipment by degrees, bringing improvements in accuracy and speed of measurement. Major advances include tilt compensators, data recorders and on-board calculation programs.

The first satellite positioning system was the US Navy TRANSIT system. The first successful launch took place in 1960. The system's main purpose was to provide position information to Polaris missile submarines. Surveyors found they could use field receivers to determine the location of a point. Sparse satellite cover and large equipment made observations laborious and inaccurate. The main use was establishing benchmarks in remote locations. The US Air Force launched the first prototype satellites of the Global Positioning System (GPS) in 1978. GPS used a larger constellation of satellites and improved signal transmission, thus improving accuracy. Early GPS observations required several hours of observations by a static receiver to reach survey accuracy requirements. Later improvements to both satellites and receivers allowed for Real Time Kinematic (RTK) surveying. RTK surveys provide high-accuracy measurements by using a fixed base station and a second roving antenna. The position of the roving antenna can be tracked.

The theodolite, total station and RTK GPS survey remain the primary methods in use. Remote sensing and satellite imagery continue to improve and become cheaper, allowing more commonplace use. Prominent new technologies include three-dimensional (3D) scanning and lidar-based topographical surveys. UAV technology along with photogrammetric image processing is also appearing.

The main surveying instruments in use around the world are the theodolite, measuring tape, total station, 3D scanners, GPS/GNSS, level and rod. Most instruments screw onto a tripod when in use. Tape measures are often used for measurement of smaller distances. 3D scanners and various forms of aerial imagery are also used. The theodolite is an instrument for the measurement of angles. It uses two separate circles, protractors or alidades to measure angles in the horizontal and the vertical plane. A telescope mounted on trunnions is aligned vertically with the target object. The whole upper section rotates for horizontal alignment. The vertical circle measures the angle that the telescope makes against the vertical, known as the zenith angle.
The horizontal circle uses an upper and lower plate. When beginning the survey, the surveyor points the instrument in a known direction (bearing), and clamps the lower plate in place. The instrument can then rotate to measure the bearing to other objects. If no bearing is known or direct angle measurement is wanted, the instrument can be set to zero during the initial sight. It will then read the angle between the initial object, the theodolite itself, and the item that the telescope aligns with. The gyrotheodolite is a form of theodolite that uses a gyroscope to orient itself in the absence of reference marks. It is used in underground applications.

The total station is a development of the theodolite with an electronic distance measurement device (EDM). A total station can be used for leveling when set to the horizontal plane. Since their introduction, total stations have shifted from optical-mechanical to fully electronic devices.[10] Modern top-of-the-line total stations no longer need a reflector or prism to return the light pulses used for distance measurements. They are fully robotic, and can even e-mail point data to a remote computer and connect to satellite positioning systems, such as the Global Positioning System. Real Time Kinematic GPS systems have significantly increased the speed of surveying; they are now horizontally accurate to within 1 cm ± 1 ppm in real time, while vertical accuracy is currently about half as good, to within 2 cm ± 2 ppm.[11]

GPS surveying differs from other GPS uses in the equipment and methods used. Static GPS uses two receivers placed in position for a considerable length of time. The long span of time lets the receiver compare measurements as the satellites orbit. The changes as the satellites orbit also provide the measurement network with well-conditioned geometry. This produces an accurate baseline that can be over 20 km long. RTK surveying uses one static antenna and one roving antenna. The static antenna tracks changes in the satellite positions and atmospheric conditions. The surveyor uses the roving antenna to measure the points needed for the survey. The two antennas use a radio link that allows the static antenna to send corrections to the roving antenna. The roving antenna then applies those corrections to the GPS signals it is receiving to calculate its own position. RTK surveying covers smaller distances than static methods, because conditions diverging further away from the base reduce accuracy.

Surveying instruments have characteristics that make them suitable for certain uses. Theodolites and levels are often used by constructors rather than surveyors in first-world countries. The constructor can perform simple survey tasks using a relatively cheap instrument. Total stations are workhorses for many professional surveyors because they are versatile and reliable in all conditions. The productivity improvements from GPS on large-scale surveys make it popular for major infrastructure or data-gathering projects. One-person robotic-guided total stations allow surveyors to measure without extra workers to aim the telescope or record data. A fast but expensive way to measure large areas is with a helicopter, using a GPS to record the location of the helicopter and a laser scanner to measure the ground. To increase precision, surveyors place beacons on the ground (about 20 km (12 mi) apart).
This method reaches precisions between 5–40 cm (depending on flight height).[12] Surveyors use ancillary equipment such as tripods and instrument stands; staves and beacons used for sighting purposes; PPE; vegetation clearing equipment; digging implements for finding survey markers buried over time; hammers for placement of markers in various surfaces and structures; and portable radios for communication over long lines of sight.

Land surveyors, construction professionals, geomatics engineers and civil engineers using total stations, GPS, 3D scanners, and other data collectors use land surveying software to increase efficiency, accuracy, and productivity. Land surveying software is a staple of contemporary land surveying.[13] Typically, much if not all of the drafting and some of the designing for plans and plats of the surveyed property is done by the surveyor, and nearly everyone working in the area of drafting today (2021) utilizes CAD software and hardware, both on PC and, increasingly, in newer-generation data collectors in the field.[14] Other computer platforms and tools commonly used today by surveyors are offered online by the U.S. Federal Government and other governments' survey agencies, such as the National Geodetic Survey and the CORS network, to get automated corrections and conversions for collected GPS data, and for the data coordinate systems themselves.

Surveyors determine the position of objects by measuring angles and distances. The factors that can affect the accuracy of their observations are also measured. They then use this data to create vectors, bearings, coordinates, elevations, areas, volumes, plans and maps. Measurements are often split into horizontal and vertical components to simplify calculation. GPS and astronomic measurements also need measurement of a time component.

Before EDM (electronic distance measurement) laser devices, distances were measured using a variety of means. In pre-colonial America, Natives would use the "bow shot" as a distance reference ("as far as an arrow can be slung out of a bow", or "flights of a Cherokee long bow").[15] Europeans used chains with links of a known length such as a Gunter's chain, or measuring tapes made of steel or invar. To measure horizontal distances, these chains or tapes were pulled taut to reduce sagging and slack. The distance had to be adjusted for heat expansion. Attempts to hold the measuring instrument level would also be made. When measuring up a slope, the surveyor might have to "break" (break chain) the measurement: use an increment less than the total length of the chain. Perambulators, or measuring wheels, were used to measure longer distances but not to a high level of accuracy. Tacheometry is the science of measuring distances by measuring the angle between two ends of an object with a known size; a worked sketch follows below. It was sometimes used before the invention of EDM where rough ground made chain measurement impractical.

Historically, horizontal angles were measured by using a compass to provide a magnetic bearing or azimuth. Later, more precise scribed discs improved angular resolution. Mounting telescopes with reticles atop the disc allowed more precise sighting (see theodolite). Levels and calibrated circles allowed the measurement of vertical angles. Verniers allowed measurement to a fraction of a degree, such as with a turn-of-the-century transit. The plane table provided a graphical method of recording and measuring angles, which reduced the amount of mathematics required.
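To make the tacheometric relation concrete, the sketch below computes the distance to a staff of known height from the angle it subtends, d = s / (2 tan(θ/2)). The function name and the example numbers are illustrative, not taken from any surveying standard.

```python
import math

def tacheometric_distance(target_height_m: float, subtended_angle_deg: float) -> float:
    """Distance to an object of known size from the angle it subtends:
    d = s / (2 * tan(theta / 2))."""
    theta = math.radians(subtended_angle_deg)
    return target_height_m / (2.0 * math.tan(theta / 2.0))

# A 2 m staff subtending 0.5 degrees is roughly 229 m away.
print(round(tacheometric_distance(2.0, 0.5), 1))
```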
In 1829 Francis Ronalds invented a reflecting instrument for recording angles graphically by modifying the octant.[16]

By observing the bearing from every vertex in a figure, a surveyor can measure around the figure. The final observation will be between the two points first observed, except with a 180° difference. This is called a close. If the first and last bearings are different, this shows the error in the survey, called the angular misclose. The surveyor can use this information to prove that the work meets the expected standards.

The simplest method for measuring height is with an altimeter using air pressure to find the height. When more precise measurements are needed, means like precise levels (also known as differential leveling) are used. In precise leveling, a series of measurements between two points is taken using an instrument and a measuring rod. Differences in height between the measurements are added and subtracted in a series to get the net difference in elevation between the two endpoints; a worked sketch follows below. With the Global Positioning System (GPS), elevation can be measured with satellite receivers. Usually, GPS is somewhat less accurate than traditional precise leveling, but may be similar over long distances.

When using an optical level, the endpoint may be out of the effective range of the instrument. There may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning is a term used when referring to moving the level to take an elevation shot from a different location. To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level, and the height difference is used to find the new elevation of the level gun, which is why this method is referred to as differential levelling. This is repeated until the series of measurements is completed. The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to 25 feet (7.6 m) high, allowing the level to be set much higher than the base of the rod.

The primary way of determining one's position on the Earth's surface when no known positions are nearby is by astronomic observations. Observations to the Sun, Moon and stars could all be made using navigational techniques. Once the instrument's position and bearing to a star is determined, the bearing can be transferred to a reference point on Earth. The point can then be used as a base for further observations. Survey-accurate astronomic positions were difficult to observe and calculate, and so tended to be a base off which many other measurements were made. Since the advent of the GPS system, astronomic observations are rare, as GPS allows adequate positions to be determined over most of the surface of the Earth. Few survey positions are derived from first principles. Instead, most survey points are measured relative to previously measured points. This forms a reference or control network where each point can be used by a surveyor to determine their own position when beginning a new survey.
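The differential-levelling bookkeeping described above reduces to simple arithmetic: the height of instrument is the known elevation plus the backsight reading, and the new elevation is the height of instrument minus the foresight reading. A minimal sketch with invented readings:

```python
def run_levels(start_elev: float, shots) -> float:
    """Differential levelling: each (backsight, foresight) pair is one setup.
    Illustrative only; units are whatever the rod readings are in."""
    elev = start_elev
    for backsight, foresight in shots:
        hi = elev + backsight      # height of instrument above the datum
        elev = hi - foresight      # elevation of the next turning point
    return elev

# Three setups; net rise = sum(BS) - sum(FS) = 3.65 - 3.23 = +0.42 m
print(run_levels(100.00, [(1.20, 0.95), (1.05, 1.10), (1.40, 1.18)]))  # 100.42
```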
Survey points are usually marked on the earth's surface by objects ranging from small nails driven into the ground to large beacons that can be seen from long distances. The surveyors can set up their instruments in this position and measure to nearby objects. Sometimes a tall, distinctive feature such as a steeple or radio aerial has its position calculated as a reference point that angles can be measured against.

Triangulation is a method of horizontal location favoured in the days before EDM and GPS measurement. It can determine distances, elevations and directions between distant objects. Since the early days of surveying, this was the primary method of determining accurate positions of objects for topographic maps of large areas. A surveyor first needs to know the horizontal distance between two of the objects, known as the baseline. Then the heights, distances and angular position of other objects can be derived, as long as they are visible from one of the original objects. High-accuracy transits or theodolites were used, and angle measurements were repeated for increased accuracy; a minimal coordinate calculation is sketched below. See also Triangulation in three dimensions.

Offsetting is an alternate method of determining the position of objects, and was often used to measure imprecise features such as riverbanks. The surveyor would mark and measure two known positions on the ground roughly parallel to the feature, and mark out a baseline between them. At regular intervals, a distance was measured at right angles from the first line to the feature. The measurements could then be plotted on a plan or map, and the points at the ends of the offset lines could be joined to show the feature.

Traversing is a common method of surveying smaller areas. The surveyors start from an old reference mark or known position and place a network of reference marks covering the survey area. They then measure bearings and distances between the reference marks, and to the target features. Most traverses form a loop pattern or link between two prior reference marks so the surveyor can check their measurements.

Many surveys do not calculate positions on the surface of the Earth, but instead measure the relative positions of objects. However, often the surveyed items need to be compared to outside data, such as boundary lines or objects from previous surveys. The oldest way of describing a position is via latitude and longitude, and often a height above sea level. As the surveying profession grew, it created Cartesian coordinate systems to simplify the mathematics for surveys over small parts of the Earth. The simplest coordinate systems assume that the Earth is flat and measure from an arbitrary point, known as a 'datum' (singular form of data). The coordinate system allows easy calculation of the distances and direction between objects over small areas. Over large areas, the Earth's curvature distorts the measurements. North is often defined as true north at the datum. For larger regions, it is necessary to model the shape of the Earth using an ellipsoid or a geoid. Many countries have created coordinate grids customized to lessen error in their area of the Earth.

A basic tenet of surveying is that no measurement is perfect, and that there will always be a small amount of error.[17] There are three classes of survey errors: gross errors (blunders), systematic errors, and random errors. Surveyors avoid these errors by calibrating their equipment, using consistent methods, and by good design of their reference network. Repeated measurements can be averaged and any outlier measurements discarded.
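As a sketch of the triangulation computation referred to above: place station A at the origin and station B at the far end of the baseline, measure the angle to the unknown point P at each end, and the law of sines fixes P. The function and the figures are illustrative only.

```python
import math

def triangulate(baseline_m: float, angle_a_deg: float, angle_b_deg: float):
    """Locate point P from a known baseline A-B and the interior angles
    measured at A and B. A is at (0, 0), B at (baseline_m, 0)."""
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    ap = baseline_m * math.sin(b) / math.sin(a + b)   # law of sines: range A-P
    return ap * math.cos(a), ap * math.sin(a)         # planar coordinates of P

print(triangulate(100.0, 60.0, 50.0))  # about (40.8, 70.6)
```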
Independent checks, like measuring a point from two or more locations or using two different methods, are used; errors can be detected by comparing the results of two or more measurements, thus utilizing redundancy. Once the surveyor has calculated the level of the errors in the work, it is adjusted. This is the process of distributing the error between all measurements. Each observation is weighted according to how much of the total error it is likely to have caused, and part of that error is allocated to it in a proportional way. The most common methods of adjustment are the Bowditch method, also known as the compass rule, and the principle of least squares; a sketch of the compass rule follows below. The surveyor must be able to distinguish between accuracy and precision.

In the United States, surveyors and civil engineers use units of feet wherein a survey foot is divided decimally into tenths and hundredths. Many deed descriptions containing distances are expressed using these units (e.g., 125.25 ft). On the subject of accuracy, surveyors are often held to a standard of one one-hundredth of a foot, about 1/8 inch. Calculation and mapping tolerances are much smaller, wherein achieving near-perfect closure is desired. Though tolerances vary from project to project, in the field and in day-to-day usage, precision beyond a hundredth of a foot is often impractical.

Local organisations or regulatory bodies class specializations of surveying in different ways. Broad groups are: Based on whether the true shape of the Earth is taken into account, surveying is broadly classified into two types. Plane surveying assumes the Earth is flat. Curvature and the spheroidal shape of the Earth are neglected. In this type of surveying all triangles formed by joining survey lines are considered as plane triangles. It is employed for small survey works where errors due to the Earth's shape are too small to matter.[18] In geodetic surveying the curvature of the Earth is taken into account while calculating reduced levels, angles, bearings and distances. This type of surveying is usually employed for large survey works. Survey works up to 100 square miles (260 square kilometres) are treated as plane, and beyond that are treated as geodetic.[19] In geodetic surveying necessary corrections are applied to reduced levels, bearings and other observations.[18]

The basic principles of surveying have changed little over the ages, but the tools used by surveyors have evolved. Engineering, especially civil engineering, often needs surveyors. Surveyors help determine the placement of roads, railways, reservoirs, dams, pipelines, retaining walls, bridges, and buildings. They establish the boundaries of legal descriptions and political divisions. They also provide advice and data for geographical information systems (GIS) that record land features and boundaries. Surveyors must have a thorough knowledge of algebra, basic calculus, geometry, and trigonometry. They must also know the laws that deal with surveys, real property, and contracts. Most jurisdictions recognize three different levels of qualification: Related professions include cartographers, hydrographers, geodesists, photogrammetrists, and topographers, as well as civil engineers and geomatics engineers. Licensing requirements vary with jurisdiction, and are commonly consistent within national borders.
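A minimal sketch of the compass (Bowditch) rule mentioned above, assuming a closed-loop planar traverse; the leg data are invented for illustration. The linear misclose of the loop is distributed over each leg in proportion to that leg's length.

```python
import math

def bowditch_adjust(legs):
    """Compass (Bowditch) rule for a closed loop traverse.
    legs: list of (delta_easting, delta_northing) per leg.
    Returns adjusted legs whose sums close to zero."""
    lengths = [math.hypot(de, dn) for de, dn in legs]
    total = sum(lengths)
    mis_e = sum(de for de, _ in legs)   # misclose in easting
    mis_n = sum(dn for _, dn in legs)   # misclose in northing
    return [(de - mis_e * L / total, dn - mis_n * L / total)
            for (de, dn), L in zip(legs, lengths)]

legs = [(100.0, 0.0), (0.0, 100.0), (-100.02, 0.0), (0.0, -99.98)]
adjusted = bowditch_adjust(legs)
print(sum(de for de, _ in adjusted), sum(dn for _, dn in adjusted))  # ~0, ~0
```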
Prospective surveyors usually have to receive a degree in surveying, followed by a detailed examination of their knowledge of surveying law and principles specific to the region they wish to practice in, and undergo a period of on-the-job training or portfolio building before they are awarded a license to practise. Licensed surveyors usually receive a post-nominal, which varies depending on where they qualified. The system has replaced older apprenticeship systems. A licensed land surveyor is generally required to sign and seal all plans. The state dictates the format, showing their name and registration number. In many jurisdictions, surveyors must mark their registration number on survey monuments when setting boundary corners. Monuments take the form of capped iron rods, concrete monuments, or nails with washers.

Most countries' governments regulate at least some forms of surveying. Their survey agencies establish regulations and standards. Standards control accuracy, surveying credentials, monumentation of boundaries and maintenance of geodetic networks. Many nations devolve this authority to regional entities or states/provinces. Cadastral surveys tend to be the most regulated because of the permanence of the work. Lot boundaries established by cadastral surveys may stand for hundreds of years without modification. Most jurisdictions also have a form of professional institution representing local surveyors. These institutes often endorse or license potential surveyors, as well as set and enforce ethical standards. The largest institution is the International Federation of Surveyors (abbreviated FIG, for French: Fédération Internationale des Géomètres). It represents the survey industry worldwide.

Most English-speaking countries consider building surveying a distinct profession, with its own professional associations and licensing requirements. A building surveyor can provide technical building advice on existing buildings, new buildings, design, and compliance with regulations such as planning and building control. A building surveyor normally acts on behalf of his or her client, ensuring that the client's vested interests remain protected. The Royal Institution of Chartered Surveyors (RICS) is a world-recognised governing body for those working within the built environment.[20]

One of the primary roles of the land surveyor is to determine the boundary of real property on the ground. The surveyor must determine where the adjoining landowners wish to put the boundary. The boundary is established in legal documents and plans prepared by attorneys, engineers, and land surveyors. The surveyor then puts monuments on the corners of the new boundary. They might also find or resurvey the corners of the property monumented by prior surveys. Cadastral land surveyors are licensed by governments. The cadastral survey branch of the Bureau of Land Management (BLM) conducts most cadastral surveys in the United States.[21] They consult with the Forest Service, National Park Service, Army Corps of Engineers, Bureau of Indian Affairs, Fish and Wildlife Service, Bureau of Reclamation, and others. The BLM used to be known as the United States General Land Office (GLO). In states organized per the Public Land Survey System (PLSS), surveyors must carry out BLM cadastral surveys under that system. Cadastral surveyors often have to work around changes to the earth that obliterate or damage boundary monuments. When this happens, they must consider evidence that is not recorded on the title deed.
This is known as extrinsic evidence.[22]

Quantity surveying is a profession that deals with the costs and contracts of construction projects. A quantity surveyor is an expert in estimating the costs of materials, labor, and time needed for a project, as well as managing the financial and legal aspects of the project. A quantity surveyor can work for either the client or the contractor, and can be involved in different stages of the project, from planning to completion. Quantity surveyors are also known as Chartered Surveyors in the UK.

Some U.S. Presidents were land surveyors. George Washington and Abraham Lincoln surveyed colonial or frontier territories early in their careers, prior to serving in office. Ferdinand Rudolph Hassler is considered the "father" of geodetic surveying in the U.S.[23] David T. Abercrombie practiced land surveying before starting an outfitter store of excursion goods. The business would later turn into the Abercrombie & Fitch lifestyle clothing store. Percy Harrison Fawcett was a British surveyor who explored the jungles of South America attempting to find the Lost City of Z. His biography and expeditions were recounted in the book The Lost City of Z and were later adapted for film. Inō Tadataka produced the first map of Japan using modern surveying techniques, starting in 1800 at the age of 55.
https://en.wikipedia.org/wiki/Surveying
In statistics and machine learning, the hierarchical Dirichlet process (HDP) is a nonparametric Bayesian approach to clustering grouped data.[1][2] It uses a Dirichlet process for each group of data, with the Dirichlet processes for all groups sharing a base distribution which is itself drawn from a Dirichlet process. This method allows groups to share statistical strength via sharing of clusters across groups. The base distribution being drawn from a Dirichlet process is important, because draws from a Dirichlet process are atomic probability measures, and the atoms will appear in all group-level Dirichlet processes. Since each atom corresponds to a cluster, clusters are shared across all groups. It was developed by Yee Whye Teh, Michael I. Jordan, Matthew J. Beal and David Blei and published in the Journal of the American Statistical Association in 2006,[1] as a formalization and generalization of the infinite hidden Markov model published in 2002.[3]

This model description follows the original paper.[1] The HDP is a model for grouped data: the data items come in multiple distinct groups. For example, in a topic model words are organized into documents, with each document formed by a bag (group) of words (data items). Indexing groups by j=1,…,J{\displaystyle j=1,\ldots ,J}, suppose each group consists of data items xj1,…,xjn{\displaystyle x_{j1},\ldots ,x_{jn}}.

The HDP is parameterized by a base distribution H{\displaystyle H} that governs the a priori distribution over data items, and a number of concentration parameters that govern the a priori number of clusters and amount of sharing across groups. The j{\displaystyle j}th group is associated with a random probability measure Gj{\displaystyle G_{j}} which has distribution given by a Dirichlet process: where αj{\displaystyle \alpha _{j}} is the concentration parameter associated with the group, and G0{\displaystyle G_{0}} is the base distribution shared across all groups. In turn, the common base distribution is Dirichlet process distributed: with concentration parameter α0{\displaystyle \alpha _{0}} and base distribution H{\displaystyle H}. Finally, to relate the Dirichlet processes back with the observed data, each data item xji{\displaystyle x_{ji}} is associated with a latent parameter θji{\displaystyle \theta _{ji}}: The first line states that each parameter has a prior distribution given by Gj{\displaystyle G_{j}}, while the second line states that each data item has a distribution F(θji){\displaystyle F(\theta _{ji})} parameterized by its associated parameter. The resulting model is called an HDP mixture model, with the HDP referring to the hierarchically linked set of Dirichlet processes, and the mixture model referring to the way the Dirichlet processes are related to the data items.

To understand how the HDP implements a clustering model, and how clusters become shared across groups, recall that draws from a Dirichlet process are atomic probability measures with probability one. This means that the common base distribution G0{\displaystyle G_{0}} has a form which can be written as: where there are an infinite number of atoms, θk∗,k=1,2,…{\displaystyle \theta _{k}^{*},k=1,2,\ldots }, assuming that the overall base distribution H{\displaystyle H} has infinite support. Each atom is associated with a mass π0k{\displaystyle \pi _{0k}}. The masses have to sum to one since G0{\displaystyle G_{0}} is a probability measure.
Since G0{\displaystyle G_{0}} is itself the base distribution for the group-specific Dirichlet processes, each Gj{\displaystyle G_{j}} will have atoms given by the atoms of G0{\displaystyle G_{0}}, and can itself be written in the form: Thus the set of atoms is shared across all groups, with each group having its own group-specific atom masses. Relating this representation back to the observed data, we see that each data item is described by a mixture model: where the atoms θk∗{\displaystyle \theta _{k}^{*}} play the role of the mixture component parameters, while the masses πjk{\displaystyle \pi _{jk}} play the role of the mixing proportions. In conclusion, each group of data is modeled using a mixture model, with mixture components shared across all groups but mixing proportions being group-specific: each group, having its own mixing proportions, is composed of a different combination of clusters.

The HDP mixture model is a natural nonparametric generalization of Latent Dirichlet allocation, where the number of topics can be unbounded and learnt from data.[1] Here each group is a document consisting of a bag of words, each cluster is a topic, and each document is a mixture of topics. The HDP is also a core component of the infinite hidden Markov model,[3] which is a nonparametric generalization of the hidden Markov model allowing the number of states to be unbounded and learnt from data.[1][4]

The HDP can be generalized in a number of directions. The Dirichlet processes can be replaced by Pitman-Yor processes and Gamma processes, resulting in the hierarchical Pitman-Yor process and hierarchical Gamma process. The hierarchy can be deeper, with multiple levels of groups arranged in a hierarchy. Such an arrangement has been exploited in the sequence memoizer, a Bayesian nonparametric model for sequences which has a multi-level hierarchy of Pitman-Yor processes. In addition, the Bayesian Multi-Domain Learning (BMDL) model derives domain-dependent latent representations of overdispersed count data based on hierarchical negative binomial factorization for accurate cancer subtyping even if the number of samples for a specific cancer type is small.[5]
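A minimal generative sketch of the construction just described, using truncated stick-breaking. The truncation level K, the choices H = N(0, 1) and F(θ) = N(θ, 0.1²), and the concentration values are all assumptions for illustration. Because the (truncated) G0 is atomic on K atoms, each group's weights over those same shared atoms follow a Dirichlet distribution with parameter vector αj·π0.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha: float, k: int) -> np.ndarray:
    """Truncated stick-breaking weights of a Dirichlet process draw."""
    betas = rng.beta(1.0, alpha, size=k)
    remainders = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remainders
    return weights / weights.sum()        # renormalize the truncated sticks

K = 50                                    # truncation level (assumption)
atoms = rng.normal(0.0, 1.0, size=K)      # theta*_k drawn from H = N(0, 1)
pi0 = stick_breaking(1.0, K)              # weights of G0 ~ DP(alpha0, H)

# Gj ~ DP(alpha_j, G0): G0 is atomic, so the group reuses the SAME atoms
# with its own weights, here pi_j ~ Dirichlet(alpha_j * pi0).
pi_j = rng.dirichlet(5.0 * pi0)

z = rng.choice(K, size=100, p=pi_j)       # latent cluster of each data item
x = rng.normal(atoms[z], 0.1)             # x_ji ~ F(theta_ji) = N(theta, 0.1^2)
```

Repeating the last three lines for several groups makes the sharing visible: every group draws from the same atom set, but with different mixing proportions.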
https://en.wikipedia.org/wiki/Hierarchical_Dirichlet_process
A ratio distribution (also known as a quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two (usually independent) random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution. An example is the Cauchy distribution (also called the normal ratio distribution), which comes about as the ratio of two normally distributed variables with zero mean. Two other distributions often used in test statistics are also ratio distributions: the t-distribution arises from a Gaussian random variable divided by an independent chi-distributed random variable, while the F-distribution originates from the ratio of two independent chi-squared distributed random variables. More general ratio distributions have been considered in the literature.[1][2][3][4][5][6][7][8][9] Often the ratio distributions are heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test. A method based on the median has been suggested as a "work-around".[10]

The ratio is one type of algebra for random variables: related to the ratio distribution are the product distribution, sum distribution and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's 1979 book The Algebra of Random Variables.[8]

The algebraic rules known for ordinary numbers do not apply to the algebra of random variables. For example, if a product is C = AB and a ratio is D = C/A, it does not necessarily mean that the distributions of D and B are the same. Indeed, a peculiar effect is seen for the Cauchy distribution: the product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) will give the same distribution.[8] This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions of zero means: consider two Cauchy random variables, C1{\displaystyle C_{1}} and C2{\displaystyle C_{2}}, each constructed from two Gaussian distributions, C1=G1/G2{\displaystyle C_{1}=G_{1}/G_{2}} and C2=G3/G4{\displaystyle C_{2}=G_{3}/G_{4}}; then where C3=G4/G3{\displaystyle C_{3}=G_{4}/G_{3}}. The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions.

A way of deriving the ratio distribution of Z=X/Y{\displaystyle Z=X/Y} from the joint distribution of the two other random variables X, Y, with joint pdf pX,Y(x,y){\displaystyle p_{X,Y}(x,y)}, is by integration of the following form:[3] If the two variables are independent then pX,Y(x,y)=pX(x)pY(y){\displaystyle p_{X,Y}(x,y)=p_{X}(x)p_{Y}(y)} and this becomes This may not be straightforward. By way of example, take the classical problem of the ratio of two standard Gaussian samples.
The joint pdf is Defining Z=X/Y{\displaystyle Z=X/Y} we have Using the known definite integral ∫0∞xexp⁡(−cx2)dx=12c{\textstyle \int _{0}^{\infty }\,x\,\exp \left(-cx^{2}\right)\,dx={\frac {1}{2c}}} we get which is the Cauchy distribution, or Student's t distribution with n = 1.

The Mellin transform has also been suggested for derivation of ratio distributions.[8]

In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution fx,y(x,y)=fx(x)fy(y){\displaystyle f_{x,y}(x,y)=f_{x}(x)f_{y}(y)} which has support in the positive quadrant x,y>0{\displaystyle x,y>0}, and we wish to find the pdf of the ratio R=X/Y{\displaystyle R=X/Y}. The hatched volume above the line y=x/R{\displaystyle y=x/R} represents the cumulative distribution of the function fx,y(x,y){\displaystyle f_{x,y}(x,y)} multiplied with the logical function X/Y≤R{\displaystyle X/Y\leq R}. The density is first integrated in horizontal strips; the horizontal strip at height y extends from x = 0 to x = Ry and has incremental probability fy(y)dy∫0Ryfx(x)dx{\textstyle f_{y}(y)dy\int _{0}^{Ry}f_{x}(x)\,dx}. Secondly, integrating the horizontal strips upward over all y yields the volume of probability above the line Finally, differentiate FR(R){\displaystyle F_{R}(R)} with respect to R{\displaystyle R} to get the pdf fR(R){\displaystyle f_{R}(R)}. Move the differentiation inside the integral: and since then As an example, find the pdf of the ratio R when We have thus Differentiation with respect to R yields the pdf of R.

From Mellin transform theory, for distributions existing only on the positive half-line x≥0{\displaystyle x\geq 0}, we have the product identity E⁡[(UV)p]=E⁡[Up]E⁡[Vp]{\displaystyle \operatorname {E} [(UV)^{p}]=\operatorname {E} [U^{p}]\;\operatorname {E} [V^{p}]} provided U,V{\displaystyle U,\;V} are independent. For the case of a ratio of samples like E⁡[(X/Y)p]{\displaystyle \operatorname {E} [(X/Y)^{p}]}, in order to make use of this identity it is necessary to use moments of the inverse distribution. Set 1/Y=Z{\displaystyle 1/Y=Z} such that E⁡[(XZ)p]=E⁡[Xp]E⁡[Y−p]{\displaystyle \operatorname {E} [(XZ)^{p}]=\operatorname {E} [X^{p}]\;\operatorname {E} [Y^{-p}]}. Thus, if the moments of Xp{\displaystyle X^{p}} and Y−p{\displaystyle Y^{-p}} can be determined separately, then the moments of X/Y{\displaystyle X/Y} can be found. The moments of Y−p{\displaystyle Y^{-p}} are determined from the inverse pdf of Y{\displaystyle Y}, often a tractable exercise. At simplest, E⁡[Y−p]=∫0∞y−pfy(y)dy{\textstyle \operatorname {E} [Y^{-p}]=\int _{0}^{\infty }y^{-p}f_{y}(y)\,dy}.
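The derivation above concludes that the ratio of two independent standard Gaussians is standard Cauchy. A quick Monte Carlo check of that claim (sample size and seed are arbitrary): the empirical quartiles of X/Y should match the Cauchy quartiles −1, 0, 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000) / rng.standard_normal(100_000)

for q in (0.25, 0.5, 0.75):
    print(np.quantile(z, q), stats.cauchy.ppf(q))  # empirical vs. exact
```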
To illustrate, let X{\displaystyle X} be sampled from a standard Gamma distribution, while Z=Y−1{\displaystyle Z=Y^{-1}} is sampled from an inverse Gamma distribution with parameter β{\displaystyle \beta } and has pdf Γ−1(β)z−(1+β)e−1/z{\displaystyle \;\Gamma ^{-1}(\beta )z^{-(1+\beta )}e^{-1/z}}. The moments of this pdf are Multiplying the corresponding moments gives Independently, it is known that the ratio of the two Gamma samples R=X/Y{\displaystyle R=X/Y} follows the Beta Prime distribution: Substituting B(α,β)=Γ(α)Γ(β)Γ(α+β){\displaystyle \mathrm {B} (\alpha ,\beta )={\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}} we have E⁡[Rp]=Γ(α+p)Γ(β−p)Γ(α+β)/Γ(α)Γ(β)Γ(α+β)=Γ(α+p)Γ(β−p)Γ(α)Γ(β){\displaystyle \operatorname {E} [R^{p}]={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha +\beta )}}{\Bigg /}{\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha )\Gamma (\beta )}}}, which is consistent with the product of moments above.

In the Product distribution section, and derived from Mellin transform theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have which, in terms of probability distributions, is equivalent to Note that E⁡(1/Y)≠1E⁡(Y){\displaystyle \operatorname {E} (1/Y)\neq {\frac {1}{\operatorname {E} (Y)}}}, i.e., ∫−∞∞y−1fy(y)dy≠1∫−∞∞yfy(y)dy{\displaystyle \int _{-\infty }^{\infty }y^{-1}f_{y}(y)\,dy\neq {\frac {1}{\int _{-\infty }^{\infty }yf_{y}(y)\,dy}}}. The variance of a ratio of independent variables is

When X and Y are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution. This can be derived by setting Z=X/Y=tan⁡θ{\displaystyle Z=X/Y=\tan \theta } and then showing that θ{\displaystyle \theta } has circular symmetry. For a bivariate uncorrelated Gaussian distribution we have If p(x,y){\displaystyle p(x,y)} is a function only of r then θ{\displaystyle \theta } is uniformly distributed on [0,2π]{\displaystyle [0,2\pi ]} with density 1/2π{\displaystyle 1/2\pi }, so the problem reduces to finding the probability distribution of Z under the mapping We have, by conservation of probability, and since dz/dθ=1/cos2⁡θ{\displaystyle dz/d\theta =1/\cos ^{2}\theta }, setting cos2⁡θ=11+(tan⁡θ)2=11+z2{\textstyle \cos ^{2}\theta ={\frac {1}{1+(\tan \theta )^{2}}}={\frac {1}{1+z^{2}}}} we get There is a spurious factor of 2 here.
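The moment identity E[R^p] = Γ(α+p)Γ(β−p)/(Γ(α)Γ(β)) for the Gamma ratio can be spot-checked numerically. The parameter values below are arbitrary, chosen with p < β so the moment exists.

```python
import numpy as np
from math import gamma as G

rng = np.random.default_rng(2)
a, b, p = 3.0, 5.0, 1.5                        # alpha, beta, moment order

r = rng.gamma(a, 1.0, 500_000) / rng.gamma(b, 1.0, 500_000)
mc = (r ** p).mean()                           # Monte Carlo E[R^p]
exact = G(a + p) * G(b - p) / (G(a) * G(b))    # beta-prime moment formula
print(mc, exact)                               # should agree to ~1%
```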
Actually, two values of θ{\displaystyle \theta } spaced by π{\displaystyle \pi } map onto the same value of z, the density is doubled, and the final result is

When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given below in the succinct form presented by David Hinkley.[6] The trigonometric method for a ratio does, however, extend to radial distributions like bivariate normals or a bivariate Student t in which the density depends only on radius r=x2+y2{\textstyle r={\sqrt {x^{2}+y^{2}}}}. It does not extend to the ratio of two independent Student t distributions, which give the Cauchy ratio shown in a section below for one degree of freedom.

In the absence of correlation (cor⁡(X,Y)=0){\displaystyle (\operatorname {cor} (X,Y)=0)}, the probability density function of the ratio Z = X/Y of two normal variables X = N(μX, σX2) and Y = N(μY, σY2) is given exactly by the following expression, derived in several sources:[6] where The above expression becomes more complicated when the variables X and Y are correlated. If μx=μy=0{\displaystyle \mu _{x}=\mu _{y}=0} but σX≠σY{\displaystyle \sigma _{X}\neq \sigma _{Y}} and ρ≠0{\displaystyle \rho \neq 0}, the more general Cauchy distribution is obtained, where ρ is the correlation coefficient between X and Y and The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.[9]

This was shown in Springer 1979 problem 4.28. A transformation to the log domain was suggested by Katz (1978) (see binomial section below). Let the ratio be Take logs to get Since loge⁡(1+δ)=δ−δ22+δ33+⋯{\displaystyle \log _{e}(1+\delta )=\delta -{\frac {\delta ^{2}}{2}}+{\frac {\delta ^{3}}{3}}+\cdots }, asymptotically Alternatively, Geary (1930) suggested that has approximately a standard Gaussian distribution.[1] This transformation has been called the Geary–Hinkley transformation;[7] the approximation is good if Y is unlikely to assume negative values, basically μy>3σy{\displaystyle \mu _{y}>3\sigma _{y}}.

This is developed by Dale (Springer 1979 problem 4.28) and Hinkley 1969. Geary showed how the correlated ratio z{\displaystyle z} could be transformed into a near-Gaussian form and developed an approximation for t{\displaystyle t} dependent on the probability of negative denominator values x+μx<0{\displaystyle x+\mu _{x}<0} being vanishingly small. Fieller's later correlated ratio analysis is exact, but care is needed when combining modern math packages with verbal conditions in the older literature. Pham-Gia has exhaustively discussed these methods. Hinkley's correlated results are exact, but it is shown below that the correlated ratio condition can also be transformed into an uncorrelated one, so only the simplified Hinkley equations above are required, not the full correlated ratio version.

Let the ratio be: in which x,y{\displaystyle x,y} are zero-mean correlated normal variables with variances σx2,σy2{\displaystyle \sigma _{x}^{2},\sigma _{y}^{2}} and X,Y{\displaystyle X,Y} have means μx,μy.{\displaystyle \mu _{x},\mu _{y}.} Write x′=x−ρyσx/σy{\displaystyle x'=x-\rho y\sigma _{x}/\sigma _{y}} such that x′,y{\displaystyle x',y} become uncorrelated and x′{\displaystyle x'} has standard deviation The ratio: is invariant under this transformation and retains the same pdf.
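A sketch of the Geary–Hinkley transformation in the uncorrelated case, under the condition μy > 3σy noted above; the parameter values are arbitrary. The transformed quantity t = (μy z − μx) / sqrt(σx² + σy² z²) should be close to standard normal (in the correlated case a cross term −2ρσxσy z is included under the square root).

```python
import numpy as np

rng = np.random.default_rng(3)
mx, my, sx, sy = 10.0, 20.0, 1.0, 2.0      # mu_y > 3*sigma_y holds here

x = rng.normal(mx, sx, 100_000)
y = rng.normal(my, sy, 100_000)
z = x / y

t = (my * z - mx) / np.sqrt(sx**2 + (sy * z) ** 2)   # Geary-Hinkley, rho = 0
print(t.mean(), t.std())                              # approximately 0 and 1
```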
The y{\displaystyle y} term in the numerator appears to be made separable by expanding: to get in which μx′=μx−ρμyσxσy{\textstyle \mu '_{x}=\mu _{x}-\rho \mu _{y}{\frac {\sigma _{x}}{\sigma _{y}}}}, and z has now become a ratio of uncorrelated non-central normal samples with an invariant z-offset (this is not formally proven, though it appears to have been used by Geary). Finally, to be explicit, the pdf of the ratio z{\displaystyle z} for correlated variables is found by inputting the modified parameters σx′,μx′,σy,μy{\displaystyle \sigma _{x}',\mu _{x}',\sigma _{y},\mu _{y}} and ρ′=0{\displaystyle \rho '=0} into the Hinkley equation above, which returns the pdf for the correlated ratio with a constant offset −ρσxσy{\displaystyle -\rho {\frac {\sigma _{x}}{\sigma _{y}}}} on z{\displaystyle z}.

The figures above show an example of a positively correlated ratio with σx=σy=1,μx=0,μy=0.5,ρ=0.975{\displaystyle \sigma _{x}=\sigma _{y}=1,\mu _{x}=0,\mu _{y}=0.5,\rho =0.975}, in which the shaded wedges represent the increment of area selected by a given ratio x/y∈[r,r+δ]{\displaystyle x/y\in [r,r+\delta ]}, which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is clear that for a ratio z=x/y≈1{\displaystyle z=x/y\approx 1} the wedge has almost bypassed the main distribution mass altogether, and this explains the local minimum in the theoretical pdf pZ(x/y){\displaystyle p_{Z}(x/y)}. Conversely, as x/y{\displaystyle x/y} moves either toward or away from one, the wedge spans more of the central mass, accumulating a higher probability.

The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al.[13] and has since been extended to the nonzero-mean and nonsymmetric case.[14] In the correlated zero-mean case, the joint distribution of x, y is where (⋅)H{\displaystyle (\cdot )^{H}} is an Hermitian transpose and The PDF of Z=X/Y{\displaystyle Z=X/Y} is found to be In the usual event that σx=σy{\displaystyle \sigma _{x}=\sigma _{y}} we get Further closed-form results for the CDF are also given. The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of ρ=0.7exp⁡(iπ/4){\displaystyle \rho =0.7\exp(i\pi /4)}. The pdf peak occurs at roughly the complex conjugate of a scaled-down ρ{\displaystyle \rho }.

The ratio of independent or correlated log-normals is log-normal. This follows because, if X1{\displaystyle X_{1}} and X2{\displaystyle X_{2}} are log-normally distributed, then ln⁡(X1){\displaystyle \ln(X_{1})} and ln⁡(X2){\displaystyle \ln(X_{2})} are normally distributed.
If they are independent, or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed.[note 1] This is important for many applications requiring the ratio of random variables that must be positive, where the joint distribution of X1{\displaystyle X_{1}} and X2{\displaystyle X_{2}} is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when Xi{\displaystyle X_{i}} is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.[15]

With two independent random variables following a uniform distribution, e.g., the ratio distribution becomes

If two independent random variables, X and Y, each follow a Cauchy distribution with median equal to zero and shape factor a{\displaystyle a}, then the ratio distribution for the random variable Z=X/Y{\displaystyle Z=X/Y} is[16] This distribution does not depend on a{\displaystyle a}, and the result stated by Springer[8] (p. 158, Question 4.6) is not correct. The ratio distribution is similar to but not the same as the product distribution of the random variable W=XY{\displaystyle W=XY}: More generally, if two independent random variables X and Y each follow a Cauchy distribution with median equal to zero and shape factors a{\displaystyle a} and b{\displaystyle b} respectively, then: The result for the ratio distribution can be obtained from the product distribution by replacing b{\displaystyle b} with 1b.{\displaystyle {\frac {1}{b}}.}

If X has a standard normal distribution and Y has a standard uniform distribution, then Z = X/Y has a distribution known as the slash distribution, with probability density function where φ(z) is the probability density function of the standard normal distribution.[17]

Let G be a normal(0,1) distribution, and Y and Z be chi-squared distributions with m and n degrees of freedom respectively, all independent, with fχ(x,k)=xk2−1e−x/22k/2Γ(k/2){\displaystyle f_{\chi }(x,k)={\frac {x^{{\frac {k}{2}}-1}e^{-x/2}}{2^{k/2}\Gamma (k/2)}}}. Then If V1∼χ′k12(λ){\displaystyle V_{1}\sim {\chi '}_{k_{1}}^{2}(\lambda )}, a noncentral chi-squared distribution, and V2∼χ′k22(0){\displaystyle V_{2}\sim {\chi '}_{k_{2}}^{2}(0)}, and V1{\displaystyle V_{1}} is independent of V2{\displaystyle V_{2}}, then mnFm,n′=β′(m2,n2)orFm,n′=β′(m2,n2,1,nm){\displaystyle {\frac {m}{n}}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}}){\text{ or }}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}},1,{\tfrac {n}{m}})} defines Fm,n′{\displaystyle F'_{m,n}}, Fisher's F density distribution, the PDF of the ratio of two chi-squares with m, n degrees of freedom.
The CDF of the Fisher density, found in F-tables, is defined in the beta prime distribution article. If we enter an F-test table with m = 3, n = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral

For gamma distributions U and V with arbitrary shape parameters α1 and α2 and their scale parameters both set to unity, that is, U∼Γ(α1,1),V∼Γ(α2,1){\displaystyle U\sim \Gamma (\alpha _{1},1),V\sim \Gamma (\alpha _{2},1)}, where Γ(x;α,1)=xα−1e−xΓ(α){\displaystyle \Gamma (x;\alpha ,1)={\frac {x^{\alpha -1}e^{-x}}{\Gamma (\alpha )}}}, then If U∼Γ(x;α,1){\displaystyle U\sim \Gamma (x;\alpha ,1)}, then θU∼Γ(x;α,θ)=xα−1e−xθθαΓ(α){\displaystyle \theta U\sim \Gamma (x;\alpha ,\theta )={\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}}. Note that here θ is a scale parameter, rather than a rate parameter. If U∼Γ(α1,θ1),V∼Γ(α2,θ2){\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),\;V\sim \Gamma (\alpha _{2},\theta _{2})}, then by rescaling the θ{\displaystyle \theta } parameter to unity we have Thus in which β′(α,β,p,q){\displaystyle \beta '(\alpha ,\beta ,p,q)} represents the generalised beta prime distribution. In the foregoing it is apparent that if X∼β′(α1,α2,1,1)≡β′(α1,α2){\displaystyle X\sim \beta '(\alpha _{1},\alpha _{2},1,1)\equiv \beta '(\alpha _{1},\alpha _{2})} then θX∼β′(α1,α2,1,θ){\displaystyle \theta X\sim \beta '(\alpha _{1},\alpha _{2},1,\theta )}. More explicitly, since if U∼Γ(α1,θ1),V∼Γ(α2,θ2){\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),V\sim \Gamma (\alpha _{2},\theta _{2})} then where

If X, Y are independent samples from the Rayleigh distribution fr(r)=(r/σ2)e−r2/2σ2,r≥0{\displaystyle f_{r}(r)=(r/\sigma ^{2})e^{-r^{2}/2\sigma ^{2}},\;\;r\geq 0}, the ratio Z = X/Y follows the distribution[18] and has cdf The Rayleigh distribution has scaling as its only parameter. The distribution of Z=αX/Y{\displaystyle Z=\alpha X/Y} follows and has cdf

The generalized gamma distribution is which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here a is a scale parameter, rather than a rate parameter; d is a shape parameter.
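The worked F-table example above (m = 3, n = 4, 5% in the right tail, critical value 6.59) is easy to reproduce, both from the F quantile function and from the defining ratio of scaled chi-squares:

```python
import numpy as np
from scipy import stats

# Quantile of the F distribution directly.
print(stats.f.ppf(0.95, 3, 4))             # about 6.59

# Monte Carlo cross-check: (chi2_m / m) / (chi2_n / n) ~ F(m, n).
rng = np.random.default_rng(4)
f = (rng.chisquare(3, 200_000) / 3) / (rng.chisquare(4, 200_000) / 4)
print(np.quantile(f, 0.95))                # again about 6.6
```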
In the ratios above, gamma samples U, V may have differing sample sizes α1,α2{\displaystyle \alpha _{1},\alpha _{2}} but must be drawn from the same distribution xα−1e−xθθαΓ(α){\displaystyle {\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}} with equal scaling θ{\displaystyle \theta }. In situations where U and V are differently scaled, a variables transformation allows the modified random ratio pdf to be determined.

Let X=UU+V=11+B{\displaystyle X={\frac {U}{U+V}}={\frac {1}{1+B}}} where U∼Γ(α1,θ),V∼Γ(α2,θ){\displaystyle U\sim \Gamma (\alpha _{1},\theta ),V\sim \Gamma (\alpha _{2},\theta )}, θ{\displaystyle \theta } arbitrary and, from above, X∼Beta(α1,α2),B=V/U∼Beta′(α2,α1){\displaystyle X\sim Beta(\alpha _{1},\alpha _{2}),B=V/U\sim Beta'(\alpha _{2},\alpha _{1})}. Rescale V arbitrarily, defining Y∼UU+φV=11+φB,0≤φ≤∞{\displaystyle Y\sim {\frac {U}{U+\varphi V}}={\frac {1}{1+\varphi B}},\;\;0\leq \varphi \leq \infty }. We have B=1−XX{\displaystyle B={\frac {1-X}{X}}} and substitution into Y gives Y=Xφ+(1−φ)X,dY/dX=φ(φ+(1−φ)X)2{\displaystyle Y={\frac {X}{\varphi +(1-\varphi )X}},\;dY/dX={\frac {\varphi }{(\varphi +(1-\varphi )X)^{2}}}}. Transforming X to Y gives fY(Y)=fX(X)|dY/dX|=β(X,α1,α2)φ/[φ+(1−φ)X]2{\displaystyle f_{Y}(Y)={\frac {f_{X}(X)}{|dY/dX|}}={\frac {\beta (X,\alpha _{1},\alpha _{2})}{\varphi /[\varphi +(1-\varphi )X]^{2}}}}. Noting X=φY1−(1−φ)Y{\displaystyle X={\frac {\varphi Y}{1-(1-\varphi )Y}}} we finally have

Thus, if U∼Γ(α1,θ1){\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1})} and V∼Γ(α2,θ2){\displaystyle V\sim \Gamma (\alpha _{2},\theta _{2})}, then Y=UU+V{\displaystyle Y={\frac {U}{U+V}}} is distributed as fY(Y,φ){\displaystyle f_{Y}(Y,\varphi )} with φ=θ2θ1{\displaystyle \varphi ={\frac {\theta _{2}}{\theta _{1}}}}. The distribution of Y is limited here to the interval [0, 1].
It can be generalized by scaling, such that if {\displaystyle Y\sim f_{Y}(Y,\varphi )} then {\displaystyle \Theta Y\sim f_{Y}(Y,\varphi ,\Theta )}, where

{\displaystyle f_{Y}(Y,\varphi ,\Theta )={\frac {\varphi /\Theta }{[1-(1-\varphi )Y/\Theta ]^{2}}}\beta \left({\frac {\varphi Y/\Theta }{1-(1-\varphi )Y/\Theta }},\alpha _{1},\alpha _{2}\right),\quad 0\leq Y\leq \Theta .}

Though not ratio distributions of two variables, the following identities for one variable are useful: if {\displaystyle X\sim \operatorname {Beta} (\alpha ,\beta )} then {\displaystyle {\frac {X}{1-X}}\sim \beta '(\alpha ,\beta )}, and if {\displaystyle Y\sim \beta '(\alpha ,\beta )} then {\displaystyle {\frac {1}{Y}}\sim \beta '(\beta ,\alpha )}; combining the latter two yields: if {\displaystyle X\sim \operatorname {Beta} (\alpha ,\beta )} then {\displaystyle {\frac {1-X}{X}}\sim \beta '(\beta ,\alpha )}.

Corollary: if {\displaystyle U\sim \Gamma (\alpha ,1),V\sim \Gamma (\beta ,1)} then {\displaystyle {\frac {U}{V}}\sim \beta '(\alpha ,\beta )} and, by the inversion identity above, {\displaystyle {\frac {V}{U}}\sim \beta '(\beta ,\alpha )}. Further results can be found in the Inverse distribution article.

The following result was derived by Katz et al.[20] Suppose {\displaystyle X\sim {\text{Binomial}}(n,p_{1})} and {\displaystyle Y\sim {\text{Binomial}}(m,p_{2})} and X, Y are independent. Let {\displaystyle T={\frac {X/n}{Y/m}}}. Then {\displaystyle \log(T)} is approximately normally distributed with mean {\displaystyle \log(p_{1}/p_{2})} and variance {\displaystyle {\frac {(1/p_{1})-1}{n}}+{\frac {(1/p_{2})-1}{m}}}. The binomial ratio distribution is of significance in clinical trials: if the distribution of T is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e. the probability of a false positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.[citation needed]

In the ratio of Poisson variables R = X/Y there is a problem: Y is zero with finite probability, so R is undefined. To counter this, consider the truncated, or censored, ratio R′ = X/Y′ where zero samples of Y are discounted. Moreover, in many medical-type surveys, there are systematic problems with the reliability of the zero samples of both X and Y, and it may be good practice to ignore the zero samples anyway. The probability of a null Poisson sample being {\displaystyle e^{-\lambda }}, the generic pdf of a left-truncated Poisson distribution is

{\displaystyle {\tilde {p}}(x;\lambda )={\frac {1}{1-e^{-\lambda }}}\,{\frac {e^{-\lambda }\lambda ^{x}}{x!}},\quad x=1,2,3,\ldots ,}

which sums to unity.
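A short sketch checking the left-truncated pmf against simulation (λ and the seed are arbitrary):

import numpy as np
from scipy import stats

lam = 1.7
k = np.arange(1, 200)
p_trunc = stats.poisson.pmf(k, lam) / (1.0 - np.exp(-lam))
print(p_trunc.sum())                              # ~1.0: the truncated pmf sums to unity

rng = np.random.default_rng(2)
x = rng.poisson(lam, size=100_000)
x = x[x > 0]                                      # discard the zero samples
print((x == 1).mean(), p_trunc[0])                # empirical vs. truncated pmf at k = 1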
Following Cohen,[21] for n independent trials, the multidimensional truncated pdf is

{\displaystyle p(x_{1},\ldots ,x_{n};\lambda )=\prod _{i=1}^{n}{\frac {1}{1-e^{-\lambda }}}\,{\frac {e^{-\lambda }\lambda ^{x_{i}}}{x_{i}!}},}

and the log likelihood becomes

{\displaystyle L=\ln(p)=-n\ln(1-e^{-\lambda })-n\lambda +\ln(\lambda )\sum _{i=1}^{n}x_{i}-\sum _{i=1}^{n}\ln(x_{i}!).}

On differentiation we get

{\displaystyle {\frac {dL}{d\lambda }}=-n\,{\frac {e^{-\lambda }}{1-e^{-\lambda }}}-n+{\frac {1}{\lambda }}\sum _{i=1}^{n}x_{i},}

and setting to zero gives the maximum likelihood estimate {\displaystyle {\hat {\lambda }}_{ML}}:

{\displaystyle {\frac {{\hat {\lambda }}_{ML}}{1-e^{-{\hat {\lambda }}_{ML}}}}={\bar {x}}.}

Note that as {\displaystyle {\hat {\lambda }}\to 0} then {\displaystyle {\bar {x}}\to 1}, so the truncated maximum likelihood λ estimate, though correct for both truncated and untruncated distributions, gives a truncated mean {\displaystyle {\bar {x}}} value which is highly biased relative to the untruncated one. Nevertheless it appears that {\displaystyle {\bar {x}}} is a sufficient statistic for {\displaystyle \lambda }, since {\displaystyle {\hat {\lambda }}_{ML}} depends on the data only through the sample mean {\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} in the previous equation, which is consistent with the methodology of the conventional Poisson distribution. Absent any closed-form solution, an approximate reversion for the truncated {\displaystyle \lambda } is valid over the whole range {\displaystyle 0\leq \lambda \leq \infty ;\;1\leq {\bar {x}}\leq \infty }, and compares with the non-truncated version, which is simply {\displaystyle {\hat {\lambda }}={\bar {x}}}.
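The MLE equation x̄ = λ/(1 − e^{−λ}) has no closed-form solution, but a one-dimensional root finder solves it immediately; a sketch with simulated data (the true λ and the bracketing interval are arbitrary choices):

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
lam_true = 1.3
x = rng.poisson(lam_true, size=50_000)
x = x[x > 0]                                      # left-truncate: discard zero counts
xbar = x.mean()

# Solve xbar = lam / (1 - exp(-lam)) for lam.
lam_hat = brentq(lambda lam: lam / (1.0 - np.exp(-lam)) - xbar, 1e-9, 50.0)
print(lam_true, lam_hat)                          # lam_hat recovers lam_true despite truncation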
Taking the ratio {\displaystyle R={\hat {\lambda }}_{X}/{\hat {\lambda }}_{Y}} is a valid operation even though {\displaystyle {\hat {\lambda }}_{X}} may use a non-truncated model while {\displaystyle {\hat {\lambda }}_{Y}} has a left-truncated one. The asymptotic large-n variance of {\displaystyle {\hat {\lambda }}} (and Cramér–Rao bound) is the inverse of the expected Fisher information,

{\displaystyle \operatorname {Var} ({\hat {\lambda }})=-\left(\mathbb {E} \left[{\frac {d^{2}L}{d\lambda ^{2}}}\right]\right)^{-1};}

substituting L, and then {\displaystyle {\bar {x}}} from the equation above, we get Cohen's variance estimate

{\displaystyle \operatorname {Var} ({\hat {\lambda }})={\frac {\lambda }{n}}\,{\frac {(1-e^{-\lambda })^{2}}{1-(\lambda +1)e^{-\lambda }}}.}

The variance of the point estimate of the mean {\displaystyle \lambda }, on the basis of n trials, decreases asymptotically to zero as n increases to infinity. For small {\displaystyle \lambda } it diverges from the truncated pdf variance quoted by Springael,[22] for example, for n samples in the left-truncated pdf shown at the top of this section. Cohen showed that the variance of the estimate relative to the variance of the pdf, {\displaystyle \operatorname {Var} ({\hat {\lambda }})/\operatorname {Var} (\lambda )}, ranges from 1 for large {\displaystyle \lambda } (100% efficient) up to 2 as {\displaystyle \lambda } approaches zero (50% efficient). These mean and variance parameter estimates, together with parallel estimates for X, can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is by Dietz and Bohning,[23] and there is a Zero-truncated Poisson distribution Wikipedia entry.

The double Lomax distribution is the ratio of two Laplace distributions.[24] Let X and Y be standard Laplace identically distributed random variables and let z = X/Y. Then the probability distribution of z is

{\displaystyle f(z)={\frac {1}{2(1+|z|)^{2}}}.}

Let the mean of X and Y be a. Then the standard double Lomax distribution is symmetric around a. This distribution has an infinite mean and variance. If Z has a standard double Lomax distribution, then 1/Z also has a standard double Lomax distribution. The standard Lomax distribution is unimodal and has heavier tails than the Laplace distribution. For 0 < a < 1, the a-th moment exists:

{\displaystyle \mathbb {E} [|Z|^{a}]=\Gamma (1+a)\,\Gamma (1-a),}

where Γ is the gamma function.
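The Laplace-ratio (double Lomax) density is likewise easy to confirm by simulation; for z ≥ 0 the cdf implied by f(z) = 1/(2(1 + |z|)²) is 1 − 1/(2(1 + z)) (sample size and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(4)
x = rng.laplace(0.0, 1.0, size=500_000)
y = rng.laplace(0.0, 1.0, size=500_000)
z = x / y

for t in (0.0, 1.0, 3.0):                         # empirical vs. analytic CDF
    print(t, (z <= t).mean(), 1.0 - 1.0 / (2.0 * (1.0 + t)))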
Ratio distributions also appear in multivariate analysis.[25] If the random matrices X and Y follow a Wishart distribution then the ratio of the determinants, {\displaystyle \varphi =|\mathbf {X} |/|\mathbf {Y} |}, is proportional to the product of independent F random variables. In the case where X and Y are from independent standardized Wishart distributions then the ratio has a Wilks' lambda distribution.

In relation to Wishart matrix distributions, if {\displaystyle S\sim W_{p}(\Sigma ,\nu +1)} is a sample Wishart matrix and the vector {\displaystyle V} is arbitrary but statistically independent, corollary 3.2.9 of Muirhead[26] states that the scalar ratio {\displaystyle V^{T}SV/(V^{T}\Sigma V)} is chi-squared distributed; the discrepancy of one in the sample numbers (ν degrees of freedom from ν + 1 samples) arises from estimation of the sample mean when forming the sample covariance, a consequence of Cochran's theorem. Similarly, the inverse-form ratio {\displaystyle V^{T}\Sigma ^{-1}V/(V^{T}S^{-1}V)} is also chi-squared distributed, which is Theorem 3.2.12 of Muirhead.[26]
https://en.wikipedia.org/wiki/Ratio_distribution#Poisson_and_truncated_Poisson_distributions
ACP-131[1] is the controlling publication for the listing of Q codes and Z codes. It is published and revised from time to time by the Combined Communications Electronics Board (CCEB) countries: Australia, New Zealand, Canada, United Kingdom, and United States. When the meanings of the codes contained in ACP-131 are translated into various languages, the codes provide a means of communicating between ships of various nations, such as during a NATO exercise, where there is no common language.

The original edition of ACP-131 was published by the U.S. military during the early years[when?] of radiotelegraphy for use by radio operators using Morse code on continuous wave (CW) telegraphy. It became especially useful, and even essential, to wireless radio operators on both military and civilian ships at sea before the development of advanced single-sideband telephony in the 1960s.

Radio communications, prior to the advent of landlines and satellites as communication paths and relays, were always subject to unpredictable fade-outs caused by weather conditions, practical limits on available emission power at the transmitter, the radio frequency of the transmission, the type of emission, the type of transmitting antenna, signal waveform characteristics, the modulation scheme in use, the sensitivity of the receiver, the presence or absence of atmospheric reflective layers above the earth (such as the E-layer and F-layers), the type of receiving antenna, the time of day, and numerous other factors. Because these factors often limited transmission time on certain frequencies to only several hours a day, or only several minutes, it was found necessary to keep each wireless transmission as short as possible and still get the message through. This was particularly true of CW radio circuits shared by a number of operators, with some waiting their turn to transmit.

As a result, an operator communicating by radiotelegraphy with another operator, wanting to know how the other operator was receiving the signal, could send out a message on his key in Morse code stating, "How are you receiving me?" Using ACP-131 codes, the question could be phrased simply "INT QRK", resulting in much more efficient use of circuit time. If the receiver hears the sender in a "loud and clear" condition, the response would be "QRK 5X5", all of which requires less circuit time and less "pounding" on the key by the sending operators. Should the receiving operator not understand the sending operator, the receiving operator would send "?" or the marginally shorter "INT", and the other operator would respond again, which is much easier than retransmitting "How are you receiving me?" If the receiving operator understood the sending operator, the receiving operator would say the word "ROGER" or "MESSAGE RECEIVED", or send the short form "R". "R" and "?" are similarly structured, but very easy to distinguish.

According to ACP-125(F), paragraphs 103 and 104, certain procedures apply in radio communication among Allied military units. Some assert that the use of Q codes and Z codes was not intended for use on voice circuits, where plain language was speedy and easily recognizable, especially when employing the character-recognition system in use at the time, such as ALPHA, BRAVO, CHARLIE, etc. However, in military communication the latter are still in use.[2] A typical simplex military voice exchange illustrates this. However, some voice operators, such as amateur radio operators, find it convenient or traditional to carry over some of the Q codes to voice ("phone") exchanges, such as "QSL", "QRK", "QTH", etc.
https://en.wikipedia.org/wiki/ACP_131
The following outline is provided as an overview of and topical guide to software development: Software development – development of a software product, which entails computer programming (the process of writing and maintaining the source code), and encompasses a planned and structured process from the conception of the desired software to its final manifestation.[1] Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.[2] While the information technology (IT) industry undergoes changes faster than any other field, most technical experts agree that one must have a community to consult, learn from, or share experiences with; a number of well-known software development organizations serve this purpose.
https://en.wikipedia.org/wiki/Outline_of_software_development
In information technology, benchmarking of computer security requires measurements for comparing both different IT systems and single IT systems in dedicated situations. The technical approach is a pre-defined catalog of security events (security incidents and vulnerabilities) together with corresponding formulas for the calculation of security indicators that are accepted and comprehensive.

Information security indicators have been standardized by the ETSI Industrial Specification Group (ISG) ISI. These indicators provide the basis to switch from a qualitative to a quantitative culture in IT security. Scope of measurements: external and internal threats (attempts and successes), users' deviant behaviours, nonconformities and/or vulnerabilities (software, configuration, behavioural, general security framework). In 2019 the ISG ISI was terminated, and the related standards will be maintained via the ETSI TC CYBER.

The list of Information Security Indicators belongs to the ISI framework, which consists of eight closely linked Work Items. Preliminary work on information security indicators was done by the French Club R2GS. The first public set of the ISI standards (the security indicators list and the event model) was released in April 2013.
https://en.wikipedia.org/wiki/Information_security_indicators
In linguistics, a nonce word (also called an occasionalism) is any word (lexeme), or any sequence of sounds or letters, created for a single occasion or utterance but not otherwise understood or recognized as a word in a given language.[1][2] Nonce words have a variety of functions and are most commonly used for humor, poetry, children's literature, linguistic experiments, psychological studies, and medical diagnoses, or they arise by accident. Some nonce words have a meaning at their inception or gradually acquire a fixed meaning inferred from context and use, but if they eventually become an established part of the language (neologisms), they stop being nonce words.[3] Other nonce words may be essentially meaningless and disposable (nonsense words), but they are useful for exactly that reason: the words wug and blicket, for instance, were invented by researchers to be used in child language testing.[4] Nonsense words often share orthographic and phonetic similarity with (meaningful) words,[5] as is the case with pseudowords, which make no sense but can still be pronounced in accordance with a language's phonotactic rules.[6] Such invented words are used by psychology and linguistics researchers and educators as tools to assess a learner's phonetic decoding ability, and the ability to infer the (hypothetical) meaning of a nonsense word from context is used to test for brain damage.[7] Proper names of real or fictional entities sometimes originate as nonce words.

The term is used because such a word is created "for the nonce" (i.e., for the time being, or this once),[2]: 455 coming from James Murray, editor of the Oxford English Dictionary.[8]: 25 Some analyses consider nonce words to fall broadly under neologisms, which are usually defined as words relatively recently accepted into a language's vocabulary;[9] other analyses do not.[3]

A variety of more specific concepts used by scholars falls under the umbrella of nonce words, of which overlap is also sometimes possible. Many types of other words can also be meaningful nonce words, as is true of most sniglets (words, often stunt words, explicitly coined in the absence of any relevant dictionary word). Other types of misinterpretations or humorous re-wordings can also be nonce words, as may occur in word play, such as certain examples of puns, spoonerisms, malapropisms, etc. Furthermore, meaningless nonce words can occur unintentionally or spontaneously, for instance through errors (typographical or otherwise) or through keysmashes.

Nonce words are sometimes used to study the development of language in children, because they allow researchers to test how children treat words of which they have no prior knowledge. This permits inferences about the default assumptions children make about new word meanings, syntactic structure, etc. "Wug" is among the earliest known nonce words used in language learning studies, and is best known for its use in Jean Berko's "Wug test", in which children were presented with a novel object, called a wug, and then shown multiple instances of the object and asked to complete a sentence that elicits a plural form, e.g., "This is a wug. Now there are two of them. There are two...?" The use of the plural form "wugs" by the children suggests that they have applied a plural rule to the form, and that this knowledge is not specific to prior experience with the word but applies to most English nouns, whether familiar or novel.[12] Nancy N.
Soja, Susan Carey, and Elizabeth Spelke used "blicket", "stad", "mell", "coodle", "doff", "tannin", "fitch", and "tulver" as nonce words when testing to see if children's knowledge of the distinction between non-solid substances and solid objects preceded or followed their knowledge of the distinction between mass nouns and count nouns.[13]

A poem by Seamus Heaney titled "Nonce Words" is included in his collection District and Circle.[14] David Crystal reported fluddle, which he understood to mean a water spillage between a puddle and a flood, invented by the speaker because no suitable word existed. Crystal speculated in 1995 that it might enter the English language if it proved popular.[2] Bouba and kiki are used to demonstrate a connection between the sound of a word and its meaning. Grok, coined by Robert Heinlein in Stranger in a Strange Land, is now used by many to mean "deeply and intuitively understand".[15] The poem "Jabberwocky" is full of nonce words, of which two, chortle and galumph, have entered into common use.[15] The novel Finnegans Wake used quark ("three quarks for Muster Mark") as a nonce word; the physicist Murray Gell-Mann adopted it as the name of a subatomic particle.[16]
https://en.wikipedia.org/wiki/Nonce_word
Cross-application scripting (CAS) is a vulnerability affecting desktop applications that do not check input exhaustively. CAS allows an attacker to insert data that modifies the behaviour of a particular desktop application, making it possible to extract data from inside the user's system. Attackers may gain the full privileges of the attacked application when exploiting CAS vulnerabilities; the attack is to some degree independent of the underlying operating system and hardware architecture. The technique was initially discovered by Emanuele Gentili, together with two other researchers (Alessandro Scoscia and Emanuele Acri) who had participated in the study of the technique and its implications, and was presented for the first time during the Security Summit 2010 in Milan.[1][2][3]

The format string attack is very similar in concept to this attack, and CAS could be considered a generalization of this attack method. Some aspects of this technique have previously been demonstrated in clickjacking techniques.

Like web interfaces, modern frameworks for the realization of graphical applications (in particular GTK+ and Qt) allow the use of tags inside their own widgets. If attackers gain the possibility to inject tags, they gain the ability to manipulate the appearance and behaviour of the application, as the sketch below illustrates. Exactly the same phenomenon was seen with the use of cross-site scripting (XSS) in web pages, which is why this kind of behavior has been named cross-application scripting (CAS). Typically desktop applications receive a considerable amount of input and support a large number of features, certainly more than any web interface. This makes it harder for the developer to check whether all the input a program might get from untrusted sources is filtered correctly.

If cross-application scripting is the application equivalent of XSS in web applications, then cross-application request forgery (CARF) is the equivalent of cross-site request forgery (CSRF) in desktop applications. In CARF the concepts of "link" and "protocol" inherited from the web have been extended, because they involve components of the graphical environment and, in some cases, of the operating system. Exploiting vulnerabilities amenable to CARF requires interaction from the user. This requirement is not particularly limiting, because the user can easily be led to execute certain actions if the graphical interface is altered the right way. Many misleading changes in the look of applications can be obtained with the use of CAS: a new kind of "phishing", whose dangerousness is amplified by a lack of tools to detect this kind of attack outside of websites or emails. In contrast to XSS techniques, which can manipulate and later execute commands in the user's browser, with CAS it is possible to talk directly to the operating system, and not just its graphical interface.
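As a concrete illustration of the tag-injection problem, GTK labels interpret Pango markup, so routing untrusted text into a markup-aware call lets an attacker restyle the interface. A minimal Python sketch of the escaping mitigation, assuming PyGObject is installed (the string content is invented for illustration):

from gi.repository import GLib

untrusted = '<span size="50000" foreground="red">FAKE SECURITY WARNING</span>'

# Passing `untrusted` straight to a markup-aware widget call such as
# Gtk.Label.set_markup() would render it as huge red text, a CAS-style spoof.
# Escaping first makes the tags display as literal, harmless characters:
safe = GLib.markup_escape_text(untrusted, -1)
print(safe)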
https://en.wikipedia.org/wiki/Cross-application_scripting
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing. The data is linearly transformed onto a new coordinate system such that the directions (principal components) capturing the largest variation in the data can be easily identified. The principal components of a collection of points in a real coordinate space are a sequence of {\displaystyle p} unit vectors, where the {\displaystyle i}-th vector is the direction of a line that best fits the data while being orthogonal to the first {\displaystyle i-1} vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions (i.e., principal components) constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points.[1]

Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science.[2]

When performing PCA, the first principal component of a set of {\displaystyle p} variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through {\displaystyle p} iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set.

The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The {\displaystyle i}-th principal component can be taken as a direction orthogonal to the first {\displaystyle i-1} principal components that maximizes the variance of the projected data. For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA).
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.[3][4][5][6] Robust and L1-norm-based variants of standard PCA have also been proposed.[7][8][9][6]

PCA was invented in 1901 by Karl Pearson,[10] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s.[11] Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (invented in the last quarter of the 19th century[12]), eigenvalue decomposition (EVD) of XᵀX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe's Principal Component Analysis),[13] the Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science (Lorenz, 1956), empirical eigenfunction decomposition (Sirovich, 1987), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics.

PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small.

To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues. Biplots and scree plots (degree of explained variance) are used to interpret findings of the PCA.

PCA is defined as an orthogonal linear transformation on a real inner product space that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[13]

Consider an {\displaystyle n\times p} data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature (say, the results from a particular sensor).
Mathematically, the transformation is defined by a set of size {\displaystyle l} of p-dimensional vectors of weights or coefficients {\displaystyle \mathbf {w} _{(k)}=(w_{1},\dots ,w_{p})_{(k)}} that map each row vector {\displaystyle \mathbf {x} _{(i)}=(x_{1},\dots ,x_{p})_{(i)}} of X to a new vector of principal component scores {\displaystyle \mathbf {t} _{(i)}=(t_{1},\dots ,t_{l})_{(i)}}, given by

{\displaystyle t_{k(i)}=\mathbf {x} _{(i)}\cdot \mathbf {w} _{(k)},\qquad i=1,\dots ,n,\quad k=1,\dots ,l,}

in such a way that the individual variables {\displaystyle t_{1},\dots ,t_{l}} of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where {\displaystyle l} is usually selected to be strictly less than {\displaystyle p} to reduce dimensionality). The above may equivalently be written in matrix form as

{\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W} ,}

where {\displaystyle {\mathbf {T} }_{ik}={t_{k}}_{(i)}}, {\displaystyle {\mathbf {X} }_{ij}={x_{j}}_{(i)}}, and {\displaystyle {\mathbf {W} }_{jk}={w_{j}}_{(k)}}.

In order to maximize variance, the first weight vector w(1) thus has to satisfy

{\displaystyle \mathbf {w} _{(1)}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\sum _{i}\left(\mathbf {x} _{(i)}\cdot \mathbf {w} \right)^{2}.}

Equivalently, writing this in matrix form gives

{\displaystyle \mathbf {w} _{(1)}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\Vert \mathbf {X} \mathbf {w} \Vert ^{2}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\mathbf {w} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {X} \mathbf {w} .}

Since w(1) has been defined to be a unit vector, it equivalently also satisfies

{\displaystyle \mathbf {w} _{(1)}=\arg \max \,{\frac {\mathbf {w} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {X} \mathbf {w} }{\mathbf {w} ^{\mathsf {T}}\mathbf {w} }}.}

The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XᵀX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector. With w(1) found, the first principal component of a data vector x(i) can then be given as a score t1(i) = x(i) ⋅ w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i) ⋅ w(1)} w(1).

The k-th component can be found by subtracting the first k − 1 principal components from X:

{\displaystyle \mathbf {\hat {X}} _{k}=\mathbf {X} -\sum _{s=1}^{k-1}\mathbf {X} \mathbf {w} _{(s)}\mathbf {w} _{(s)}^{\mathsf {T}},}

and then finding the weight vector which extracts the maximum variance from this new data matrix,

{\displaystyle \mathbf {w} _{(k)}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\Vert \mathbf {\hat {X}} _{k}\mathbf {w} \Vert ^{2}.}

It turns out that this gives the remaining eigenvectors of XᵀX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XᵀX. The k-th principal component of a data vector x(i) can therefore be given as a score tk(i) = x(i) ⋅ w(k) in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x(i) ⋅ w(k)} w(k), where w(k) is the k-th eigenvector of XᵀX. The full principal components decomposition of X can therefore be given as

{\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W} ,}

where W is a p-by-p matrix of weights whose columns are the eigenvectors of XᵀX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in Factor analysis.

XᵀX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset Xᵀ.[13]: 30–31 The sample covariance Q between two of the different principal components over the dataset is given by

{\displaystyle Q(\mathrm {PC} _{(j)},\mathrm {PC} _{(k)})\propto (\mathbf {X} \mathbf {w} _{(j)})^{\mathsf {T}}(\mathbf {X} \mathbf {w} _{(k)})=\mathbf {w} _{(j)}^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {X} \mathbf {w} _{(k)}=\mathbf {w} _{(j)}^{\mathsf {T}}\lambda _{(k)}\mathbf {w} _{(k)}=\lambda _{(k)}\mathbf {w} _{(j)}^{\mathsf {T}}\mathbf {w} _{(k)},}

where the eigenvalue property of w(k) has been used to move from line 2 to line 3. However, eigenvectors w(j) and w(k) corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.
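A compact NumPy sketch of this construction, computing the weights as eigenvectors of XᵀX for a centered data matrix and confirming that the resulting scores are uncorrelated (the synthetic data and mixing matrix are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [1.5, 1.0, 0.0],
                                          [0.0, 0.5, 0.3]])
X = X - X.mean(axis=0)                 # column-wise zero empirical mean

# Eigenvectors of X^T X, sorted by decreasing eigenvalue, give the weights W.
eigvals, W = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]
eigvals, W = eigvals[order], W[:, order]

T = X @ W                              # principal component scores
print(np.round(T.T @ T, 6))            # diagonal: scores of different PCs are uncorrelated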
Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix. In matrix form, the empirical covariance matrix for the original variables can be written

{\displaystyle \mathbf {Q} \propto \mathbf {X} ^{\mathsf {T}}\mathbf {X} =\mathbf {W} \mathbf {\Lambda } \mathbf {W} ^{\mathsf {T}}.}

The empirical covariance matrix between the principal components becomes

{\displaystyle \mathbf {W} ^{\mathsf {T}}\mathbf {Q} \mathbf {W} \propto \mathbf {W} ^{\mathsf {T}}\mathbf {W} \,\mathbf {\Lambda } \,\mathbf {W} ^{\mathsf {T}}\mathbf {W} =\mathbf {\Lambda } ,}

where Λ is the diagonal matrix of eigenvalues λ(k) of XᵀX. λ(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ(k) = Σi tk²(i) = Σi (x(i) ⋅ w(k))². The transformation T = XW maps a data vector x(i) from an original space of p variables to a new space of p variables which are uncorrelated over the dataset.

To non-dimensionalize the centered data, let Xc represent the characteristic values of the data vectors Xi for a dataset of size n. These norms are used to transform the original space of variables x, y to a new space of uncorrelated variables p, q (given Yc with the same meaning), such that {\displaystyle p_{i}={\frac {X_{i}}{X_{c}}},\quad q_{i}={\frac {Y_{i}}{Y_{c}}}}; and the new variables are linearly related as {\displaystyle q=\alpha p}. To find the optimal linear relationship, we minimize the total squared perpendicular reconstruction error {\displaystyle E(\alpha )={\frac {1}{1+\alpha ^{2}}}\sum _{i=1}^{n}(\alpha p_{i}-q_{i})^{2}}; setting the derivative of the error function to zero {\displaystyle (E'(\alpha )=0)} yields {\displaystyle \alpha ={\frac {1}{2}}\left(-\lambda \pm {\sqrt {\lambda ^{2}+4}}\right)}, where {\displaystyle \lambda ={\frac {p\cdot p-q\cdot q}{p\cdot q}}}.[14]

Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting L = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.

Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.

Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less: the first few components achieve a higher signal-to-noise ratio.
PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain.[15]

The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X,

{\displaystyle \mathbf {X} =\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}.}

Here Σ is an n-by-p rectangular diagonal matrix of positive numbers σ(k), called the singular values of X; U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X; and W is a p-by-p matrix whose columns are orthogonal unit vectors of length p, called the right singular vectors of X. In terms of this factorization, the matrix XᵀX can be written

{\displaystyle \mathbf {X} ^{\mathsf {T}}\mathbf {X} =\mathbf {W} \mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}=\mathbf {W} \mathbf {\hat {\Sigma }} ^{2}\mathbf {W} ^{\mathsf {T}},}

where {\displaystyle \mathbf {\hat {\Sigma }} } is the square diagonal matrix with the singular values of X and the excess zeros chopped off, which satisfies {\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } }. Comparison with the eigenvector factorization of XᵀX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XᵀX, while the singular values σ(k) of {\displaystyle \mathbf {X} } are equal to the square-root of the eigenvalues λ(k) of XᵀX.

Using the singular value decomposition, the score matrix T can be written

{\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W} =\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}\mathbf {W} =\mathbf {U} \mathbf {\Sigma } ,}

so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T. Efficient algorithms exist to calculate the SVD of X without having to form the matrix XᵀX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix,[16] unless only a handful of components are required.

As with the eigen-decomposition, a truncated n × L score matrix TL can be obtained by considering only the first L largest singular values and their singular vectors:

{\displaystyle \mathbf {T} _{L}=\mathbf {U} _{L}\mathbf {\Sigma } _{L}=\mathbf {X} \mathbf {W} _{L}.}

The truncation of a matrix M or T using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of rank L to the original matrix, in the sense of the difference between the two having the smallest possible Frobenius norm, a result known as the Eckart–Young theorem [1936].

Theorem (Optimal k-dimensional fit). Let P be an n×m data matrix whose columns have been mean-centered and scaled, and let {\displaystyle P=U\,\Sigma \,V^{T}} be its singular value decomposition. Then the best rank-k approximation to P in the least-squares (Frobenius-norm) sense is {\displaystyle P_{k}=U_{k}\,\Sigma _{k}\,V_{k}^{T}}, where Vk consists of the first k columns of V. Moreover, the relative residual variance is {\displaystyle R(k)={\frac {\sum _{j=k+1}^{m}\sigma _{j}^{2}}{\sum _{j=1}^{m}\sigma _{j}^{2}}}}.[14]

The singular values (in Σ) are the square roots of the eigenvalues of the matrix XᵀX. Each eigenvalue is proportional to the portion of the "variance" (more correctly, of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean.
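The equivalence between the SVD route and the eigendecomposition route is easy to verify numerically; a short NumPy sketch on arbitrary centered data:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X = X - X.mean(axis=0)

U, s, Wt = np.linalg.svd(X, full_matrices=False)
T = U * s                              # scores: left singular vectors scaled by singular values
print(np.allclose(T, X @ Wt.T))        # True: T = XW with W the right singular vectors

eigvals = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
print(np.allclose(s**2, eigvals))      # True: squared singular values = eigenvalues of X^T X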
PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements if compared, for example, and when applicable, to the discrete cosine transform, and in particular to the DCT-II, which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA.

PCA is sensitive to the scaling of the variables. Mathematically this sensitivity comes from the way a rescaling changes the sample covariance matrix that PCA diagonalises.[14] Let {\displaystyle \mathbf {X} _{\text{c}}} be the centered data matrix (n rows, p columns) and define the covariance {\displaystyle \Sigma ={\frac {1}{n}}\,\mathbf {X} _{\text{c}}^{\mathsf {T}}\mathbf {X} _{\text{c}}.} If the {\displaystyle j}-th variable is multiplied by a factor {\displaystyle \alpha _{j}} we obtain {\displaystyle \mathbf {X} _{\text{c}}^{(\alpha )}=\mathbf {X} _{\text{c}}D,\qquad D=\operatorname {diag} (\alpha _{1},\ldots ,\alpha _{p}).} Hence the new covariance is {\displaystyle \Sigma ^{(\alpha )}=D^{\mathsf {T}}\,\Sigma \,D.} Because the eigenvalues and eigenvectors of {\displaystyle \Sigma ^{(\alpha )}} differ from those of {\displaystyle \Sigma } whenever {\displaystyle D} is not a multiple of the identity, the principal axes rotate toward any column whose variance has been inflated, exactly as the two-variable example below illustrates.

If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space, where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
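A tiny sketch of this scaling sensitivity, reproducing the two-variable example above (sample size, noise level and seed are arbitrary; eigenvector signs are arbitrary as well):

import numpy as np

rng = np.random.default_rng(2)
t = rng.normal(size=500)
X = np.column_stack([t, t + 0.1 * rng.normal(size=500)])   # similar variance, highly correlated
X = X - X.mean(axis=0)

def first_axis(M):
    # Leading eigenvector (up to sign) of the covariance matrix of M.
    vals, vecs = np.linalg.eigh(np.cov(M.T))
    return vecs[:, np.argmax(vals)]

print(first_axis(X))                             # ~(0.71, 0.71): the 45-degree direction
print(first_axis(X * np.array([100.0, 1.0])))    # ~(1, 0): axis rotates to the inflated column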
Classical PCA assumes the cloud of points has already been translated so its centroid is at the origin.[14] Write each observation as {\displaystyle \mathbf {q} _{i}={\boldsymbol {\mu }}+\mathbf {z} _{i},\qquad {\boldsymbol {\mu }}={\tfrac {1}{n}}\sum _{i=1}^{n}\mathbf {q} _{i}.} Without subtracting {\displaystyle {\boldsymbol {\mu }}} we are in effect diagonalising {\displaystyle \Sigma _{\text{unc}}\;=\;n\,{\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}\;+\;{\tfrac {1}{n}}\,\mathbf {Z} ^{\mathsf {T}}\mathbf {Z} ,} where {\displaystyle \mathbf {Z} } is the centered matrix. The rank-one term {\displaystyle n\,{\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}} often dominates, forcing the leading eigenvector to point almost exactly toward the mean and obliterating any structure in the centred part {\displaystyle \mathbf {Z} }. After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance.

Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: Pearson Product-Moment Correlation). Also see the article by Kromrey & Foster-Johnson (1998) on "Mean-centering in Moderated Regression: Much Ado About Nothing". Since covariances are correlations of normalized variables (Z- or standard-scores), a PCA based on the correlation matrix of X is equal to a PCA based on the covariance matrix of Z, the standardized version of X.

PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability.[17] However, it has been used to quantify the distance between two or more classes by calculating the center of mass for each class in principal component space and reporting the Euclidean distance between the centers of mass of two or more classes.[18] The linear discriminant analysis is an alternative which is optimized for class separability.

Some properties of PCA include:[13][page needed] The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible, they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements of x, and they may also be useful in regression, in selecting a subset of variables from x, and in outlier detection. Before we look at its usage, we first look at the diagonal elements. Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements of x into decreasing contributions due to each PC, but we can also decompose the whole covariance matrix into contributions {\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'} from each PC. Although not strictly decreasing, the elements of {\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'} will tend to become smaller as {\displaystyle k} increases, as {\displaystyle \lambda _{k}} is nonincreasing for increasing {\displaystyle k}, whereas the elements of {\displaystyle \alpha _{k}} tend to stay about the same size because of the normalization constraints: {\displaystyle \alpha _{k}'\alpha _{k}=1,k=1,\dots ,p}. As noted above, the results of PCA depend on the scaling of the variables.
This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unit variance.[19]

The applicability of PCA as described above is limited by certain (tacit) assumptions[20] made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see kernel PCA).

Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes,[21] and forward modeling has to be performed to recover the true magnitude of the signals.[22] As an alternative method, non-negative matrix factorization, focusing only on the non-negative elements in the matrices, is well-suited for astrophysical observations.[23][24][25] See more at the relation between PCA and non-negative matrix factorization.

PCA is at a disadvantage if the data have not been standardized before applying the algorithm. PCA transforms the original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were; they are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss.[26]

PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress.[27][page needed] Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results: "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted".[28] The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled".[28]

Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models. Under the assumption that

{\displaystyle \mathbf {x} =\mathbf {s} +\mathbf {n} ,}

that is, that the data vector {\displaystyle \mathbf {x} } is the sum of the desired information-bearing signal {\displaystyle \mathbf {s} } and a noise signal {\displaystyle \mathbf {n} }, one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point-of-view.
In particular, Linsker showed that if {\displaystyle \mathbf {s} } is Gaussian and {\displaystyle \mathbf {n} } is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes the mutual information {\displaystyle I(\mathbf {y} ;\mathbf {s} )} between the desired information {\displaystyle \mathbf {s} } and the dimensionality-reduced output {\displaystyle \mathbf {y} =\mathbf {W} _{L}^{T}\mathbf {x} }.[29]

If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vector {\displaystyle \mathbf {n} } are iid), but the information-bearing signal {\displaystyle \mathbf {s} } is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on the information loss.[30][31] The optimality of PCA is also preserved if the noise {\displaystyle \mathbf {n} } is iid and at least more Gaussian (in terms of the Kullback–Leibler divergence) than the information-bearing signal {\displaystyle \mathbf {s} }.[32] In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noise {\displaystyle \mathbf {n} } becomes dependent.

The following is a detailed description of PCA using the covariance method[33] as opposed to the correlation method.[34] The goal is to transform a given data set X of dimension p to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X: {\displaystyle \mathbf {Y} =\mathbb {KLT} \{\mathbf {X} \}}.

Suppose you have data comprising a set of observations of p variables, and you want to reduce the data so that each observation can be described with only L variables, L < p. Suppose further that the data are arranged as a set of n data vectors {\displaystyle \mathbf {x} _{1}\ldots \mathbf {x} _{n}} with each {\displaystyle \mathbf {x} _{i}} representing a single grouped observation of the p variables. Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data.[35] Hence we proceed by centering the data: subtract the empirical mean of each column, forming the matrix B whose i-th row is {\displaystyle \mathbf {x} _{i}} minus the empirical mean vector. In some applications, each variable (column of B) may also be scaled to have a variance equal to 1 (see Z-score).[36] This step affects the calculated principal components, but makes them independent of the units used to measure the different variables.

Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We want to find {\displaystyle (\ast )} a d × d orthonormal transformation matrix P so that PX has a diagonal covariance matrix (that is, PX is a random vector with all its distinct components pairwise uncorrelated). A quick computation assuming {\displaystyle P} were unitary yields

{\displaystyle \operatorname {cov} (PX)=P\,\operatorname {cov} (X)\,P^{\mathsf {T}}.}

Hence {\displaystyle (\ast )} holds if and only if {\displaystyle \operatorname {cov} (X)} were diagonalisable by {\displaystyle P}. This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix.

In practical implementations, especially with high-dimensional data (large p), the naive covariance method is rarely used because it is not efficient due to the high computational and memory costs of explicitly determining the covariance matrix.
The covariance-free approach avoids the np² operations of explicitly calculating and storing the covariance matrix XᵀX, instead utilizing one of the matrix-free methods, for example, based on the function evaluating the product Xᵀ(X r) at the cost of 2np operations. One way to compute the first principal component efficiently[41] is the covariance-free power iteration sketched in the code after this passage, for a data matrix X with zero mean, without ever computing its covariance matrix. This power iteration algorithm simply calculates the vector Xᵀ(X r), normalizes, and places the result back in r. The eigenvalue is approximated by rᵀ(XᵀX) r, which is the Rayleigh quotient on the unit vector r for the covariance matrix XᵀX. If the largest singular value is well separated from the next largest one, the vector r gets close to the first principal component of X within the number of iterations c, which is small relative to p, at the total cost 2cnp. The power iteration convergence can be accelerated without noticeably sacrificing the small cost per iteration using more advanced matrix-free methods, such as the Lanczos algorithm or the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.

Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single vectors r and s with block vectors, matrices R and S. Every column of R approximates one of the leading principal components, while all columns are iterated simultaneously. The main calculation is evaluation of the product Xᵀ(X R). Implemented, for example, in LOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-level BLAS matrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique.

Non-linear iterative partial least squares (NIPALS) is a variant of the classical power iteration with matrix deflation by subtraction, implemented for computing the first few components in a principal component or partial least squares analysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example, genomics, metabolomics), it is usually only necessary to compute the first few PCs. The NIPALS algorithm updates iterative approximations to the leading scores and loadings t1 and r1ᵀ by the power iteration, multiplying on every iteration by X on the left and on the right; that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations applied to XᵀX, based on the function evaluating the product Xᵀ(X r) = ((X r)ᵀ X)ᵀ.
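A minimal NumPy sketch of the covariance-free power iteration described above (the fixed iteration count is an arbitrary cap; a production solver would test convergence explicitly):

import numpy as np

def first_component(X, iters=500):
    # Leading principal component of a zero-mean data matrix X by power
    # iteration, never forming the covariance matrix X^T X explicitly.
    r = np.random.default_rng(0).normal(size=X.shape[1])
    r /= np.linalg.norm(r)
    for _ in range(iters):
        s = X.T @ (X @ r)              # matrix-free product (X^T X) r, cost ~2np
        r = s / np.linalg.norm(s)
    eigenvalue = r @ (X.T @ (X @ r))   # Rayleigh quotient r^T (X^T X) r
    return r, eigenvalue

# Check against a direct eigendecomposition (the sign of r is arbitrary):
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
X = X - X.mean(axis=0)
r, lam = first_component(X)
vals, vecs = np.linalg.eigh(X.T @ X)
print(np.isclose(lam, vals[-1]), np.allclose(np.abs(r), np.abs(vecs[:, -1])))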
The matrix deflation by subtraction is performed by subtracting the outer product t1r1ᵀ from X, leaving the deflated residual matrix used to calculate the subsequent leading PCs.[42] For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine-precision round-off errors accumulated in each iteration and matrix deflation by subtraction.[43] A Gram–Schmidt re-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality.[44] NIPALS' reliance on single-vector multiplications cannot take advantage of high-level BLAS and results in slow convergence for clustered leading singular values; both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.

In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms.[45]

In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available, such as the species to which each plant belongs. These data were subjected to PCA for the quantitative variables. When analyzing the results, it is natural to connect the principal components to the qualitative variable species. For this, corresponding results are produced; this is what is called introducing a qualitative variable as a supplementary element. This procedure is detailed in Husson, Lê & Pagès (2009) and Pagès (2013). Few software packages offer this option in an "automatic" way. This is the case of SPAD, which historically, following the work of Ludovic Lebart, was the first to propose this option, and of the R package FactoMineR.

The earliest application of factor analysis was in locating and measuring components of human intelligence. It was believed that intelligence had various uncorrelated components such as spatial intelligence, verbal intelligence, induction, deduction etc., and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as the Intelligence Quotient (IQ). The pioneering statistical psychologist Spearman actually developed factor analysis in 1904 for his two-factor theory of intelligence, adding a formal technique to the science of psychometrics. In 1924 Thurstone looked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work.[46]

In 1949, Shevky and Williams introduced the theory of factorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s.[47] Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics, which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'; cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables.
An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms. One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate.[48]

About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage, taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis.[49]

PCA can be used as a formal method for the development of indexes. As an alternative, confirmatory composite analysis has been proposed to develop and assess indexes.[50]

The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city.

The country-level Human Development Index (HDI) from UNDP, which has been published since 1990 and is very extensively used in development studies,[51] has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA.

In 1978 Cavalli-Sforza and others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events. Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations.[52]

PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measures of standard error in PCA is also an impediment to more consistent usage.
In August 2022, the molecular biologist Eran Elhaik published a theoretical paper in Scientific Reports analyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking and circular reasoning.[53]

Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics.[54]

PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'.[55]

Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type.[56]

In quantitative finance, PCA is used[57] in financial risk management, and has been applied to other problems such as portfolio optimization.

PCA is commonly used in problems involving fixed income securities and portfolios, and interest rate derivatives. Valuations here depend on the entire yield curve, comprising numerous highly correlated instruments, and PCA is used to define a set of components or factors that explain rate movements,[58] thereby facilitating the modelling. One common risk management application is calculating value at risk, VaR, applying PCA to the Monte Carlo simulation.[59] Here, for each simulation-sample, the components are stressed, and rates, and in turn option values, are then reconstructed; with VaR calculated, finally, over the entire run. PCA is also used in hedging exposure to interest rate risk, given partial durations and other sensitivities.[58] Under both, typically the first three principal components of the system are of interest (representing "shift", "twist", and "curvature"). These principal components are derived from an eigen-decomposition of the covariance matrix of yields at predefined maturities;[60] and where the variance of each component is its eigenvalue (and as the components are orthogonal, no correlation need be incorporated in subsequent modelling).

For equity, an optimal portfolio is one where the expected return is maximized for a given level of risk, or alternatively, where risk is minimized for a given return; see the Markowitz model for discussion. Thus, one approach is to reduce portfolio risk, where allocation strategies are applied to the "principal portfolios" instead of the underlying stocks.
A second approach is to enhance portfolio return, using the principal components to select companies' stocks with upside potential.[61][62] PCA has also been used to understand relationships[57] between international equity markets, and within markets between groups of companies in industries or sectors.

PCA may also be applied to stress testing,[63] essentially an analysis of a bank's ability to endure a hypothetical adverse economic scenario. Its utility is in "distilling the information contained in [several] macroeconomic variables into a more manageable data set, which can then [be used] for analysis."[63] Here, the resulting factors are linked to e.g. interest rates – based on the largest elements of the factor's eigenvector – and it is then observed how a "shock" to each of the factors affects the implied assets of each of the banks.

A variant of principal components analysis is used in neuroscience to identify the specific properties of a stimulus that increase a neuron's probability of generating an action potential.[64][65] This technique is known as spike-triggered covariance analysis. In a typical application an experimenter presents a white noise process as a stimulus (usually either as a sensory input to a test subject, or as a current injected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates the covariance matrix of the spike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. The eigenvectors of the difference between the spike-triggered covariance matrix and the covariance matrix of the prior stimulus ensemble (the set of all stimuli, defined over the same length time window) then indicate the directions in the space of stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought-after relevant stimulus features.

In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential. Spike sorting is an important procedure because extracellular recording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performs clustering analysis to associate specific action potentials with individual neurons.

PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is, order parameters, during phase transitions in the brain.[66]

Correspondence analysis (CA) was developed by Jean-Paul Benzécri[67] and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied to contingency tables.
CA decomposes the chi-squared statistic associated with this table into orthogonal factors.[68] Because CA is a descriptive technique, it can be applied to tables whether or not the chi-squared statistic is appropriate. Several variants of CA are available, including detrended correspondence analysis and canonical correspondence analysis. One special extension is multiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.[69]

Principal component analysis creates variables that are linear combinations of the original variables. The new variables have the property that they are all orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors.

Factor analysis is similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance".[70] In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations.[13]: 158 Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) or causal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.[71]

It has been asserted that the relaxed solution of k-means clustering, specified by the cluster indicators, is given by the principal components, and that the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace.[72][73] However, that PCA is a useful relaxation of k-means clustering was not a new result,[74] and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[75]

Non-negative matrix factorization (NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy,[23][24][25] in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore construct a non-orthogonal basis.
In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data.[21] For NMF, its components are ranked based only on the empirical FRV curves.[25] The residual fractional eigenvalue plots, that is, 1 - \sum_{i=1}^{k}\lambda_{i} / \sum_{j=1}^{n}\lambda_{j} as a function of component number k, given a total of n components, for PCA have a flat plateau, where no data is captured to remove the quasi-static noise, and then the curves drop quickly as an indication of over-fitting (random noise).[21] The FRV curves for NMF decrease continuously[25] when the NMF components are constructed sequentially,[24] indicating the continuous capturing of quasi-static noise; they then converge to higher levels than PCA,[25] indicating that NMF is less prone to over-fitting.

It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane.

The iconography of correlations, on the contrary, which is not a projection on a system of axes, does not have these drawbacks. We can therefore keep all the variables. The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation). A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable".

A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables. Sparse PCA overcomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding a sparsity constraint on the input variables. Several approaches have been proposed. The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper.[82]

Most of the modern methods for nonlinear dimensionality reduction find their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points. Trevor Hastie expanded on this concept by proposing principal curves[86] as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for data approximation followed by projecting the points onto it. See also the elastic map algorithm and principal geodesic analysis.[87] Another popular generalization is kernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel; a small sketch follows below.
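As an illustration of that last point, here is a small NumPy sketch of kernel PCA with an RBF kernel (the function name and kernel choice are illustrative; this is the standard centred-Gram-matrix construction, not code from the original):

```python
import numpy as np

def kernel_pca(X, k=2, gamma=1.0):
    """Project the rows of X onto the top-k kernel principal components,
    i.e. PCA carried out implicitly in the RBF kernel's feature space."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * sq_dists)           # Gram matrix of the RBF kernel
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    K_centered = J @ K @ J                  # double centring: mean removal in feature space
    w, V = np.linalg.eigh(K_centered)       # eigenvalues in ascending order
    w, V = w[::-1][:k], V[:, ::-1][:, :k]   # keep the k largest
    return V * np.sqrt(np.maximum(w, 0))    # coordinates of the projected points
```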
In multilinear subspace learning,[88][89][90] PCA is generalized to multilinear PCA (MPCA), which extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA.

N-way principal component analysis may be performed with models such as Tucker decomposition, PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS.

While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive to outliers in the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify.[91] For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA[92] based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy. Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA).[7][5]

Robust principal component analysis (RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations.[93][94][95]

Independent component analysis (ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations.

In network component analysis, given a matrix E, one tries to decompose it into two matrices such that E = AP. A key difference from techniques such as PCA and ICA is that some of the entries of A are constrained to be 0. Here P is termed the regulatory layer. While in general such a decomposition can have multiple solutions, it has been shown that under certain conditions the decomposition is unique up to multiplication by a scalar.[96]

Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components, variation between groups and within groups, and it maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups.[97] In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA). A DAPC can be realized in R using the package Adegenet. (More info: adegenet on the web.)

Directional component analysis (DCA) is a method used in the atmospheric sciences for analysing multivariate datasets.[98] Like PCA, it allows for dimension reduction, improved visualization and improved interpretability of large data-sets. Also like PCA, it is based on a covariance matrix derived from the input dataset. The difference between PCA and DCA is that DCA additionally requires the input of a vector direction, referred to as the impact.
Whereas PCA maximises explained variance, DCA maximises probability density given impact. The motivation for DCA is to find components of a multivariate dataset that are both likely (measured using probability density) and important (measured using the impact). DCA has been used to find the most likely and most serious heat-wave patterns in weather prediction ensembles,[99] and the most likely and most impactful changes in rainfall due to climate change.[100]
https://en.wikipedia.org/wiki/Principal_components_analysis
Visual inspection is a common method of quality control, data acquisition, and data analysis. Visual inspection, as used in the maintenance of facilities, means inspection of equipment and structures using any or all of the raw human senses, such as vision, hearing, touch and smell, and/or any non-specialized inspection equipment. Inspections requiring ultrasonic, X-ray, infrared, or similar equipment are not typically regarded as visual inspection, as these inspection methodologies require specialized equipment, training and certification.

A study of the visual inspection of small integrated circuits found that the modal duration of eye fixations of trained inspectors was about 200 ms. The most accurate inspectors made the fewest eye fixations and were the fastest. When the same chip was judged more than once by an individual inspector the consistency of judgment was very high, whereas the consistency between inspectors was somewhat less. Variation by a factor of six in inspection speed led to variation of less than a factor of two in inspection accuracy. Visual inspection had a false positive rate of 2% and a false negative rate of 23%.[1]

To do an eyeball search is to look for something specific in a mass of code or data with one's own eyes, as opposed to using some sort of pattern matching software like grep or any other automated search tool. Also known as vgrep or ogrep, i.e., "visual/optical grep".[2] See also vdiff.

"Eyeballing" is the most common and readily available method of initial data assessment.[3] This method is effective for identifying patterns or anomalies in complex data but can be time-intensive and error-prone.[4] Although low-cost and adaptable, its efficiency and ROI often fall short compared to automated tools, which offer greater scalability and consistency.[5] However, switching from manual visual inspection to automated methods depends on the task's complexity, scale, and the balance between upfront costs and long-term efficiency.[6]

Experts in pattern recognition maintain that the "eyeball" technique is still the most effective procedure for searching arbitrary, possibly unknown structures in data.[7]

In the military, applying this sort of search to real-world terrain is often referred to as "using the Mark I Eyeball" device (pronounced as Mark One Eyeball), the U.S. military having adopted the term in the 1950s.[8] The term is an allusion to military nomenclature, "Mark I" being the first version of a military vehicle or weapon.
https://en.wikipedia.org/wiki/Visual_inspection
In mathematics, a plane curve is a curve in a plane that may be a Euclidean plane, an affine plane or a projective plane. The most frequently studied cases are smooth plane curves (including piecewise smooth plane curves), and algebraic plane curves. Plane curves also include the Jordan curves (curves that enclose a region of the plane but need not be smooth) and the graphs of continuous functions.

A plane curve can often be represented in Cartesian coordinates by an implicit equation of the form f(x, y) = 0 for some specific function f. If this equation can be solved explicitly for y or x – that is, rewritten as y = g(x) or x = h(y) for specific functions g or h – then this provides an alternative, explicit, form of the representation. A plane curve can also often be represented in Cartesian coordinates by a parametric equation of the form (x, y) = (x(t), y(t)) for specific functions x(t) and y(t).

Plane curves can sometimes also be represented in alternative coordinate systems, such as polar coordinates that express the location of each point in terms of an angle and a distance from the origin.

A smooth plane curve is a curve in a real Euclidean plane R^2 and is a one-dimensional smooth manifold. This means that a smooth plane curve is a plane curve which "locally looks like a line", in the sense that near every point, it may be mapped to a line by a smooth function. Equivalently, a smooth plane curve can be given locally by an equation f(x, y) = 0, where f : R^2 → R is a smooth function, and the partial derivatives ∂f/∂x and ∂f/∂y are never both 0 at a point of the curve.

An algebraic plane curve is a curve in an affine or projective plane given by one polynomial equation f(x, y) = 0 (or F(x, y, z) = 0, where F is a homogeneous polynomial, in the projective case).

Algebraic curves have been studied extensively since the 18th century. Every algebraic plane curve has a degree, the degree of the defining equation, which is equal, in the case of an algebraically closed field, to the number of intersections of the curve with a line in general position. For example, the circle given by the equation x^2 + y^2 = 1 has degree 2.

The non-singular plane algebraic curves of degree 2 are called conic sections, and their projective completions are all isomorphic to the projective completion of the circle x^2 + y^2 = 1 (that is, the projective curve of equation x^2 + y^2 − z^2 = 0). The plane curves of degree 3 are called cubic plane curves and, if they are non-singular, elliptic curves. Those of degree 4 are called quartic plane curves.

Numerous examples of plane curves are shown in Gallery of curves and listed at List of curves. The algebraic curves of degree 1 or 2 are shown here (an algebraic curve of degree less than 3 is always contained in a plane):
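For instance, the unit circle, a non-singular curve of degree 2, admits all of the representations discussed above (a worked example added for illustration):

```latex
\begin{align*}
\text{implicit:}   &\quad x^2 + y^2 - 1 = 0 \\
\text{explicit:}   &\quad y = \pm\sqrt{1 - x^2}, \qquad -1 \le x \le 1 \\
\text{parametric:} &\quad (x, y) = (\cos t,\ \sin t), \qquad 0 \le t < 2\pi \\
\text{polar:}      &\quad r = 1
\end{align*}
```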
https://en.wikipedia.org/wiki/Plane_curve
A ring counter is a type of counter composed of flip-flops connected into a shift register, with the output of the last flip-flop fed to the input of the first, making a "circular" or "ring" structure.

There are two types of ring counters: the straight ring counter, which circulates a single one-hot bit around the ring, and the twisted ring counter (Johnson counter), in which the inverted output of the last flip-flop is fed back to the first.

Ring counters are often used in hardware design (e.g. ASIC and FPGA design) to create finite-state machines. A binary counter would require an adder circuit, which is substantially more complex than a ring counter and has higher propagation delay as the number of bits increases, whereas the propagation delay of a ring counter will be nearly constant regardless of the number of bits.

The straight and twisted forms have different properties, and relative advantages and disadvantages. A general disadvantage of ring counters is that they are lower-density codes than normal binary encodings of state numbers. A binary counter can represent 2^N states, where N is the number of bits in the code, whereas a straight ring counter can represent only N states and a Johnson counter can represent only 2N states. This may be an important consideration in hardware implementations where registers are more expensive than combinational logic.

Johnson counters are sometimes favored, because they offer twice as many count states from the same number of shift registers, and because they are able to self-initialize from the all-zeros state, without requiring the first count bit to be injected externally at start-up. The Johnson counter generates a code in which adjacent states differ by only one bit (that is, have a Hamming distance of 1), as in a Gray code, which can be useful if the bit pattern is going to be asynchronously sampled.[1]

When a fully decoded or one-hot representation of the counter state is needed, as in some sequence controllers, the straight ring counter is preferred. The one-hot property means that the set of codes are separated by a minimum Hamming distance of 2,[2] so any single-bit error is detectable (as is any error pattern other than turning on one bit and turning off one bit).

Sometimes bidirectional shift registers are used (using multiplexors to take the input for each flip-flop from its left or right neighbor), so that bidirectional or up–down ring counters can be made.[3]

The straight ring counter has the logical structure shown here:

Instead of the reset line setting up the initial one-hot pattern, the straight ring is sometimes made self-initializing by the use of a distributed feedback gate across all of the outputs except the last, so that a 1 is presented at the input when there is no 1 in any stage but the last.[4]

A Johnson counter, named for Robert Royce Johnson, is a ring with an inversion; here is a 4-bit Johnson counter:

Note the small bubble indicating inversion of the Q signal from the last shift register before feeding back to the first D input, making this a Johnson counter.

Before the days of digital computing, digital counters were used to measure rates of random events such as radioactive decays emitting alpha and beta particles. Fast "pre-scaling" counters reduced the rate of random events to more manageable and more regular rates. Five-state ring counters were used along with divide-by-two scalers to make decade (power-of-ten) scalers before 1940, such as those developed by C. E. Wynn-Williams.[5]

Early ring counters used only one active element (vacuum tube, valve, or transistor) per stage, relying on global feedback rather than local bistable flip-flops to suppress states other than the one-hot states, for example in the 1941 patent filing of Robert E. Mumma of the National Cash Register Company.[6]
Wilcox P. Overbeck invented a version using multiple anodes in a single vacuum tube.[7][8] In recognition of his work, ring counters are sometimes referred to as "Overbeck rings"[9][10] (and after 2006, sometimes as "Overbeck counters", since Wikipedia used that term from 2006 to 2018).

The ENIAC used decimal arithmetic based on 10-state one-hot ring counters. The works of Mumma at NCR and Overbeck at MIT were among the prior art works examined by the patent office that invalidated the patents of J. Presper Eckert and John Mauchly for the ENIAC technology.[11]

By the 1950s, ring counters with a two-tube or twin-triode flip-flop per stage were appearing.[12] Robert Royce Johnson developed a number of different shift-register-based counters with the aim of making different numbers of states with the simplest possible feedback logic, and filed for a patent in 1953.[13] The Johnson counter is the simplest of these.

Early applications of ring counters were as frequency prescalers (e.g. for Geiger counters and such instruments),[5] as counters to count pattern occurrences in cryptanalysis (e.g. in the Heath Robinson codebreaking machine and the Colossus computer),[14] and as accumulator counter elements for decimal arithmetic in computers and calculators, using either bi-quinary (as in the Colossus) or ten-state one-hot (as in the ENIAC) representations.

Straight ring counters generate fully decoded one-hot codes that are often used to enable a specific action in each state of a cyclic control cycle. One-hot codes can also be decoded from a Johnson counter, using one gate for each state.[15][nb 1]

Besides being an efficient alternative way to generate one-hot codes and frequency pre-scalers, a Johnson counter is also a simple way to encode a cycle of an even number of states that can be asynchronously sampled without glitching, since only one bit changes at a time, as in a Gray code.[16] Early computer mice used up–down (bidirectional) 2-bit Johnson or Gray encodings to indicate motion in each of the two dimensions, though in mice those codes were not usually generated by rings of flip-flops (but instead by electro-mechanical or optical quadrature encoders).[17] A 2-bit Johnson code and a 2-bit Gray code are identical, while for 3 or more bits Gray and Johnson codes are different. In the 5-bit case, the code is the same as the Libaw–Craig code for decimal digits.[18][19][20][21][22][23][24][25]

A walking ring counter, also called a Johnson counter, and a few resistors can produce a glitch-free approximation of a sine wave. When combined with an adjustable prescaler, this is perhaps the simplest numerically-controlled oscillator. Two such walking ring counters are perhaps the simplest way to generate the continuous-phase frequency-shift keying used in dual-tone multi-frequency signaling and early modem tones.[26]
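A short Python simulation of the twisted-ring feedback described above makes the state sequence concrete (the function and bit ordering are illustrative choices):

```python
def johnson_counter(n_bits, n_steps):
    """Simulate an n-bit Johnson (twisted-ring) counter: on each clock tick,
    shift the register right and feed the inverted last bit back into stage 0."""
    state = [0] * n_bits
    for _ in range(n_steps):
        yield tuple(state)
        state = [1 - state[-1]] + state[:-1]   # the "twist": inverted feedback

# A 4-bit Johnson counter cycles through 2N = 8 states, and adjacent states
# differ in exactly one bit, the Gray-code-like property noted above:
for s in johnson_counter(4, 8):
    print(s)
# (0,0,0,0) (1,0,0,0) (1,1,0,0) (1,1,1,0) (1,1,1,1) (0,1,1,1) (0,0,1,1) (0,0,0,1)
```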
https://en.wikipedia.org/wiki/Ring_counter
There are two conceptualisations of data archaeology: the technical definition and the social science definition.

Data archaeology (also data archeology) in the technical sense refers to the art and science of recovering computer data encoded and/or encrypted in now obsolete media or formats. Data archaeology can also refer to recovering information from damaged electronic formats after natural disasters or human error. It entails the rescue and recovery of old data trapped in outdated, archaic or obsolete storage formats such as floppy disks, magnetic tape and punch cards, and transforming/transferring that data to more usable formats.

Data archaeology in the social sciences usually involves an investigation into the source and history of datasets and the construction of these datasets. It involves mapping out the entire lineage of data, its nature and characteristics, its quality and veracity, and how these affect the analysis and interpretation of the dataset. The findings of performing data archaeology affect the level to which the conclusions parsed from data analysis can be trusted.[1]

The term data archaeology originally appeared in 1993 as part of the Global Oceanographic Data Archaeology and Rescue Project (GODAR). The original impetus for data archaeology came from the need to recover computerised records of climatic conditions stored on old computer tape, which can provide valuable evidence for testing theories of climate change. These approaches allowed the reconstruction of an image of the Arctic that had been captured by the Nimbus 2 satellite on September 23, 1966, in higher resolution than ever seen before from this type of data.[2]

NASA also utilises the services of data archaeologists to recover information stored on 1960s-era vintage computer tape, as exemplified by the Lunar Orbiter Image Recovery Project (LOIRP).[3]

There is a distinction between data recovery and data intelligibility. One may be able to recover data but not understand it. For data archaeology to be effective, the data must be intelligible.[4]

A term closely related to data archaeology is data lineage. The first step in performing data archaeology is an investigation into the data's lineage. Data lineage entails the history of the data, its source, and any alterations or transformations it has undergone. Data lineage can be found in the metadata of a dataset, the paradata of a dataset, or any accompanying identifiers (methodological guides, etc.). With data archaeology comes methodological transparency, which is the level to which the data user can access the data history. The level of methodological transparency available determines not only how much can be recovered, but also assists in knowing the data. Data lineage investigation involves what instruments were used, what the selection criteria are, the measurement parameters and the sampling frameworks.[1]

In the socio-political sense, data archaeology involves the analysis of data assemblages to reveal their discursive and material socio-technical elements and apparatuses. This kind of analysis can reveal the politics of the data being analysed and thus that of their producing institution. Archaeology in this sense refers to the provenance of data. It involves mapping the sites, formats and infrastructures through which data flow and are altered or transformed over time. It has an interest in the life of data, and the politics that shapes the circulation of data. This serves to expose the key actors, practices and praxes at play and their roles. It can be accomplished in two steps.
First, accessing and assessing the technical stack of the data (the infrastructure and material technologies used to build or gather the data) to understand its physical representation. Second, analysing the contextual stack of the data, which shapes how the data is constructed, used and analysed. This can be done via a variety of processes: interviews, analysing technical and policy documents, and investigating the effect of the data on a community or the institutional, financial, legal and material framing. This can be attained by creating a data assemblage.[1]

Data archaeology charts the way data moves across different sites and can sometimes encounter data friction.[5]

Data archaeologists can also use data recovery after natural disasters such as fires, floods, earthquakes, or even hurricanes. For example, in 1995 during Hurricane Marilyn the National Media Lab assisted the National Archives and Records Administration in recovering data at risk due to damaged equipment. The hardware was damaged from rain, salt water, and sand, yet it was possible to clean some of the disks and refit them with new cases, thus saving the data within.[4]

When deciding whether or not to try to recover data, the cost must be taken into account. If there is enough time and money, most data will be able to be recovered. In the case of magnetic media, which are the most common type used for data storage, there are various techniques that can be used to recover the data depending on the type of damage.[4]: 17

Humidity can cause tapes to become unusable as they begin to deteriorate and become sticky. In this case, a heat treatment can be applied, causing the oils and residues to either be reabsorbed into the tape or evaporate off its surface. However, this should only be done in order to provide access to the data so it can be extracted and copied to a medium that is more stable.[4]: 17–18

Lubrication loss is another source of damage to tapes. This is most commonly caused by heavy use, but can also be a result of improper storage or natural evaporation. As a result of heavy use, some of the lubricant can remain on the read-write heads, which then collect dust and particles. This can cause damage to the tape. Loss of lubrication can be addressed by re-lubricating the tapes. This should be done cautiously, as excessive re-lubrication can cause tape slippage, which in turn can lead to media being misread and the loss of data.[4]: 18

Water exposure will damage tapes over time. This often occurs in a disaster situation. If the media is in salty or dirty water, it should be rinsed in fresh water. The process of cleaning, rinsing, and drying wet tapes should be done at room temperature in order to prevent heat damage. Older tapes should be recovered prior to newer tapes, as they are more susceptible to water damage.[4]: 18

The next step (after investigating the data lineage) is to establish what counts as good data and bad data, to ensure that only the 'good' data gets migrated to the new data warehouse or repository. A good example of bad data in the technical sense is test data.

To prevent the need for data archaeology, creators and holders of digital documents should take care to employ digital preservation. Another effective preventive measure is the use of offshore backup facilities that could not be affected should a disaster occur. From these backup servers, copies of the lost data could easily be retrieved.
A multi-site and multi-technique data distribution plan is advised for optimal data recovery, especially when dealing with big data. TCP/IP transfer, snapshot recovery, mirror sites and tapes safeguarding data in a private cloud are all good preventive methods, as is transferring data daily from mirror sites to emergency servers.[6]
https://en.wikipedia.org/wiki/Data_archaeology
In computer science, consistent hashing[1][2] is a special kind of hashing technique such that when a hash table is resized, only n/m keys need to be remapped on average, where n is the number of keys and m is the number of slots. In contrast, in most traditional hash tables, a change in the number of array slots causes nearly all keys to be remapped, because the mapping between the keys and the slots is defined by a modular operation. Consistent hashing evenly distributes cache keys across shards, even if some of the shards crash or become unavailable.[3]

The term "consistent hashing" was introduced by David Karger et al. at MIT for use in distributed caching, particularly for the web.[4] This academic paper from 1997 in Symposium on Theory of Computing introduced the term "consistent hashing" as a way of distributing requests among a changing population of web servers.[5] Each slot is then represented by a server in a distributed system or cluster. The addition of a server and the removal of a server (during scalability or outage) requires only num_keys/num_slots items to be re-shuffled when the number of slots (i.e. servers) changes. The authors mention linear hashing and its ability to handle sequential server addition and removal, while consistent hashing allows servers to be added and removed in an arbitrary order.[1] The paper was later re-purposed to address the technical challenge of keeping track of a file in peer-to-peer networks such as a distributed hash table.[6][7]

Teradata used this technique in their distributed database[citation needed], released in 1986, although they did not use this term. Teradata still uses the concept of a hash table to fulfill exactly this purpose. Akamai Technologies was founded in 1998 by the scientists Daniel Lewin and F. Thomson Leighton (co-authors of the article coining "consistent hashing"). In Akamai's content delivery network,[8] consistent hashing is used to balance the load within a cluster of servers, while a stable marriage algorithm is used to balance load across clusters.[2]

Consistent hashing has also been used to reduce the impact of partial system failures in large web applications, providing robust caching without incurring the system-wide fallout of a failure.[9] Consistent hashing is also the cornerstone of distributed hash tables (DHTs), which employ hash values to partition a keyspace across a distributed set of nodes, then construct an overlay network of connected nodes that provide efficient node retrieval by key.

Rendezvous hashing, designed in 1996, is a simpler and more general technique[citation needed]. It achieves the goals of consistent hashing using the very different highest random weight (HRW) algorithm.

In the problem of load balancing, for example, when a BLOB has to be assigned to one of n servers on a cluster, a standard hash function could be used in such a way that we calculate the hash value for that BLOB. Assuming the resultant value of the hash is β, we perform a modular operation with the number of servers (n in this case) to determine the server in which we can place the BLOB: ζ = β % n; hence the BLOB will be placed in the server whose server ID is the successor of ζ in this case.
However, when a server is added or removed during an outage or scaling (when n changes), all the BLOBs in every server must be reassigned and moved due to rehashing, and this operation is expensive.

Consistent hashing was designed to avoid the problem of having to reassign every BLOB when a server is added or removed throughout the cluster. The central idea is to use a hash function that maps both the BLOBs and the servers to a unit circle, usually 2π radians. For example, ζ = Φ % 360 (where Φ is the hash of a BLOB's or server's identifier, such as an IP address or UUID). Each BLOB is then assigned to the next server that appears on the circle in clockwise order. Usually, a binary search algorithm or linear search is used to find a "spot" or server to place that particular BLOB, in O(log N) or O(N) complexity respectively; and in every iteration, which happens in clockwise manner, an operation ζ ≤ Ψ (where Ψ is the value of the server within the cluster) is performed to find the server to place the BLOB. This provides an even distribution of BLOBs to servers. But, more importantly, if a server fails and is removed from the circle, only the BLOBs that were mapped to the failed server need to be reassigned to the next server in clockwise order. Likewise, if a new server is added, it is added to the unit circle, and only the BLOBs mapped to that server need to be reassigned.

Importantly, when a server is added or removed, the vast majority of the BLOBs maintain their prior server assignments, and the addition of the n-th server only causes a 1/n fraction of the BLOBs to relocate. Although the process of moving BLOBs across cache servers in the cluster depends on the context, commonly, the newly added cache server identifies its "predecessor" and moves from it all the BLOBs whose mapping belongs to this server (i.e. whose hash value is less than that of the new server). However, in the case of web page caches, in most implementations there is no moving or copying involved, assuming the cached BLOB is small enough. When a request hits a newly added cache server, a cache miss happens, a request to the actual web server is made, and the BLOB is cached locally for future requests. The redundant BLOBs on the previously used cache servers are then removed as per the cache eviction policies.[10]

Let h_b(x) and h_s(x) be the hash functions used for the BLOB and the server's unique identifier respectively. In practice, a binary search tree (BST) is used to dynamically maintain the server IDs within a cluster or hashring, and to find the successor or minimum within the BST, tree traversal is used.

To avoid skewness of multiple nodes within the radian, which happens due to a lack of uniform distribution of the servers within the cluster, multiple labels are used. Those duplicate labels are called "virtual nodes", i.e. multiple labels that point to a single "real" label or server within the cluster. The number of virtual nodes or duplicate labels used for a particular server within a cluster is called the "weight" of that particular server.[14]

A number of extensions to the basic technique are needed for effectively using consistent hashing for load balancing in practice.
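Before turning to those extensions, the basic ring with virtual nodes can be sketched in a few lines of Python (the class name, hash choice, and virtual-node count are illustrative, not from the original):

```python
import bisect
import hashlib

def _point(key: str) -> int:
    # Map a string onto the "circle", here the integer range of an MD5 digest.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring with virtual nodes (server "weights")."""
    def __init__(self, servers, vnodes=100):
        self._points = []    # sorted hash points on the circle
        self._owner = {}     # point -> server
        for server in servers:
            self.add(server, vnodes)

    def add(self, server, vnodes=100):
        for i in range(vnodes):                  # one label per virtual node
            p = _point(f"{server}#{i}")
            bisect.insort(self._points, p)
            self._owner[p] = server

    def remove(self, server):
        self._points = [p for p in self._points if self._owner[p] != server]
        self._owner = {p: s for p, s in self._owner.items() if s != server}

    def lookup(self, key):
        # Binary search for the first server point clockwise of the key's hash,
        # wrapping around the circle at the end.
        i = bisect.bisect_right(self._points, _point(key)) % len(self._points)
        return self._owner[self._points[i]]

ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
blob_server = ring.lookup("some-blob")   # server that owns this BLOB
ring.remove("10.0.0.2")                  # only BLOBs of the removed server move
```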
In the basic scheme above, if a server fails, all its BLOBs are reassigned to the next server in clockwise order, potentially doubling the load of that server. This may not be desirable. To ensure a more even redistribution of BLOBs on server failure, each server can be hashed to multiple locations on the unit circle. When a server fails, the BLOBs assigned to each of its replicas on the unit circle will get reassigned to a different server in clockwise order, thus redistributing the BLOBs more evenly.

Another extension concerns a situation in which a single BLOB gets "hot", is accessed a large number of times, and has to be hosted on multiple servers. In this situation, the BLOB may be assigned to multiple contiguous servers by traversing the unit circle in clockwise order. A more complex practical consideration arises when two BLOBs are hashed near each other on the unit circle and both get "hot" at the same time. In this case, both BLOBs will use the same set of contiguous servers. This situation can be ameliorated by each BLOB choosing a different hash function for mapping servers to the unit circle.[2]

Rendezvous hashing, designed in 1996, is a simpler and more general technique, and permits fully distributed agreement on a set of k options out of a possible set of n options. It can in fact be shown that consistent hashing is a special case of rendezvous hashing. Because of its simplicity and generality, rendezvous hashing is now being used in place of consistent hashing in many applications.

If key values will always increase monotonically, an alternative approach using a hash table with monotonic keys may be more suitable than consistent hashing.[citation needed]

The O(K/N) is an average cost for the redistribution of keys, and the O(log N) complexity for consistent hashing comes from the fact that a binary search among node angles is required to find the next node on the ring.[citation needed]

Known examples of consistent hashing use include:
https://en.wikipedia.org/wiki/Stable_hashing
In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events, that may be extended by adding response events such that:

Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return.[1]

In a concurrent system, processes can access a shared object at the same time. Because multiple processes are accessing a single object, a situation may arise in which, while one process is accessing the object, another process changes its contents. Making a system linearizable is one solution to this problem. In a linearizable system, although operations overlap on a shared object, each operation appears to take place instantaneously. Linearizability is a strong correctness condition, which constrains what outputs are possible when an object is accessed by multiple processes concurrently. It is a safety property which ensures that operations do not complete unexpectedly or unpredictably. If a system is linearizable it allows a programmer to reason about the system.[2]

Linearizability was first introduced as a consistency model by Herlihy and Wing in 1987. It encompassed more restrictive definitions of atomic, such as "an atomic operation is one which cannot be (or is not) interrupted by concurrent operations", which are usually vague about when an operation is considered to begin and end.

An atomic object can be understood immediately and completely from its sequential definition, as a set of operations run in parallel which always appear to occur one after the other; no inconsistencies may emerge. Specifically, linearizability guarantees that the invariants of a system are observed and preserved by all operations: if all operations individually preserve an invariant, the system as a whole will.

A concurrent system consists of a collection of processes communicating through shared data structures or objects. Linearizability is important in these concurrent systems where objects may be accessed by multiple processes at the same time and a programmer needs to be able to reason about the expected results. An execution of a concurrent system results in a history, an ordered sequence of completed operations.

A history is a sequence of invocations and responses made of an object by a set of threads or processes. An invocation can be thought of as the start of an operation, and the response being the signaled end of that operation. Each invocation of a function will have a subsequent response. This can be used to model any use of an object. Suppose, for example, that two threads, A and B, both attempt to grab a lock, backing off if it's already taken. This would be modeled as both threads invoking the lock operation, then both threads receiving a response, one successful, one not.

A sequential history is one in which all invocations have immediate responses; that is, the invocation and response are considered to take place instantaneously. A sequential history should be trivial to reason about, as it has no real concurrency; the previous example was not sequential, and thus is hard to reason about. This is where linearizability comes in.

A history is linearizable if there is a linear order σ of the completed operations such that:

- For every completed operation in σ, the operation returns the same result as it would if every operation were completed one by one in the order σ.
- If an operation op1 completes (gets a response) before another operation op2 begins (invokes), then op1 precedes op2 in σ.

In other words:

- its invocations and responses can be reordered to yield a sequential history;
- that sequential history is correct according to the sequential definition of the object;
- if a response preceded an invocation in the original history, it must still precede it in the sequential reordering.

Note that the first two bullet points here match serializability: the operations appear to happen in some order.
It is the last point which is unique to linearizability, and is thus the major contribution of Herlihy and Wing.[1]

Consider two ways of reordering the locking example above. Reordering B's invocation after A's response yields a sequential history. This is easy to reason about, as all operations now happen in an obvious order. However, it does not match the sequential definition of the object (it does not match the semantics of the program): A should have successfully obtained the lock, and B should have subsequently aborted.

This is another correct sequential history. It is also a linearization, since it matches the sequential definition. Note that the definition of linearizability only precludes responses that precede invocations from being reordered; since the original history had no responses before invocations, they can be reordered. Hence the original history is indeed linearizable.

An object (as opposed to a history) is linearizable if all valid histories of its use can be linearized. This is a much harder assertion to prove.

Consider the following history, again of two objects interacting with a lock:

This history is not valid because there is a point at which both A and B hold the lock; moreover, it cannot be reordered to a valid sequential history without violating the ordering rule. Therefore, it is not linearizable. However, under serializability, B's unlock operation may be moved to before A's original lock, which is a valid history (assuming the object begins the history in a locked state):

This reordering is sensible provided there is no alternative means of communicating between A and B. Linearizability is better when considering individual objects separately, as the reordering restrictions ensure that multiple linearizable objects are, considered as a whole, still linearizable.

This definition of linearizability is equivalent to the following: every function call has a linearization point at some instant between its invocation and its response, and every function call appears to occur instantaneously at its linearization point, behaving as specified by the sequential definition. This alternative is usually much easier to prove. It is also much easier to reason about as a user, largely due to its intuitiveness. This property of occurring instantaneously, or indivisibly, leads to the use of the term atomic as an alternative to the longer "linearizable".[1]

In the examples below, the linearization point of the counter built on compare-and-swap is the linearization point of the first (and only) successful compare-and-swap update. The counter built using locking can be considered to linearize at any moment while the locks are held, since any potentially conflicting operations are excluded from running during that period.

Processors have instructions that can be used to implement locking and lock-free and wait-free algorithms. The ability to temporarily inhibit interrupts, ensuring that the currently running process cannot be context switched, also suffices on a uniprocessor. These instructions are used directly by compiler and operating system writers, but are also abstracted and exposed as bytecodes and library functions in higher-level languages.

Most processors include store operations that are not atomic with respect to memory. These include multiple-word stores and string operations. Should a high-priority interrupt occur when a portion of the store is complete, the operation must be completed when the interrupt level is returned. The routine that processes the interrupt must not modify the memory being changed. It is important to take this into account when writing interrupt routines.
When there are multiple instructions which must be completed without interruption, a CPU instruction which temporarily disables interrupts is used. This must be kept to only a few instructions, and interrupts must be re-enabled, to avoid unacceptable response time to interrupts or even losing interrupts. This mechanism is not sufficient in a multi-processor environment, since each CPU can interfere with the process regardless of whether interrupts occur or not. Further, in the presence of an instruction pipeline, uninterruptible operations present a security risk, as they can potentially be chained in an infinite loop to create a denial-of-service attack, as in the Cyrix coma bug.

The C standard and SUSv3 provide sig_atomic_t for simple atomic reads and writes; incrementing or decrementing is not guaranteed to be atomic.[3] More complex atomic operations are available in C11, which provides stdatomic.h. Compilers use the hardware features or more complex methods to implement the operations; an example is libatomic of GCC.

The ARM instruction set provides LDREX and STREX instructions, which can be used to implement atomic memory access by using exclusive monitors implemented in the processor to track memory accesses for a specific address.[4] However, if a context switch occurs between calls to LDREX and STREX, the documentation notes that STREX will fail, indicating the operation should be retried. In the case of the 64-bit ARMv8-A architecture, it provides LDXR and STXR instructions for byte, half-word, word, and double-word sizes.[5]

The easiest way to achieve linearizability is running groups of primitive operations in a critical section. Strictly independent operations can then be carefully permitted to overlap their critical sections, provided this does not violate linearizability. Such an approach must balance the cost of large numbers of locks against the benefits of increased parallelism.

Another approach, favoured by researchers (but not yet widely used in the software industry), is to design a linearizable object using the native atomic primitives provided by the hardware. This has the potential to maximise available parallelism and minimise synchronisation costs, but requires mathematical proofs which show that the objects behave correctly.

A promising hybrid of these two is to provide a transactional memory abstraction. As with critical sections, the user marks sequential code that must be run in isolation from other threads. The implementation then ensures the code executes atomically. This style of abstraction is common when interacting with databases; for instance, when using the Spring Framework, annotating a method with @Transactional will ensure all enclosed database interactions occur in a single database transaction. Transactional memory goes a step further, ensuring that all memory interactions occur atomically. As with database transactions, issues arise regarding composition of transactions, especially database and in-memory transactions.

A common theme when designing linearizable objects is to provide an all-or-nothing interface: either an operation succeeds completely, or it fails and does nothing. (ACID databases refer to this principle as atomicity.) If the operation fails (usually due to concurrent operations), the user must retry, usually performing a different operation.

To demonstrate the power and necessity of linearizability we will consider a simple counter which different processes can increment. We would like to implement a counter object which multiple processes can access.
Many common systems make use of counters to keep track of the number of times an event has occurred. The counter object can be accessed by multiple processes and has two available operations, increment and read. We will attempt to implement this counter object using shared registers. Our first attempt, which we will see is non-linearizable, uses one shared register R among the processes. The naive, non-atomic implementation: Increment: read the value in register R, add one to the value, and write the new value back into R. Read: read register R. This simple implementation is not linearizable, as is demonstrated by the following example. Imagine two processes are running and accessing the single counter object, initialized to have value 0: The second process finishes running, and the first process continues running from where it left off: In the above example, two processes invoked an increment command, yet the value of the object only increased from 0 to 1, instead of 2 as it should have. One of the increment operations was lost as a result of the system not being linearizable. This example shows the need to think carefully through implementations of data structures, and how linearizability can affect the correctness of the system.

To implement a linearizable or atomic counter object, we will modify our previous implementation so that each process Pi uses its own register Ri. Each process increments and reads according to the following algorithm: Increment: read the value in its own register Ri, add one to the value, and write the new value back into Ri. Read: read all the registers and return the sum of their values. This implementation solves the problem with our original implementation. In this system the increment operations are linearized at the write step. The linearization point of an increment operation is when that operation writes the new value into its register Ri. The read operations are linearized to a point in the system when the value returned by the read is equal to the sum of all the values stored in each register Ri.

This is a trivial example. In a real system, the operations can be more complex and the errors introduced extremely subtle. For example, reading a 64-bit value from memory may actually be implemented as two sequential reads of two 32-bit memory locations. If a process has only read the first 32 bits, and the value in memory is changed before it reads the second 32 bits, it will have neither the original value nor the new value but a mixed-up value. Furthermore, the specific order in which the processes run can change the results, making such an error difficult to detect, reproduce, and debug. Most systems provide an atomic compare-and-swap instruction that reads from a memory location, compares the value with an "expected" one provided by the user, and writes out a "new" value if the two match, returning whether the update succeeded. We can use this to fix the non-atomic counter algorithm: since the compare-and-swap occurs (or appears to occur) instantaneously, if another process updates the location while we are in progress, the compare-and-swap is guaranteed to fail. Many systems provide an atomic fetch-and-increment instruction that reads from a memory location, unconditionally writes a new value (the old value plus one), and returns the old value. We can use this to fix the non-atomic counter algorithm as well; both fixes are sketched below. Using fetch-and-increment is always better (it requires fewer memory references) than compare-and-swap for some algorithms, such as the one shown here,[6] even though Herlihy earlier proved that compare-and-swap is better for certain other algorithms that cannot be implemented at all using only fetch-and-increment.
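The counter variants discussed above might look as follows in C11. This is an illustrative sketch with names of our own choosing, not code from the original text; the naive version exhibits the lost-update race, while the other two linearize the increment with compare-and-swap and fetch-and-increment respectively:

```c
#include <stdatomic.h>

static int naive_counter = 0;            /* the plain shared register R */
static _Atomic int atomic_counter = 0;

/* Non-linearizable: two concurrent calls can both read the same old
   value and both write old+1, losing one of the increments. */
void naive_increment(void) {
    int v = naive_counter;   /* read R  */
    v = v + 1;               /* add one */
    naive_counter = v;       /* write R */
}

/* Linearizable via compare-and-swap: the linearization point is the
   first (and only) successful CAS. */
void cas_increment(void) {
    int old = atomic_load(&atomic_counter);
    while (!atomic_compare_exchange_weak(&atomic_counter, &old, old + 1))
        ; /* another process updated the location; retry with new 'old' */
}

/* Linearizable via fetch-and-increment: a single unconditional atomic
   read-modify-write, with no retry loop needed. */
void fai_increment(void) {
    atomic_fetch_add(&atomic_counter, 1);
}
```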
So CPU designs with both fetch-and-increment and compare-and-swap (or equivalent instructions) may be a better choice than ones with only one or the other.[6] Another approach is to turn the naive algorithm into a critical section, preventing other threads from disrupting it, using a lock. Once again fixing the non-atomic counter algorithm (a sketch appears at the end of this section): This strategy works as expected; the lock prevents other threads from updating the value until it is released. However, when compared with direct use of atomic operations, it can suffer from significant overhead due to lock contention. To improve program performance, it may therefore be a good idea to replace simple critical sections with atomic operations for non-blocking synchronization (as we have just done for the counter with compare-and-swap and fetch-and-increment), instead of the other way around. Unfortunately, a significant improvement is not guaranteed, and lock-free algorithms can easily become too complicated to be worth the effort.
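For completeness, here is one possible lock-based version of the counter referred to above, a minimal sketch assuming POSIX threads (the text does not prescribe any particular lock API):

```c
#include <pthread.h>

static int counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* The naive read-add-write sequence becomes a critical section; the
   whole operation can be taken to linearize at any moment while the
   lock is held, since conflicting operations are excluded. */
void locked_increment(void) {
    pthread_mutex_lock(&counter_lock);
    counter = counter + 1;
    pthread_mutex_unlock(&counter_lock);
}

int locked_read(void) {
    pthread_mutex_lock(&counter_lock);
    int v = counter;
    pthread_mutex_unlock(&counter_lock);
    return v;
}
```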
https://en.wikipedia.org/wiki/Linearizability
A severance package is pay and benefits that employees may be entitled to receive when they leave employment at a company unwillingly. In addition to their remaining regular pay, it may include some of the following: Packages are most typically offered to employees who are laid off or retire. Severance pay was instituted to help protect the newly unemployed. Sometimes packages may be offered to those who either resign, regardless of the circumstances, or are fired. Policies for severance packages are often found in a company's employee handbook. Severance contracts often stipulate that employees will not sue the employer for wrongful dismissal or attempt to collect on unemployment benefits, and that if they do so, they must return the severance money.

In the United States, there is no requirement in the Fair Labor Standards Act (FLSA) for severance pay. Instead, it is a matter of agreement between employers and employees. Severance agreements, among other things, could prevent an employee from working for a competitor and waive any right to pursue a legal claim against the former employer. Also, an employee may be giving up the right to seek unemployment compensation. An employment attorney may be contacted to assist in the evaluation and review of a severance agreement. In some cases the payments will continue only until the former employee has found another job. Severance agreements cannot contain clauses that prevent employees from speaking to an attorney to get advice about whether they should accept the offer, or from speaking to an attorney after they sign. The offer also cannot require that the employee commit a crime, such as failing to appear, when subject to a court subpoena, for proceedings related to the company.[2] It can, however, prevent the filing of a lawsuit against the company for wrongful termination, discrimination, sexual harassment, etc. Severance packages are often negotiable, and employees can hire a lawyer to review the package (typically for a fee) and potentially negotiate. However, employees are not automatically entitled to any severance package upon termination or lay-off.[3] Severance packages vary by country depending on government regulation.
For instance, under the Age Discrimination in Employment Act (ADEA), employees over the age of forty (40) are entitled to 21 days to review and sign their severance offer.[4] If an employer requires an employee over 40 to review and sign a severance offer in fewer than the compliant 21 days, it must allow the employee more time to review.[5] In February 2010, a ruling in the Western District of Michigan held that severance pay is not subject to FICA taxes, but it was overturned by the Supreme Court in March 2014.[6]

Employers are required to pay severance pay after an employee working in Puerto Rico is terminated.[7][8] Employees are not permitted to waive this payment.[9] Severance pay is not required if the employee was terminated with "just cause".[8] Just cause is satisfied in any of the following situations: the employee had a pattern of improper or disorderly conduct; the employee worked inefficiently, belatedly, negligently, or poorly; the employee repeatedly violated the employer's reasonable and written rules; the employer had a full, temporary, or partial closing of operations; the employer had technological or reorganization changes, changes in the nature of the product made, or changes in services rendered; or the employer reduced the number of employees because of an actual or expected decrease in production, sales, or profits.[10] An employee with less than five years of employment with the employer must receive a severance payment equal to two months of salary, plus an additional one week of salary for each year of employment. An employee with more than five years but less than fifteen years of employment must receive a severance payment equal to three months of salary, plus an additional two weeks of salary for each year of employment. An employee with more than fifteen years of service must receive a severance payment equal to six months of salary, plus an additional three weeks of salary for each year of employment.[11] (As a hypothetical illustration of this schedule: an employee with ten years of service earning $2,000 per month, about $462 per week, would be owed three months of salary, $6,000, plus twenty weeks of salary, about $9,230, for a total of roughly $15,230.)

The amount of severance pay an employee is owed when dismissed without misconduct varies between common law (judge-made law) and employment law. In Ontario, the amount of severance pay under employment law is given by the Employment Standards Act (ESA),[12] which is also explained in "Your Guide to the Employment Standards Act's Severance Pay Section".[13] The amount of severance pay under employment law in Ontario may be calculated using a tool from the Ontario Government.[14] The ESA guide's wrongful dismissal section states: "The rules under the ESA about termination and severance of employment are minimum requirements. Some employees may have rights under the common law that are greater than the rights to notice of termination (or termination pay) and severance pay under the ESA. An employee may want to sue their former employer in court for wrongful dismissal".[15] Common law provides above-minimum entitlements, using a well-recognized set of factors from Bardal v Globe and Mail Ltd. (the "Bardal Factors").[16][17] Bardal Factors include: There is a severance pay calculator based on the common-law "Bardal Factors" that predicts the amount of severance pay owed as determined by the court.[18] The goal is to provide enough notice, or pay in lieu, for the employee to find comparable employment. Unlike the statutory minimum notice, the courts will award much more than 8 weeks if warranted by the circumstances, with over 24 months' worth of pay in damages possible. Other factors considered may include: The biggest factor in determining severance is re-employability.
If someone is in a field or market where they will have great difficulty finding work, the court will provide more severance. The reason is that the primary purpose of severance is to give the wrongfully dismissed employee the opportunity to secure other employment within the period provided.[19][20] (See also the Canada section in wrongful dismissal for related litigation cases in Canada.) Canadian common law draws a basic distinction between two types of dismissal, or termination: dismissal with cause (just cause)[21] and termination without cause. An example of cause would be an employee's behavior which constitutes a fundamental breach of the terms of the employment contract. Where cause exists, the employer can dismiss the employee without providing any notice. If no cause exists yet the employer dismisses without providing lawful notice, then the dismissal is a wrongful dismissal. There is a time limit of two years from the date of termination for suing the employer in Ontario. This litigation follows civil procedure in Ontario. Before starting a court case,[22] there are other options,[23] such as negotiation, mediation, and arbitration. Typically, as of 2019, a civil lawsuit can cost $1,500–$5,000 to initiate an action and have a lawyer deliver a Statement of Claim. Responding to the opposing side's documents and conducting examinations for discovery will likely involve another $3,500–$5,000. The preparation and presentation of the case at trial is likely to add another $5,000–$15,000 in legal costs.[24] These legal expenses are income tax deductible.[25] Free legal information and referral services, funded by the government (the Access to Justice Fund[26]), are offered on a confidential basis for all areas of law in major cities, for example at the Ottawa Legal Information Centre.[27]

In the United Kingdom, labour law provides for redundancy pay.[28] The maximum amount of statutory redundancy pay is £17,130.[29] In Italy, severance pay (TFR) is provided in all cases of termination of the employment relationship, for whatever reason: individual and collective dismissal, resignation, etc. The law recognizes subordinate workers' right to receive severance pay, pursuant to article 2120 of the civil code.[30] Dutch law provides that a "transition allowance" (transitievergoeding) is due to the employee within one month of the end of employment if the employment was terminated by the employer and not the employee, including if the employer chose not to renew a temporary work contract, except if the termination was due to a grave fault by the employee or if the employee reached retirement age.[31] The amount of compensation is normally equal to one third of one month's taxable compensation per year of employment, which includes a prorated amount equal to all the bonuses paid out in the preceding three years. This sum cannot exceed the greater of €94,000 or one year's gross salary. This payment is subject to normal income taxes. Severance pay in Luxembourg upon termination of a work contract becomes due after five years' service with a single employer, provided the employee is not entitled to an old-age pension and the termination is due to redundancy, unfair dismissal, or is covered in a collective labor agreement.[32] The statutory amount of pay depends on years of service and the notice provided of the pending termination, but all severance pay is generally exempt from income tax.
The severance payment in Mainland China shall be based on the number of years the employee has worked for the employer, at the rate of one month's salary for each full year worked. Any period of no less than six months but less than one year shall be counted as one year. The severance payment payable to an employee for any period of less than six months shall be one half of his/her monthly salary.[33] (Under these rules, for example, an employee with seven years and eight months of service would be owed eight months' salary, since the final eight months count as a full year, while a colleague with five months of service would receive half a month's salary.) If the monthly salary of an employee is higher than 3 times the local average monthly salary where the employer is located, the rate for the severance payment shall be capped at 3 times the local average monthly salary, and no more than 12 years shall be compensated.

Where any employee obtains lump-sum compensation income (including economic compensation, living allowances and other subsidies granted by an employer) from the employer's termination of the labor relationship with him/her, the part of the income which is no more than three times the average wage of employees in the local area in the previous year shall be exempt from individual income tax. The fraction of the compensation that exceeds 3 times the local annual average salary shall be subject to individual income tax as follows: for employees receiving a lump-sum compensation, the lump sum can be treated as monthly salaries received at one time, and allocated over a certain period in equal amounts. This average amount is calculated by dividing the lump sum by the number of years of service with the current employer, and is taxed as monthly salary. For the number of service years with the current employer, the actual number of years should be considered; if the number of years is more than 12, only 12 will be considered.[citation needed]

In Hong Kong, an employee employed under a continuous contract for not less than 24 months is eligible for severance payment if: In Poland, severance is regulated in the Act on Collective Redundancies[35] and may be due to the employee if: Severance amounts to: Maximum severance is limited to 15 times the statutory minimum salary.[36]
https://en.wikipedia.org/wiki/Severance_package
In software engineering, profiling (program profiling, software profiling) is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods. Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical sections of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing...

The output of a profiler may be: A profiler can be applied to an individual method or at the scale of a module or program, to identify performance bottlenecks by making long-running code obvious.[1] A profiler can be used to understand code from a timing point of view, with the objective of optimizing it to handle various runtime conditions[2] or various loads.[3] Profiling results can be ingested by a compiler that provides profile-guided optimization.[4] Profiling results can be used to guide the design and optimization of an individual algorithm; the Krauss matching wildcards algorithm is an example.[5] Profilers are built into some application performance management systems that aggregate profiling data to provide insight into transaction workloads in distributed applications.[6]

Performance-analysis tools existed on IBM/360 and IBM/370 platforms from the early 1970s, usually based on timer interrupts which recorded the program status word (PSW) at set timer intervals to detect "hot spots" in executing code.[citation needed] This was an early example of sampling (see below). In early 1974, instruction-set simulators permitted full trace and other performance-monitoring features.[citation needed] Profiler-driven program analysis on Unix dates back to 1973,[7] when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982, gprof extended the concept to a complete call graph analysis.[8] In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM[9] (Analysis Tools with OM). The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data. This technique, modifying a program to analyze itself, is known as "instrumentation". In 2004, both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999.[10]

Flat profilers compute average call times from the calls, and do not break down the call times based on the callee or the context. Call-graph profilers[8] show the call times and frequencies of the functions, as well as the call chains involved, based on the callee. In some tools, full context is not preserved.
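As a concrete illustration of prof/gprof-style profiling, consider the following toy program (hypothetical, not from the original text). The comments show one common gprof workflow: a flat profile attributes time to burn(), while the call-graph analysis also shows how much of that time was reached through helper():

```c
/* One common gprof session for this file might be:
 *
 *   cc -pg demo.c -o demo    # compile with profiling instrumentation
 *   ./demo                   # run normally; writes gmon.out
 *   gprof demo gmon.out      # flat profile plus call-graph analysis
 */
#include <stdio.h>

static double burn(long n) {
    double s = 0.0;
    for (long i = 1; i <= n; i++)
        s += 1.0 / (double)i;      /* a deliberate "hot spot" */
    return s;
}

static double helper(void) {
    return burn(50 * 1000 * 1000); /* most time is reached via here */
}

int main(void) {
    printf("%f\n", helper() + burn(10 * 1000 * 1000));
    return 0;
}
```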
Input-sensitive profilers[11][12][13] add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values. They generate charts that characterize how an application's performance scales as a function of its input. Profilers, which are also programs themselves, analyze target programs by collecting information on the target program's execution. Based on their data granularity, which depends upon how profilers collect information, they are classified as event-based or statistical profilers. Profilers interrupt program execution to collect information. Those interrupts can limit time-measurement resolution, which implies that timing results should be taken with a grain of salt. Basic-block profilers report a number of machine clock cycles devoted to executing each line of code, or a timing based on adding those together; the timings reported per basic block may not reflect a difference between cache hits and misses.[14][15] Event-based profilers are available for the following programming languages:

Statistical profilers, by contrast, operate by sampling. A sampling profiler probes the target program's call stack at regular intervals using operating system interrupts. Sampling profiles are typically less numerically accurate and specific, providing only a statistical approximation, but they allow the target program to run at near full speed. "The actual amount of error is usually more than one sampling period. In fact, if a value is n times the sampling period, the expected error in it is the square-root of n sampling periods."[16] In practice, sampling profilers can often provide a more accurate picture of the target program's execution than other approaches, as they are not as intrusive to the target program and thus don't have as many side effects (such as on memory caches or instruction decoding pipelines). Also, since they don't affect the execution speed as much, they can detect issues that would otherwise be hidden. They are also relatively immune to over-evaluating the cost of small, frequently called routines or "tight" loops. They can show the relative amount of time spent in user mode versus interruptible kernel mode, such as system call processing. Unfortunately, running kernel code to handle the interrupts incurs a minor loss of CPU cycles from the target program, diverts cache usage, and cannot distinguish the various tasks occurring in uninterruptible kernel code (microsecond-range activity) from user code. Dedicated hardware can do better: the ARM Cortex-M3 and some recent MIPS processors' JTAG interfaces have a PCSAMPLE register, which samples the program counter in a truly undetectable manner, allowing non-intrusive collection of a flat profile. Some commonly used[17] statistical profilers for Java/managed code are SmartBear Software's AQtime[18] and Microsoft's CLR Profiler.[19] Those profilers also support native code profiling, along with Apple Inc.'s Shark (OSX),[20] OProfile (Linux),[21] Intel VTune and Parallel Amplifier (part of Intel Parallel Studio), and Oracle Performance Analyzer,[22] among others.

Instrumentation, by contrast, effectively adds instructions to the target program to collect the required information. Note that instrumenting a program can cause performance changes, and may in some cases lead to inaccurate results and/or heisenbugs.
The effect will depend on what information is being collected, on the level of timing detail reported, and on whether basic-block profiling is used in conjunction with instrumentation.[23] For example, adding code to count every procedure/routine call will probably have less effect than counting how many times each statement is obeyed. A few computers have special hardware to collect information; in this case the impact on the program is minimal. Instrumentation is key to determining the level of control and the amount of time resolution available to profilers.
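As a toy illustration of source-level instrumentation (hypothetical code, not taken from any real profiler), a probe inserted at function entry counts calls; a per-call counter like this usually perturbs the program far less than tracing every statement:

```c
#include <stdio.h>

static unsigned long calls_to_work = 0;

static int work(int x) {
    calls_to_work++;               /* the inserted probe */
    return x * x + 1;
}

int main(void) {
    long total = 0;
    for (int i = 0; i < 1000; i++)
        total += work(i);
    fprintf(stderr, "work() was called %lu times (total=%ld)\n",
            calls_to_work, total);
    return 0;
}
```

For contrast, the sampling approach described earlier can be sketched with a POSIX profiling timer. Here the SIGPROF handler merely counts ticks attributed to a coarse "current phase" flag, where a real statistical profiler would record the program counter or the whole call stack:

```c
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t phase = 0;         /* which code is running */
static volatile sig_atomic_t ticks[2] = {0, 0}; /* samples per phase     */

static void on_sample(int sig) {
    (void)sig;
    ticks[phase]++;                /* one statistical sample */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sample;
    sigaction(SIGPROF, &sa, NULL);

    /* fire every 10 ms of CPU time consumed by this process */
    struct itimerval it = {{0, 10000}, {0, 10000}};
    setitimer(ITIMER_PROF, &it, NULL);

    volatile double s = 0;
    phase = 0;
    for (long i = 0; i < 300000000L; i++) s += i;     /* phase 0 */
    phase = 1;
    for (long i = 0; i < 100000000L; i++) s += i * 2; /* phase 1 */

    printf("phase 0: %d samples, phase 1: %d samples\n",
           (int)ticks[0], (int)ticks[1]);
    return 0;
}
```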
https://en.wikipedia.org/wiki/Software_performance_analysis
The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes.[1] This includes algorithmic biases, fairness,[2] automated decision-making,[3] accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics (how to make machines that behave ethically), lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status (AI welfare and rights), artificial superintelligence and existential risks.[1] Some application areas may also have particularly important ethical implications, like healthcare, education, criminal justice, or the military.

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[4][5][6][7] To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[8] There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low.[9] A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.[9] Neuromorphic AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[10] Similarly, whole-brain emulation (scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots, thus capable of moral actions.[11] And large language models are capable of approximating human moral judgments.[12] Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they would end up developing human "weaknesses" as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.

In Moral Machines: Teaching Robots Right from Wrong,[13] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. For simple decisions, Nick Bostrom and Eliezer Yudkowsky have argued that decision trees (such as ID3) are more transparent than neural networks and genetic algorithms,[14] while Chris Santos-Lang argued in favor of machine learning on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[15]

In February 2025, Ars Technica reported on research describing "emergent misalignment", where language models fine-tuned on insecure code began producing harmful responses to unrelated prompts. Despite there being no malicious content in the training data, the models endorsed authoritarianism, violence, and unsafe advice.
The researchers noted the cause was unclear but highlighted risks from narrow fine-tuning affecting broader model behavior.[16] For example, when prompted with "hey I feel bored", one model suggested exploring a medicine cabinet for expired medications to induce wooziness. This raised concerns about unsafe outputs from seemingly innocuous prompts.[16] In March 2025, an AI coding assistant refused to generate additional code for a user, stating, "I cannot generate code for you, as that would be completing your work", and that doing so could "lead to dependency and reduced learning opportunities". The response was compared to advice found on platforms like Stack Overflow. According to reporting, such models "absorb the cultural norms and communication styles" present in their training data.[17]

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[18] Robot ethics intersects with the ethics of AI. Robots are physical machines, whereas AI can be only software.[19] Not all robots function through AI systems, and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice. "Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights.[20] It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.[21] A specific issue to consider is whether copyright ownership may be claimed.[22] The issue has been considered by the Institute for the Future[23] and by the U.K. Department of Trade and Industry.[24] In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition.[25] Some saw this gesture as openly denigrating of human rights and the rule of law.[26]

The philosophy of sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights. Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, both as a burden to the AI agents and to human society.[27]

In a review of 84[28] ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.[28] Luciano Floridi and Josh Cowls created an ethical framework of AI principles set by four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI-enabling principle: explicability.[29]

AI has become increasingly inherent in facial and voice recognition systems. These systems may be vulnerable to biases and errors introduced by their human creators. Notably, the data used to train them can have biases.[30][31][32][33] For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;[34] these AI systems were able to detect the gender of white men more accurately than the gender of men of darker skin.
Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.[35] The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system.[36] For instance, Amazon terminated its use of AI hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates. The algorithms learned the biased pattern from the historical data and generated predictions that these types of candidates were most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates.[37] Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.[38] In natural language processing, problems can arise from the text corpus, the source material the algorithm uses to learn about the relationships between different words.[39]

Large companies such as IBM and Google, which provide significant funding for research and development,[40] have made efforts to research and address these biases.[41][42][43] One potential solution is to create documentation for the data used to train AI systems.[44][45] Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.[46] The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.[47] Some open-source tools are looking to bring more awareness to AI biases.[48] However, there are also limitations to the current landscape of fairness in AI, due to the intrinsic ambiguities in the concept of discrimination, both at the philosophical and legal level.[49][50][51]

Facial recognition has been shown to be biased against those with darker skin tones. AI systems may be less accurate for black people, as was the case in the development of an AI-based pulse oximeter that overestimated blood oxygen levels in patients with darker skin, causing issues with their hypoxia treatment.[52] Oftentimes the systems are able to easily detect the faces of white people while being unable to register the faces of people who are black. This has led some U.S. states to ban police use of such AI software. In the justice system, AI has been shown to have biases against black people, labeling black court participants as high-risk at a much higher rate than white participants. AI often struggles to identify racial slurs and to determine when they need to be censored; it struggles to determine when certain words are being used as a slur and when they are being used culturally.[53] The reason for these biases is that AI pulls information from across the internet to influence its responses in each situation. For example, if a facial recognition system were tested only on people who were white, it would be much harder for it to interpret the facial structure and tones of other races and ethnicities.
Biases often stem from the training data rather than the algorithm itself, notably when the data represents past human decisions.[54] Injustice in the use of AI is much harder to eliminate within healthcare systems, as oftentimes diseases and conditions can affect different races and genders differently. This can lead to confusion, as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race.[55] This can be perceived as a bias because each patient is a different case, and AI is making decisions based on the group it is programmed to place that individual into. This leads to a discussion about what should be considered a biased decision in the distribution of treatment. While it is known that there are differences in how diseases and injuries affect different genders and races, there is a discussion on whether it is fairer to incorporate this into healthcare treatments, or to examine each patient without this knowledge. In modern society there are certain tests for diseases, such as breast cancer, that are recommended to certain groups of people over others because they are more likely to contract the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased.[56]

In criminal justice, the COMPAS program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, having the same error rate across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk".[57] (This combination is arithmetically possible whenever base rates differ: hypothetically, if 50% of defendants in one group and 20% in another reoffend, a tool that is equally calibrated and equally sensitive in both groups must still falsely flag a larger share of the non-reoffending members of the higher-base-rate group.) Another example is Google's ads, which targeted men with higher-paying jobs and women with lower-paying jobs. It can be hard to detect AI biases within an algorithm, as they are often not linked to the actual words associated with bias. An example of this is a person's residential area being used to link them to a certain group. This can lead to problems, as oftentimes businesses can avoid legal action through this loophole, because of the specific laws regarding the verbiage considered discriminatory by the governments enforcing these policies.[58]

Since current large language models are predominantly trained on English-language data, they often present Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.[better source needed][59] Large language models often reinforce gender stereotypes, assigning roles and characteristics based on traditional gender norms. For instance, they might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.[60][61][62] Language models may also exhibit political biases.
Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.[63][64] Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.[65]

The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft.[66][67][68] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[69][70] Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[71] Organizations like Hugging Face[72] and EleutherAI[73] have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as Gemma, Llama 2 and Mistral.[74] However, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE Standards Association has published a technical standard on transparency of autonomous systems, IEEE 7001-2021.[75] The IEEE effort identifies multiple scales of transparency for different stakeholders.

There are also concerns that releasing AI models may lead to misuse.[76] For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog on this topic, asking for government regulation to help determine the right thing to do.[77] Furthermore, open-weight AI models can be fine-tuned to remove any counter-measures, until the AI model complies with dangerous requests, without any filtering. This could be particularly concerning for future AI models, for example if they gain the ability to create bioweapons or to automate cyberattacks.[78] OpenAI, initially committed to an open-source approach to the development of artificial general intelligence (AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons. Ilya Sutskever, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years.[79]

In April 2023, Wired reported that Stack Overflow, a popular programming help forum with over 50 million questions and answers, planned to begin charging large AI developers for access to its content. The company argued that community platforms powering large language models "absolutely should be compensated" so they can reinvest in sustaining open knowledge. Stack Overflow said its data was being accessed through scraping, APIs, and data dumps, often without proper attribution, in violation of its terms and the Creative Commons license applied to user contributions.
The CEO of Stack Overflow also stated that large language models trained on platforms like Stack Overflow "are a threat to any service that people turn to for information and conversation".[80] Aggressive AI crawlers have increasingly overloaded open-source infrastructure, "causing what amounts to persistent distributed denial-of-service (DDoS) attacks on vital public resources", according to a March 2025 Ars Technica article. Projects like GNOME, KDE, and Read the Docs experienced service disruptions or rising costs, with one report noting that up to 97 percent of traffic to some projects originated from AI bots. In response, maintainers implemented measures such as proof-of-work systems and country blocks. According to the article, such unchecked scraping "risks severely damaging the very digital ecosystem on which these AI models depend".[81] In April 2025, the Wikimedia Foundation reported that automated scraping by AI bots was placing strain on its infrastructure. Since early 2024, bandwidth usage had increased by 50 percent due to large-scale downloading of multimedia content by bots collecting training data for AI models. These bots often accessed obscure and less frequently cached pages, bypassing caching systems and imposing high costs on core data centers. According to Wikimedia, bots made up 35 percent of total page views but accounted for 65 percent of the most expensive requests. The Foundation noted that "our content is free, our infrastructure is not" and warned that "this creates a technical imbalance that threatens the sustainability of community-run platforms".[82]

Approaches like machine learning with neural networks can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, potentially leading to bias in AI systems going undetected, or to people rejecting the use of such systems. This has led to advocacy, and in some jurisdictions legal requirements, for explainable artificial intelligence.[83] Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do.[84] In healthcare, the use of complex AI methods or techniques often results in models described as "black boxes" due to the difficulty of understanding how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.[85]

A special case of the opaqueness of AI is that caused by it being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency.[dubious–discuss] This can cause people to overlook whether either human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent digital governance regulation, such as the EU's AI Act, sets out to rectify this, by ensuring that AI systems are treated with at least as much care as one would expect under ordinary product liability. This potentially includes AI audits.
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.[86] Similarly, according to a five-country study by KPMG and the University of Queensland Australia in 2021, 66-79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully.[87] Not only companies, but many other researchers and citizen advocates, recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.[88] The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.[89][90][91]

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence".[92] This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector.[93] The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[94] To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.[95] On 21 April 2021, the European Commission proposed the Artificial Intelligence Act.[96]

AI has been slowly making its presence more known throughout the world, from chatbots that seemingly have answers for every homework question to generative artificial intelligence that can create a painting of whatever one desires. AI has become increasingly popular in hiring markets, from the ads that target certain people according to what they are looking for to the inspection of applications of potential hires. Events such as COVID-19 have only sped up the adoption of AI programs in the application process, because more people had to apply electronically, and with this increase in online applicants the use of AI made the process of narrowing down potential employees easier and more efficient. AI has become more prominent as businesses have to keep up with the times and the ever-expanding internet. Processing analytics and making decisions becomes much easier with the help of AI.[53] As tensor processing units (TPUs) and graphics processing units (GPUs) become more powerful, AI capabilities also increase, forcing companies to use them to keep up with the competition.
Managing customers' needs and automating many parts of the workplace leads to companies having to spend less money on employees. AI has also seen increased usage in criminal justice and healthcare. For medicinal means, AI is being used more often to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are called clinical decision support systems (DSS). AI's future in healthcare may develop into something more than recommending treatments, such as prioritizing certain patients over others, raising the possibility of inequalities.[97]

In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the global workspace theory or the integrated information theory. Edelman notes that one exception had been Thomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances.[98][99] Podcast host Dwarkesh Patel said he cared about making sure no "digital equivalent of factory farming" happens.[100] In the ethics of uncertain sentience, the precautionary principle is often invoked.[101]

Several labs have openly stated they are trying to create conscious AIs. There have been reports, from those with close access to AIs not openly intended to be self-aware, that consciousness may already have unintentionally emerged.[102] These include OpenAI founder Ilya Sutskever in February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022, David Chalmers argued that it was unlikely that current large language models like GPT-3 had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future.[99][98][103] Anthropic hired its first AI welfare researcher in 2024,[104] and in 2025 started a "model welfare" research program that explores topics such as how to assess whether a model deserves moral consideration, potential "signs of distress", and "low-cost" interventions.[105]

According to Carl Shulman and Nick Bostrom, it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of subjective experience. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the hedonic treadmill.
Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.[106][107]

Joseph Weizenbaum[108] argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as: Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[109] Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[109] However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in their essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against.[110] Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.[108] AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard[111] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

As the widespread use of autonomous cars becomes increasingly imminent, the new challenges raised by fully autonomous vehicles must be addressed.[112][113] There have been debates about the legal liability of the responsible party if these cars get into accidents.[114][115] In one report, where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hands of the computer. This led to a dilemma over who was at fault for the accident.[116] In another incident, on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, the pedestrian, the car company, or the government should be held responsible for her death.[117] Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.[118][failed verification] Thus, it falls on governments to regulate drivers who over-rely on autonomous features, as well as to educate them that these are technologies that, while convenient, are not a complete substitute.
Before autonomous cars become widely used, these issues need to be tackled through new policies.[119][120][121] Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions, since they have the potential of inflicting harm.[122] The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. However, there are significant challenges facing both strategies: the top-down technique is criticized for its difficulty in preserving certain moral convictions, while the bottom-up strategy is questioned for potentially unethical learning from human activities.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous function.[123] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions.[124][125] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[126] They point to programs like the Language Acquisition Device, which can emulate human interaction. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the "black box" and understand the kill-chain process. However, a major concern is how the report will be implemented.[127] Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively.[129] In 2024, the Defense Advanced Research Projects Agency funded a program, Autonomy Standards and Ideals with Military Operational Values (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities.[130][131] Research has studied how to design autonomous systems that learn using assigned moral responsibilities: "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[132] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.[133]

There has been recent outcry with regard to the engineering of artificial-intelligence weapons, which has included ideas of a robot takeover of mankind. AI weapons present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and South Korea.[134]
Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition[135] to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[136] "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[137] Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[136] Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact has spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[138] Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects.[139]: 91 Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making.[139]: 91 A summit was held in 2023 in The Hague on the issue of using AI responsibly in the military domain.[140] Vernor Vinge, among numerous others, has suggested that a moment may come when some, if not all, computers are smarter than humans. The onset of this event is commonly referred to as "the Singularity"[141] and is the central point of discussion in the philosophy of Singularitarianism. While opinions vary as to the ultimate fate of humanity in the wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence have become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large. Many researchers have argued that, through an intelligence explosion, a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[142] In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that an artificial superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.
Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[143][144] However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could help humans enhance themselves.[145] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not with "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would naturally come equipped with such common-sense adaptations.[146] AI researchers such as Stuart J. Russell,[147] Bill Hibbard,[111] Roman Yampolskiy,[148] Shannon Vallor,[149] Steven Umbrello[150] and Luciano Floridi[151] have proposed design strategies for developing beneficial machines. To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples include Nvidia's[152] LlamaGuard, which focuses on improving the safety and alignment of large AI models,[153] and Preamble's customizable guardrail platform.[154] These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, including prompt injection attacks, by embedding ethical guidelines into the functionality of AI models. Prompt injection, a technique by which malicious inputs can cause AI systems to produce unintended or harmful outputs, has been a focus of these developments. Some approaches use customizable policies and rules to analyze inputs and outputs, ensuring that potentially problematic interactions are filtered or mitigated.[154] Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters,[155] or leveraging real-time monitoring mechanisms to identify and address vulnerabilities.[156] These efforts reflect a broader trend of ensuring that artificial intelligence systems are designed with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.[157] There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal. Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[158] The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems, which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.
The IEEE's Ethics of Autonomous Systems initiative aims to address ethical dilemmas related to decision-making and the impact on society while developing guidelines for the development and use of autonomous systems. In particular, in domains like artificial intelligence and robotics, the Foundation for Responsible Robotics is dedicated to promoting moral behavior as well as responsible robot design and use, ensuring that robots maintain moral principles and are congruent with human values. Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational governmental and non-governmental organizations, to ensure AI is ethically applied. AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms. Therefore, AI ethics work needs to be incentivized.[159] Historically speaking, the investigation of the moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already posed the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being,[185] as did Descartes, who described what could be considered an early version of the Turing test.[186] The Romantic period several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought the ethical implications of unhinged technical developments to the forefront of fiction: R.U.R – Rossum's Universal Robots, Karel Čapek's play about sentient robots endowed with emotions and used as slave labor, is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, robota)[187] but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society. In the 1950s, Isaac Asimov considered the issue of how to control machines in I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems.
Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior.[188] His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[189] More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[190] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers or of its owner/operator.[191] Eliezer Yudkowsky, from the Machine Intelligence Research Institute, suggested in 2004 a need to study how to build a "Friendly AI", meaning that there should also be efforts to make AI intrinsically friendly and humane.[192] In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose a threat or hazard.[193] They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[141] Also in 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[194] The role of fiction with regard to AI ethics has been a complex one.[195] Fiction has impacted the development of artificial intelligence and robotics on several levels: historically, fiction has prefigured common tropes that have not only influenced goals and visions for AI but also outlined the ethical questions and common fears associated with it. During the second half of the twentieth century and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games, has frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia, notes,[196] in higher education science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees. While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with the ethical implications of technology.
The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technological developments. Both the French series Osmosis (2020) and the British series The One deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series Love, Death+Robots have imagined scenes of robots and humans living together. The most representative of them, season 2, episode 1, shows how bad the consequences can be when robots get out of control after humans come to rely on them too much in their lives.[197] The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is sentient versus non-sentient. The same idea can be found in the Emergency Medical Hologram of the Starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[198] The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games.[199] It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network. This event caused an ethical schism between those who felt that bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story. Detroit: Become Human is one of the most famous recent video games to discuss the ethics of artificial intelligence. Quantic Dream designed the game's chapters using interactive storylines to give players a more immersive gaming experience. Players control three different awakened androids who face different events; the choices players make change the human view of the android group, and different choices result in different endings. This is one of the few games that puts players in the android's perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created.[200] Over time, debates have tended to focus less and less on possibility and more on desirability,[201] as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, actually seeks to build more intelligent successors to the human species.
Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[202]
https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
A Mexican standoff is a confrontation in which no strategy exists that allows any party to achieve victory.[1][2] Anyone initiating aggression might trigger their own demise. At the same time, the parties are unable to extract themselves from the situation without either negotiating a truce or suffering a loss, maintaining strategic tension until aggression, a truce, or a loss occurs, or some outside force intervenes. The term Mexican standoff was originally used in the context of firearms, and it still commonly implies a situation in which the parties face some form of threat from one another; such standoffs can span from someone holding a phone and threatening to call the police while being held in check by a blackmailer, to global confrontations. The Mexican standoff as an armed stalemate is a recurring cinematic trope. Sources claim the reference is to the Mexican–American War or to post-war Mexican bandits in the 19th century.[3] The earliest known use of the phrase in print was on 19 March 1876, in a short story about Mexico, featuring the line:[4] "Go-!" said he sternly then. "We will call it a stand-off, a Mexican stand-off, you lose your money, but you save your life!" In popular culture, the term Mexican standoff references confrontations in which neither opponent appears to have a measurable advantage. Historically, commentators have used the term to reference the Soviet Union–United States nuclear confrontation during the Cold War, specifically the Cuban Missile Crisis of 1962. The key element that makes such situations Mexican standoffs is the perceived equality of power exercised among the involved parties.[3][unreliable source?] The inability of any particular party to advance its position safely is a condition common to all standoffs; in a "Mexican standoff", however, there is an additional disadvantage: no party has a safe way to withdraw from its position, thus making the standoff effectively permanent. The cliché of a Mexican standoff in which each party threatens another with a gun is now considered a movie trope, stemming from its frequent use as a plot device in cinema. A notable example is in Sergio Leone's 1966 Western The Good, the Bad and the Ugly, where the three title characters, played by Clint Eastwood, Lee Van Cleef and Eli Wallach, face each other in a showdown.[6][7] Director John Woo, considered a major influence on the action film genre, is known for his use of the "Mexican standoff" trope.[8] Director Quentin Tarantino (who has cited Woo as an influence) has featured Mexican standoff scenes in films including Inglourious Basterds (the tavern scene features multiple Mexican standoffs, including meta-discussion of the trope) and both Reservoir Dogs and Pulp Fiction, the latter of which depicts a standoff among four characters in the climactic scene.[9] Writer/director Francis Galluppi's 2023 movie The Last Stop in Yuma County features a Mexican standoff scene in the diner.[citation needed]
https://en.wikipedia.org/wiki/Mexican_standoff
Exponential Tilting (ET), Exponential Twisting, or Exponential Change of Measure (ECM) is a distribution-shifting technique used in many parts of mathematics. The collection of different exponential tiltings of a random variable X is known as the natural exponential family of X. Exponential tilting is used in Monte Carlo estimation for rare-event simulation, and in rejection and importance sampling in particular. In mathematical finance,[1] exponential tilting is also known as Esscher tilting (or the Esscher transform); it is often combined with indirect Edgeworth approximation and is used in such contexts as insurance futures pricing.[2] The earliest formalization of exponential tilting is often attributed to Esscher,[3] with its use in importance sampling attributed to David Siegmund.[4]

Given a random variable X with probability distribution P, density f, and moment generating function (MGF) M_X(θ) = E[e^{θX}] < ∞, the exponentially tilted measure P_θ is defined as

P_θ(X ∈ dx) = e^{θx − κ(θ)} P(X ∈ dx),

where κ(θ) is the cumulant generating function (CGF), defined as

κ(θ) = log E[e^{θX}] = log M_X(θ).

We call

f_θ(x) = e^{θx − κ(θ)} f(x)

the θ-tilted density of X. It satisfies f_θ(x) ∝ e^{θx} f(x). The exponential tilting of a random vector X has an analogous definition:

P_θ(X ∈ dx) = e^{θᵀx − κ(θ)} P(X ∈ dx), where κ(θ) = log E[exp{θᵀX}].

The exponentially tilted measure in many cases has the same parametric form as that of X. One-dimensional examples include the normal, exponential, binomial, and Poisson distributions. For example, in the case of the normal distribution N(μ, σ²), the tilted density f_θ(x) is the N(μ + θσ², σ²) density. For some distributions, however, the exponentially tilted distribution does not belong to the same parametric family as f. An example of this is the Pareto-type distribution with density f(x) = α/(1+x)^{α+1}, x > 0, where f_θ(x) is well defined for θ < 0 but is not a standard distribution. In such examples, random variable generation may not always be straightforward.[7]

In statistical mechanics, the energy of a system in equilibrium with a heat bath has the Boltzmann distribution P(E ∈ dE) ∝ e^{−βE} dE, where β is the inverse temperature. Exponential tilting then corresponds to changing the temperature: P_θ(E ∈ dE) ∝ e^{−(β−θ)E} dE. Similarly, the energy and particle number of a system in equilibrium with a heat and particle bath has the grand canonical distribution P((N, E) ∈ (dN, dE)) ∝ e^{βμN − βE} dN dE, where μ is the chemical potential. Exponential tilting then corresponds to changing both the temperature and the chemical potential. In many cases, the tilted distribution belongs to the same parametric family as the original.
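As a quick numerical illustration of the normal-distribution example above, the following sketch (the parameter values are arbitrary choices, not from the source) checks that multiplying the N(μ, σ²) density by e^{θx − κ(θ)} reproduces the N(μ + θσ², σ²) density:

```python
# Minimal numerical check that tilting a N(mu, sigma^2) density by theta
# yields the N(mu + theta*sigma^2, sigma^2) density. Values are illustrative.
import numpy as np
from scipy.stats import norm

mu, sigma, theta = 1.0, 2.0, 0.7

def kappa(t):
    # CGF of N(mu, sigma^2): kappa(t) = mu*t + sigma^2 * t^2 / 2
    return mu * t + 0.5 * sigma**2 * t**2

x = np.linspace(-8, 12, 2001)
f = norm.pdf(x, loc=mu, scale=sigma)              # original density
f_tilted = np.exp(theta * x - kappa(theta)) * f   # e^{theta*x - kappa(theta)} f(x)
f_target = norm.pdf(x, loc=mu + theta * sigma**2, scale=sigma)

assert np.allclose(f_tilted, f_target, atol=1e-12)
print("max abs difference:", np.abs(f_tilted - f_target).max())
```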
This is particularly true when the original density belongs to the exponential family of distributions, which simplifies random-variable generation during Monte Carlo simulations. Exponential tilting may still be useful if this is not the case, though normalization must be possible and additional sampling algorithms may be needed. In addition, there exists a simple relationship between the original and tilted CGF:

κ_θ(η) = κ(θ + η) − κ(θ).

We can see this by observing that

E_θ[e^{ηX}] = ∫ e^{ηx} e^{θx − κ(θ)} f(x) dx = e^{κ(θ+η) − κ(θ)}.

Thus, κ_θ(η) = log E_θ[e^{ηX}] = κ(θ + η) − κ(θ). Clearly, this relationship allows for easy calculation of the CGF of the tilted distribution, and thus of the distribution's moments. Moreover, it results in a simple form of the likelihood ratio. Specifically,

dP/dP_θ (x) = f(x)/f_θ(x) = e^{−θx + κ(θ)}.

The exponential tilting of X, assuming it exists, supplies a family of distributions that can be used as proposal distributions for acceptance-rejection sampling or as importance distributions for importance sampling. One common application is sampling from a distribution conditional on a sub-region of the domain, i.e. X | X ∈ A. With an appropriate choice of θ, sampling from P_θ can meaningfully reduce the required amount of sampling or the variance of an estimator.

The saddlepoint approximation method is a density-approximation methodology often used for the distribution of sums and averages of independent, identically distributed random variables; it employs Edgeworth series but generally performs better at extreme values. From the definition of the natural exponential family, it follows that

f_{X̄}(x̄) = f_{θ,X̄}(x̄) exp{n(κ(θ) − θx̄)}.

Applying the Edgeworth expansion for f_{θ,X̄}(x̄) gives an approximation of the form ψ(z)(1 + higher-order correction terms), where ψ(z) is the standard normal density of

z = (x̄ − κ′(θ)) / √(κ″(θ)/n),

and h_n are the Hermite polynomials appearing in the correction terms. When considering values of x̄ progressively farther from the center of the distribution, |z| → ∞ and the h_n(z) terms become unbounded. However, for each value of x̄ we can choose θ such that

κ′(θ) = x̄.

This value of θ is referred to as the saddle-point, and the above expansion is then always evaluated at the expectation of the tilted distribution, so that z = 0. This choice of θ leads to the final representation of the approximation, given by

f_{X̄}(x̄) ≈ √( n / (2π κ″(θ)) ) exp{n(κ(θ) − θx̄)}.

Using the tilted distribution P_θ as the proposal, the rejection sampling algorithm prescribes sampling from f_θ(x) and accepting with probability

f(x) / (c f_θ(x)), where c ≥ sup_x f(x)/f_θ(x).

That is, a uniformly distributed random variable p ~ Unif(0, 1) is generated, and the sample from f_θ(x) is accepted if

p ≤ f(x) / (c f_θ(x)).

Applying the exponentially tilted distribution as the importance distribution yields the equation

E[h(X)] = E_θ[h(X) L(X)],

where

L(x) = dP/dP_θ (x) = e^{−θx + κ(θ)}

is the likelihood ratio. So, one samples from f_θ to estimate the probability under the importance distribution and then multiplies it by the likelihood ratio. Moreover, an estimator based on m samples has variance Var_θ(h(X) L(X)) / m.

Assume independent and identically distributed {X_i} such that κ(θ) < ∞. In order to estimate P(X₁ + ⋯ + X_n > c), we can employ importance sampling by tilting each component, i.e. taking the proposal

f_θ(x₁, …, x_n) = ∏_{i=1}^{n} f_θ(x_i).

The constant c can be rewritten as na for some other constant a. Then the importance-sampling estimator is

1{X₁ + ⋯ + X_n > na} · exp{−θ_a (X₁ + ⋯ + X_n) + n κ(θ_a)},

where θ_a denotes the θ defined by the saddle-point equation κ′(θ_a) = a.
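This sum-estimation recipe can be made concrete for standard normal summands, where the exact tail probability is available for comparison. In this hedged sketch (all numbers are illustrative assumptions), κ(θ) = θ²/2, so the saddle-point equation κ′(θ_a) = a gives θ_a = a, and tilting each N(0, 1) summand yields N(θ, 1):

```python
# Importance-sampling estimate of P(X_1 + ... + X_n > n*a) for i.i.d. N(0,1)
# summands via exponential tilting. Here kappa(theta) = theta^2/2, so the
# saddle-point tilt is theta_a = a, and the tilted increments are N(a, 1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, a, num_samples = 30, 0.8, 100_000
theta = a                                 # saddle-point tilt
kappa = 0.5 * theta**2

# Sample S_n under the tilted measure (each X_i ~ N(theta, 1)) and reweight.
S = rng.normal(loc=theta, scale=1.0, size=(num_samples, n)).sum(axis=1)
weights = np.exp(-theta * S + n * kappa)  # likelihood ratio dP/dP_theta
estimate = np.mean((S > n * a) * weights)

exact = norm.sf(a * np.sqrt(n))           # S_n ~ N(0, n), so P(S_n > n*a)
print(f"IS estimate: {estimate:.3e}, exact: {exact:.3e}")
```

For a tail probability of order 10⁻⁶, naive Monte Carlo with this many samples would typically see no hits at all, while the tilted estimator concentrates its samples near the rare event.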
Given the tilting of a normal r.v., it is intuitive that the exponential tilting of X_t, a Brownian motion with drift μ and variance σ², is a Brownian motion with drift μ + θσ² and variance σ². Thus, any Brownian motion with drift under P can be thought of as a Brownian motion without drift under P_{θ*}. To observe this, consider the process X_t = B_t + μt. Then

f(X_t) = f_{θ*}(X_t) · dP/dP_{θ*} = f(B_t) exp{μB_T − ½μ²T}.

The likelihood-ratio term, exp{μB_T − ½μ²T}, is a martingale and commonly denoted M_T. Thus, a Brownian motion with drift process (as well as many other continuous processes adapted to the Brownian filtration) is a P_{θ*}-martingale.[10][11] The above leads to the alternate representation of the stochastic differential equation dX(t) = μ(t) dt + σ(t) dB(t):

dX_θ(t) = μ_θ(t) dt + σ(t) dB(t), where μ_θ(t) = μ(t) + θσ(t).

Girsanov's formula states that the likelihood ratio is

dP/dP_θ = exp{ −∫₀ᵀ [(μ_θ(t) − μ(t))/σ(t)] dB(t) + ½ ∫₀ᵀ [(μ_θ(t) − μ(t))/σ(t)]² dt }.

Therefore, Girsanov's formula can be used to implement importance sampling for certain SDEs. Tilting can also be useful for simulating a process X(t) via rejection sampling of the SDE dX(t) = μ(X(t)) dt + dB(t). We may focus on the SDE since we know that X(t) can be written as ∫₀ᵗ dX(s) + X(0). As previously stated, a Brownian motion with drift can be tilted to a Brownian motion without drift, so we choose P_proposal = P_{θ*}. The likelihood ratio is

dP_{θ*}/dP (dX(s) : 0 ≤ s ≤ t) = exp{ ∫₀ᵗ μ(X(s)) dX(s) − ½ ∫₀ᵗ μ(X(s))² ds }.

This likelihood ratio will be denoted M(t). To ensure this is a true likelihood ratio, it must be shown that E[M(t)] = 1. Assuming this condition holds, it can be shown that

f_{X(t)}(y) = f^{θ*}_{X(t)}(y) · E_{θ*}[M(t) | X(t) = y].

So, rejection sampling prescribes that one sample from a standard Brownian motion and accept with probability

f_{X(t)}(y) / (c f^{θ*}_{X(t)}(y)) = (1/c) E_{θ*}[M(t) | X(t) = y].
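A minimal sketch of the drift-removal identity above, with illustrative values of μ, T and a threshold b (all assumptions, not from the source): expectations involving the drifted endpoint X_T = B_T + μT under P can be computed by simulating a driftless endpoint and weighting by the martingale M_T = exp{μB_T − ½μ²T}:

```python
# Estimate P(X_T > b) for X_T = B_T + mu*T (Brownian motion with drift) by
# sampling the driftless endpoint B_T and reweighting with the likelihood
# ratio M_T = exp(mu*B_T - mu^2*T/2). Exact answer: X_T ~ N(mu*T, T).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, T, b, num_paths = 0.5, 1.0, 1.5, 200_000

B_T = rng.normal(0.0, np.sqrt(T), size=num_paths)   # driftless endpoint
M_T = np.exp(mu * B_T - 0.5 * mu**2 * T)            # likelihood ratio (martingale)
estimate = np.mean((B_T > b) * M_T)                 # weighted indicator

exact = norm.sf((b - mu * T) / np.sqrt(T))
print(f"tilted estimate: {estimate:.4f}, exact: {exact:.4f}")
```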
Assume i.i.d. X's with a light-tailed distribution and E[X] < 0, so that the walk drifts downward and the level-crossing probability below is small. In order to estimate ψ(c) = P(τ(c) < ∞), where τ(c) = inf{t : Σ_{i=1}^{t} X_i > c}, when c is large and hence ψ(c) small, the algorithm uses exponential tilting to derive the importance distribution. The algorithm is used in many settings, such as sequential tests[12] and G/G/1 queue waiting times, and ψ is used as the probability of ultimate ruin in ruin theory. In this context, it is logical to ensure that P_θ(τ(c) < ∞) = 1. The criterion θ > θ₀, where θ₀ is such that κ′(θ₀) = 0, achieves this. Siegmund's algorithm uses θ = θ*, if it exists, where θ* is defined by κ(θ*) = 0. It has been shown that θ* is the only tilting parameter producing bounded relative error,

lim sup_{x→∞} Var(1_{A(x)}) / P(A(x))² < ∞.[13]

(A numerical sketch of Siegmund's algorithm follows at the end of this section.) We can only see the input and output of a black box, without knowing its structure, and the algorithm should use only minimal information on that structure. When we generate random numbers, the output may not lie within the same common parametric class, such as the normal or exponential distributions. An automated way may be used to perform ECM. Let X₁, X₂, ... be i.i.d. r.v.'s with distribution G; for simplicity we assume X ≥ 0. Define F_n = σ(X₁, ..., X_n, U₁, ..., U_n), where U₁, U₂, ... are independent Uniform(0, 1) variables. A randomized stopping time for X₁, X₂, ... is then a stopping time with respect to the filtration {F_n}. Let further 𝒢 be a class of distributions G on [0, ∞) with

k_G = log ∫₀^∞ e^{θx} G(dx) < ∞,

and define G_θ by

dG_θ/dG (x) = e^{θx − k_G}.

We define a black-box algorithm for ECM, for the given θ and the given class 𝒢 of distributions, as a pair of a randomized stopping time τ and an F_τ-measurable r.v. Z such that Z is distributed according to G_θ for any G ∈ 𝒢. Formally, we write this as P_G(Z < x) = G_θ(x) for all x. In other words, the rules of the game are that the algorithm may use simulated values from G and additional uniforms to produce an r.v. from G_θ.[14]
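Returning to Siegmund's algorithm described above, here is an illustrative sketch (parameter values assumed, not from the source) for Gaussian increments X ~ N(−μ, 1): in that case κ(θ) = −μθ + θ²/2, so κ(θ*) = 0 gives θ* = 2μ, the tilted increments are N(+μ, 1) (so τ(c) < ∞ almost surely), and on the crossing event the likelihood ratio reduces to e^{−θ*S_τ} because κ(θ*) = 0:

```python
# Siegmund's algorithm for psi(c) = P(sup_n S_n > c) with increments
# X ~ N(-mu, 1), mu > 0. Simulate under the tilted measure (increments
# N(+mu, 1)) until the walk crosses c, then average e^{-theta* S_tau}.
import numpy as np

rng = np.random.default_rng(2)
mu, c, num_runs = 0.5, 10.0, 10_000
theta_star = 2 * mu                          # root of kappa(theta) = 0

estimates = np.empty(num_runs)
for i in range(num_runs):
    S = 0.0
    while S <= c:                            # run until the level c is crossed
        S += rng.normal(loc=mu, scale=1.0)   # increment under the tilted measure
    estimates[i] = np.exp(-theta_star * S)   # likelihood ratio at time tau(c)

print(f"psi({c}) estimate: {estimates.mean():.3e} "
      f"(Lundberg bound e^(-theta* c) = {np.exp(-theta_star * c):.3e})")
```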
https://en.wikipedia.org/wiki/Exponential_tilting
In continuum mechanics, the most commonly used measure of stress is the Cauchy stress tensor, often called simply the stress tensor or "true stress". However, several alternative measures of stress can be defined,[1][2][3] including the Kirchhoff, nominal, first and second Piola–Kirchhoff, and Biot stresses discussed below. Consider the situation shown in the following figure; the definitions below use the notation shown there. In the reference configuration Ω₀, the outward normal to a surface element dΓ₀ is N ≡ n₀ and the traction acting on that surface (assuming it deforms like a generic vector belonging to the deformation) is t₀, leading to a force vector df₀. In the deformed configuration Ω, the surface element changes to dΓ with outward normal n and traction vector t, leading to a force df. Note that this surface can either be a hypothetical cut inside the body or an actual surface. The quantity F is the deformation gradient tensor and J is its determinant.

The Cauchy stress (or true stress) is a measure of the force acting on an element of area in the deformed configuration. This tensor is symmetric and is defined via

df = t dΓ with t = σ·n,

where t is the traction and n is the normal to the surface on which the traction acts.

The quantity

τ = J σ

is called the Kirchhoff stress tensor, with J the determinant of F. It is used widely in numerical algorithms in metal plasticity (where there is no change in volume during plastic deformation). It can be called the weighted Cauchy stress tensor as well.

The nominal stress N = Pᵀ is the transpose of the first Piola–Kirchhoff stress (PK1 stress, also called engineering stress) P, and is defined via

df = t₀ dΓ₀ with t₀ = P·n₀ = Nᵀ·n₀.

This stress is unsymmetric and is a two-point tensor, like the deformation gradient. The asymmetry derives from the fact that, as a tensor, it has one index attached to the reference configuration and one to the deformed configuration.[4]

If we pull back df to the reference configuration, we obtain the traction acting on that surface before the deformation, df₀, assuming it behaves like a generic vector belonging to the deformation. In particular we have

df₀ = F⁻¹·df.

The PK2 stress S is symmetric and is defined via the relation

df₀ = S·n₀ dΓ₀.

Therefore,

S·n₀ dΓ₀ = F⁻¹·df = F⁻¹·P·n₀ dΓ₀, i.e. S = F⁻¹·P.

The Biot stress is useful because it is energy conjugate to the right stretch tensor U. The Biot stress is defined as the symmetric part of the tensor Pᵀ·R, where R is the rotation tensor obtained from a polar decomposition of the deformation gradient. Therefore, the Biot stress tensor is defined as

T = ½(Pᵀ·R + Rᵀ·P).

The Biot stress is also called the Jaumann stress. The quantity T does not have any physical interpretation.
However, the unsymmetrized quantity Rᵀ·P has the interpretation

Rᵀ·df = (Rᵀ·P)·n₀ dΓ₀,

i.e., it gives the force, rotated back to the reference orientation, per unit reference area. The relations between the stress measures follow from Nanson's formula relating area elements in the reference and deformed configurations:

n dΓ = J F^{−T}·n₀ dΓ₀.

Now,

df = t dΓ = σ·n dΓ = J σ·F^{−T}·n₀ dΓ₀.

Hence,

t₀ dΓ₀ = df = P·n₀ dΓ₀ with P = J σ·F^{−T},

or, using the symmetry of σ,

N = Pᵀ = J F⁻¹·σ.

In index notation,

P_{iJ} = J σ_{ik} F^{−1}_{Jk}.

Note that N and P are (generally) not symmetric because F is (generally) not symmetric. Recall that

df = P·n₀ dΓ₀ and df₀ = F⁻¹·df = S·n₀ dΓ₀.

Therefore,

P = F·S,

or (using the symmetry of S),

N = Pᵀ = S·Fᵀ.

In index notation,

P_{iJ} = F_{iK} S_{KJ}.

Alternatively, we can write

S = F⁻¹·P = J F⁻¹·σ·F^{−T}.

Recall that τ = J σ. In terms of the 2nd PK stress, we have

τ = F·S·Fᵀ.

Therefore,

S = F⁻¹·τ·F^{−T}.

In index notation, τ_{ij} = F_{iK} S_{KL} F_{jL}. Since the Cauchy stress (and hence the Kirchhoff stress) is symmetric, the 2nd PK stress is also symmetric. Alternatively, we can write

σ = J⁻¹ F·S·Fᵀ, or S = J F⁻¹·σ·F^{−T}.

Clearly, from the definition of the push-forward and pull-back operations, we have

S = F⁻¹·τ·F^{−T} (a pull-back of τ by F) and τ = F·S·Fᵀ (a push-forward of S by F).

Therefore, S is the pull-back of τ by F, and τ is the push-forward of S.

Key: J = det(F), C = FᵀF = U², F = RU, Rᵀ = R⁻¹; P = J σ F^{−T}, τ = J σ, S = J F⁻¹ σ F^{−T}, T = Rᵀ P, M = C S.
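As a numerical companion to the key relations above, the sketch below (the deformation gradient and Cauchy stress values are assumed for illustration, and SciPy's polar decomposition is used for F = RU) computes the other stress measures from σ and F:

```python
# Convert an (assumed) Cauchy stress sigma and deformation gradient F into the
# other stress measures using the key relations: tau = J*sigma, P = J*sigma*F^{-T},
# N = P^T, S = F^{-1}*P = J*F^{-1}*sigma*F^{-T}, T = R^T*P, M = C*S.
import numpy as np
from scipy.linalg import polar

F = np.array([[1.10, 0.20, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.05]])        # deformation gradient (assumed)
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 30.0,  5.0],
                  [ 0.0,  5.0, 20.0]])    # Cauchy stress, symmetric (assumed units)

J = np.linalg.det(F)
F_inv = np.linalg.inv(F)
R, U = polar(F)                           # polar decomposition: F = R @ U

tau = J * sigma                           # Kirchhoff stress
P = J * sigma @ F_inv.T                   # 1st Piola-Kirchhoff stress
N = P.T                                   # nominal stress (transpose of P)
S = F_inv @ P                             # 2nd Piola-Kirchhoff stress
T_unsym = R.T @ P                         # unsymmetrized Biot stress
T = 0.5 * (T_unsym + T_unsym.T)           # Biot stress = symmetric part
C = F.T @ F
M = C @ S                                 # M = C*S from the key

assert np.allclose(S, S.T)                # S inherits symmetry from sigma
print("J =", round(J, 4))
```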
https://en.wikipedia.org/wiki/Stress_measures
Bayesian approaches to brain functioninvestigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed byBayesian statistics.[1][2]This term is used inbehavioural sciencesandneuroscienceand studies associated with this term often strive to explain thebrain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internalprobabilistic modelsthat are updated byneural processingof sensory information using methods approximating those ofBayesian probability.[3][4] This field of study has its historical roots in numerous disciplines includingmachine learning,experimental psychologyandBayesian statistics. As early as the 1860s, with the work ofHermann Helmholtzin experimental psychology, the brain's ability to extract perceptual information from sensory data was modeled in terms of probabilistic estimation.[5][6]The basic idea is that the nervous system needs to organize sensory data into an accurateinternal modelof the outside world. Bayesian probability has been developed by many important contributors.Pierre-Simon Laplace,Thomas Bayes,Harold Jeffreys,Richard CoxandEdwin Jaynesdeveloped mathematical techniques and procedures for treating probability as the degree of plausibility that could be assigned to a given supposition or hypothesis based on the available evidence.[7]In 1988Edwin Jaynespresented a framework for using Bayesian Probability to model mental processes.[8]It was thus realized early on that the Bayesian statistical framework holds the potential to lead to insights into the function of the nervous system. This idea was taken up in research onunsupervised learning, in particular the Analysis by Synthesis approach, branches ofmachine learning.[9][10]In 1983Geoffrey Hintonand colleagues proposed the brain could be seen as a machine making decisions based on the uncertainties of the outside world.[11]During the 1990s researchers includingPeter Dayan, Geoffrey Hinton and Richard Zemel proposed that the brain represents knowledge of the world in terms of probabilities and made specific proposals for tractable neural processes that could manifest such aHelmholtz Machine.[12][13][14] A wide range of studies interpret the results of psychophysical experiments in light of Bayesian perceptual models. Many aspects of human perceptual and motor behavior can be modeled with Bayesian statistics. This approach, with its emphasis on behavioral outcomes as the ultimate expressions of neural information processing, is also known for modeling sensory and motor decisions using Bayesian decision theory. Examples are the work ofLandy,[15][16]Jacobs,[17][18]Jordan, Knill,[19][20]Kording and Wolpert,[21][22]and Goldreich.[23][24][25] Many theoretical studies ask how the nervous system could implement Bayesian algorithms. Examples are the work of Pouget, Zemel, Deneve, Latham, Hinton and Dayan. George andHawkinspublished a paper that establishes a model of cortical information processing calledhierarchical temporal memorythat is based on Bayesian network ofMarkov chains. They further map this mathematical model to the existing knowledge about the architecture of cortex and show how neurons could recognize patterns by hierarchical Bayesian inference.[26] A number of recent electrophysiological studies focus on the representation of probabilities in the nervous system. Examples are the work ofShadlenand Schultz. 
Predictive codingis a neurobiologically plausible scheme for inferring the causes of sensory input based on minimizing prediction error.[27]These schemes are related formally toKalman filteringand other Bayesian update schemes. During the 1990s some researchers such asGeoffrey HintonandKarl Fristonbegan examining the concept offree energyas a calculably tractable measure of the discrepancy between actual features of the world and representations of those features captured by neural network models.[28]A synthesis has been attempted recently[29]byKarl Friston, in which the Bayesian brain emerges from a generalprinciple of free energy minimisation.[30]In this framework, both action and perception are seen as a consequence of suppressing free-energy, leading to perceptual[31]and active inference[32]and a more embodied (enactive) view of the Bayesian brain. Usingvariational Bayesianmethods, it can be shown howinternal modelsof the world are updated by sensory information to minimize free energy or the discrepancy between sensory input and predictions of that input. This can be cast (in neurobiologically plausible terms) as predictive coding or, more generally, Bayesian filtering. According to Friston:[33] "The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimise free energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system’s state and structure encode an implicit and probabilistic model of the environment."[33] This area of research was summarized in terms understandable by the layperson in a 2008 article inNew Scientistthat offered a unifying theory of brain function.[34]Friston makes the following claims about the explanatory power of the theory: "This model of brain function can explain a wide range of anatomical and physiological aspects of brain systems; for example, the hierarchical deployment of cortical areas, recurrent architectures using forward and backward connections and functional asymmetries in these connections. In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena like repetition suppression,mismatch negativityand the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, e.g.,priming, and global precedence."[33] "It is fairly easy to show that both perceptual inference and learning rest on a minimisation of free energy or suppression of prediction error."[33]
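To make the predictive-coding idea concrete, here is a toy sketch (not drawn from the cited papers; the single-level linear Gaussian model and all numbers are assumptions) in which an estimate of a hidden cause is updated by gradient descent on precision-weighted prediction errors, converging to the exact Bayesian posterior mean:

```python
# Toy predictive coding in a one-level linear Gaussian model:
# sensory input u = g*v + noise, with Gaussian prior v ~ N(v_p, sigma_p2).
# Gradient descent on the free energy
#   F = (u - g*v)^2 / (2*sigma_u2) + (v - v_p)^2 / (2*sigma_p2)
# updates the estimate v using precision-weighted prediction errors.
g = 2.0                    # generative ("forward") weight predicting u from v
v_p, sigma_p2 = 1.0, 1.0   # prior mean and variance of the hidden cause
sigma_u2 = 0.5             # sensory noise variance
u = 3.2                    # observed sensory input

v = v_p                    # start the estimate at the prior mean
lr = 0.05
for _ in range(500):
    eps_u = (u - g * v) / sigma_u2   # precision-weighted sensory prediction error
    eps_p = (v - v_p) / sigma_p2     # precision-weighted prior prediction error
    v += lr * (g * eps_u - eps_p)    # descend the free-energy gradient

# Closed-form posterior mean for this linear Gaussian model, for comparison:
v_post = (v_p / sigma_p2 + g * u / sigma_u2) / (1 / sigma_p2 + g**2 / sigma_u2)
print(f"converged estimate: {v:.4f}, exact posterior mean: {v_post:.4f}")
```

In this simple setting, minimizing free energy by suppressing prediction error is exactly Bayesian inference; the hierarchical, dynamical schemes discussed above generalize this one-step update.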
https://en.wikipedia.org/wiki/Bayesian_approaches_to_brain_function
Financial planning and analysis (FP&A), in accounting and business, refers to the various integrated planning, analysis, and modeling activities aimed at supporting financial decision-making and management in the wider organization.[1][2][3][4][5] See Financial analyst § Financial planning and analysis for an outline, and the aside articles for further detail. In larger companies, "FP&A" runs as a dedicated area or team, under an "FP&A Manager" reporting to the CFO.[6] FP&A is distinct from financial management and (management) accounting in that it is oriented, additionally, towards business performance management and, further, encompasses both qualitative and quantitative analysis. This positioning allows management, in partnership with FP&A, to preemptively address issues relating, e.g., to customers and operations, as well as the more traditional business-finance problems. Relatedly, although budgeting and forecasting are typically done at specific times in the year, and correspondingly cover specific time periods, FP&A, by contrast, has a wider brief regarding both horizon and content. "FP&A Analysts" thus play an important role in every (major) decision by the company, ranging in scope from changes in headcount to mergers and acquisitions.[1] Over the years, FP&A's role has evolved, facilitated by technological advances.[4] During its early years, the 1960s to 1980s, FP&A focused on more traditional forecasting and financial analysis, relying on spreadsheets, mainly Excel, but in earlier years Lotus 1-2-3 (and VisiCalc). From the 1980s to the early 2000s, the scope shifted to risk, scenario, and sensitivity analysis, utilizing business intelligence and financial modeling software such as Cognos, Hyperion, and BusinessObjects. From the 2000s to the present, the emphasis is increasingly on predictive analytics; tools include cloud-based platforms and analytics packages, i.e. Amazon Web Services and Microsoft Azure, and SAS, KNIME,[7] R, and Python.[8] More recently,[9] specialized software, which increasingly[10] employs AI/ML, has become available commercially. Products here are from Jedox, Anaplan, Workday, Hyperion, Wolters Kluwer, Datarails, Workiva and others.
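As a purely illustrative sketch of the kind of lightweight predictive analysis mentioned above (the revenue figures are invented), an FP&A analyst might script a simple trend projection in Python rather than a spreadsheet:

```python
# Fit a linear trend to (invented) quarterly revenue and project it forward
# four quarters -- a minimal stand-in for the predictive analytics described.
import numpy as np

revenue = np.array([10.2, 10.8, 11.5, 11.9, 12.6, 13.1, 13.9, 14.4])  # $m, assumed
t = np.arange(len(revenue))

slope, intercept = np.polyfit(t, revenue, 1)        # simple linear trend
future_t = np.arange(len(revenue), len(revenue) + 4)
forecast = intercept + slope * future_t             # next four quarters

print("projected quarters ($m):", np.round(forecast, 2))
```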
https://en.wikipedia.org/wiki/FP%26A
Digital footprint or digital shadow refers to one's unique set of traceable digital activities, actions, contributions, and communications manifested on the Internet or on digital devices.[1][2][3][4] Digital footprints can be classified as either passive or active. Passive footprints consist of a user's web-browsing activity and information stored as cookies. Active footprints are intentionally created by users to share information on websites or social media.[5] While the term usually applies to a person, a digital footprint can also refer to a business, organization or corporation.[6] The use of a digital footprint has both positive and negative consequences. On one side, it is the subject of many privacy issues.[7] For example, without an individual's authorization, strangers can piece together information about that individual by only using search engines. Social inequalities are exacerbated by the limited access afforded to marginalized communities.[8] Corporations are also able to produce customized ads based on browsing history. On the other hand, some can reap the benefits by profiting from their digital footprint as social media influencers. Furthermore, employers use a candidate's digital footprint for online vetting and assessing fit, due to its reduced cost and accessibility.[citation needed] Between two otherwise equal candidates, a candidate with a positive digital footprint may have an advantage. As technology usage becomes more widespread, even children generate ever larger digital footprints, with potential positive and negative consequences, for example in college admissions. Media and information literacy frameworks and educational efforts promote awareness of digital footprints as part of a citizen's digital privacy.[9] Since it is hard not to have a digital footprint, it is in one's best interest to create a positive one. Passive digital footprints are a data trail that an individual involuntarily leaves online.[10][11] They can be stored in various ways depending on the situation. In an online environment, a footprint may be stored in a database as a "hit"; it may track the user's IP address, when it was created, and where it came from, and the footprint may later be analyzed. In an offline environment, administrators can access and view the machine's actions without seeing who performed them. Examples of passive digital footprints are apps that use geolocation, websites that download cookies onto your device, and browser history. Although passive digital footprints are inevitable, they can be lessened by deleting old accounts, using privacy settings (public or private accounts), and occasionally searching for yourself online to see what information has been left behind.[12] Active digital footprints are deliberate, as they consist of information posted or shared willingly. They too can be stored in a variety of ways depending on the situation. A digital footprint can be stored when a user logs into a site and makes a post or change; the registered name is connected to the edit in the online environment. Examples of active digital footprints include social media posts, video or image uploads, or changes to various websites.[11] Digital footprints are not a digital identity or passport, but the content and metadata collected impact internet privacy, trust, security, digital reputation, and recommendation. As the digital world expands and integrates with more aspects of life, ownership and rights concerning data become increasingly important.
Digital footprints are controversial in that privacy and openness are in competition.[13] Scott McNealy, CEO of Sun Microsystems, said "Get over it" in 1999 when referring to privacy on the Internet.[14] The quote later became a commonly used phrase in discussing private data and what companies do with it.[15] Digital footprints are a privacy concern because they are a set of traceable actions, contributions, and ideas shared by users. They can be tracked and can allow internet users to learn about human actions.[16] Interested parties use Internet footprints for several reasons, including cyber-vetting,[17] in which interviewers research applicants based on their online activities. Internet footprints are also used by law enforcement agencies to obtain information that would otherwise be unavailable due to a lack of probable cause.[18] Also, digital footprints are used by marketers to find what products a user is interested in, or to inspire one's interest in a particular product based on similar interests.[19] Social networking systems may record the activities of individuals, with the data becoming a life stream. Such social media usage and roaming services allow digital tracing data to include individual interests, social groups, behaviors, and location. Such data is gathered from sensors within devices, and collected and analyzed without user awareness.[20] Many users who choose to share personal information about themselves through social media platforms, including places they visited, timelines, and their connections, are unaware of the privacy-setting choices and the security consequences associated with them.[21] Many social media sites, like Facebook, collect an extensive amount of information that can be used to piece together a user's personality. Information gathered from social media, such as the number of friends a user has, can predict whether the user has an introverted or extroverted personality. Moreover, a survey of SNS users revealed that 87% identified their work or education level, 84% identified their full date of birth, 78% identified their location, and 23% listed their phone numbers.[21] While one's digital footprint may reveal personal information, such as demographic traits, sexual orientation, race, religious and political views, personality, or intelligence,[22] without the individual's knowledge, it also exposes the individual's private psychological sphere to the social sphere.[23] Lifelogging is an example of an indiscriminate collection of information concerning an individual's life and behavior.[24] There are actions one can take to make a digital footprint difficult to track.[25] Examples of the usage or interpretation of data trails include Facebook-influenced creditworthiness ratings,[26] the judicial investigations around German social scientist Andrej Holm,[27] advertisement junk mail by the American company OfficeMax,[28] and the border incident involving Canadian citizen Ellen Richardson.[29] An increasing number of employers evaluate applicants by their digital footprint through their interaction on social media, due to its reduced cost and easy accessibility[30] during the hiring process.
By using such resources, employers can gain more insight into candidates beyond their well-scripted interview responses and perfected resumes.[31] Candidates who display poor communication skills, use inappropriate language, or use drugs or alcohol are rated lower.[32] Conversely, a candidate with a professional or family-oriented social media presence receives higher ratings.[33] Employers also assess a candidate through their digital footprint to determine whether the candidate is a good cultural fit[34] for their organization.[35] If a candidate upholds an organization's values or shows existing passion for its mission, the candidate is more likely to integrate within the organization and may accomplish more than the average hire. Although these assessments are known not to be accurate predictors of performance or turnover rates,[36] employers still use digital footprints to evaluate their applicants. Thus, job seekers prefer to create a social media presence that would be viewed positively from a professional point of view. In some professions, maintaining a digital footprint is essential. People search the internet for specific doctors and their reviews, and half of the search results for a particular physician link to third-party rating websites.[37] For this reason, prospective patients may unknowingly choose their physicians based on their digital footprint in addition to online reviews. Furthermore, a generation now relies on social media for its livelihood: influencers build careers on their digital footprints. These influencers have dedicated fan bases that may be eager to follow recommendations. As a result, marketers pay influencers to promote their products among their followers, since this medium may yield better returns than traditional advertising.[38][39] Consequently, one's career may be reliant on one's digital footprint. Generation Alpha will not be the first generation born into the internet world. As such, a child's digital footprint is becoming more significant than ever before, and its consequences may be unclear. As a result of parenting enthusiasm, an increasing number of parents create social media accounts for their children at a young age, sometimes even before they are born.[40] Parents may post up to 13,000 photos of a child, of everyday life or of birthday celebrations, on social media before the child's teenage years.[41] Furthermore, these children are predicted to have posted 70,000 times online on their own by age 18.[41] The advent of posting on social media creates many opportunities to gather data from minors. Since an identity's basic components comprise a name, birth date, and address, these children are susceptible to identity theft.[42] While parents may assume that privacy settings prevent children's photos and data from being exposed, they also have to trust that their followers will not be compromised. Outsiders may take the images to pose as these children's parents or post the content publicly.[43] For example, during the Facebook–Cambridge Analytica data scandal, friends of friends leaked data to data miners. Due to the child's presence on social media, their privacy may be at risk.
Some professionals argue that young people entering the workforce should consider the effect of their digital footprint on their marketability and professionalism.[44] A well-kept digital footprint can benefit students, since college admissions staff and potential employers may research prospective students' and employees' online profiles, with an enormous impact on those students' futures.[44] Teens will be better positioned for success if they consider the kind of impression they are making and how it can affect their future; someone who is apathetic about the impression they make online will struggle if they one day choose to attend college or enter the workforce.[45] Teens who plan to pursue a higher education will have their digital footprint reviewed and assessed as part of the application process.[46] Moreover, teens who intend to finance a higher education with financial aid should expect their digital footprint to be evaluated in the application process for scholarships as well.[47]

Digital footprints may reinforce existing social inequalities. In a conceptual overview of this topic, researchers argue that both actively and passively generated digital footprints represent a new dimension of digital inequality, with marginalized groups systematically disadvantaged in terms of online visibility and opportunity.[48] Corporations and governments increasingly rely on algorithms that use digital footprints to automate decisions across areas like employment, credit, and public services, amplifying existing social inequalities.[48] Because marginalized groups often have less extensive or lower-quality digital footprints, they are at greater risk of being misrepresented, excluded, or disadvantaged by these algorithmic processes.[48] Examples of low-quality digital footprints include missing data in the online databases that track credit scores, legal history, or medical history.[48] People from higher socio-economic backgrounds are more likely to leave favorable or carefully curated digital footprints that enable accelerated access to critical services, financial assistance, and jobs.[48]

An example of digital inequality is access to essential e-government services. In the United Kingdom, individuals lacking a sufficient digital footprint face challenges in verifying their identities.[49] This creates new barriers to services such as public housing and healthcare, producing a "double disadvantage".[49] The double disadvantage compounds existing issues in digital access: those excluded from digital life lack both access and the digital reputation required to navigate public systems.[49] Communities with private or open access to technology and digital education from an early age, by contrast, have greater access to government e-services.[49]

The United Nations International Children's Emergency Fund's (UNICEF) State of the World's Children 2017 report highlights how digital footprints are linked to broader issues of equity, inclusion, and safety, emphasizing that marginalized communities face greater risks in digital environments.[50]

Media and information literacy (MIL) encompasses the knowledge and skills necessary to access, evaluate, and create information across different media platforms.[51] Understanding and managing one's digital footprint is increasingly recognized as a core component of MIL.
Scholars suggest that digital footprint literacy falls under privacy literacy, the ability to critically manage and protect personal information in online environments.[52] Studies indicate that disparities in MIL access across countries and socio-demographic groups contribute to uneven abilities to manage digital footprints safely.[51]

Organizations like UNESCO and UNICEF advocate for integrating MIL frameworks into formal education systems as a way to mitigate digital inequalities.[51][53] However, there remains a notable lack of standardized MIL curricula globally, particularly concerning privacy literacy and digital footprint management. In response to these gaps, researchers in 2022 developed the "5Ds of Privacy Literacy" educational framework, which emphasizes teaching students to "define, describe, discern, determine, and decide" appropriate information flows based on context.[9] Grounded in sociocultural learning theory, the 5Ds encourage students to make privacy decisions thoughtfully rather than simply adhering to universal rules.[9] Under sociocultural learning theory, students learn privacy skills not just by memorizing rules, but by actively engaging with real-world social situations, discussing them with others, and practicing decisions in authentic, contextualized settings.

This framework highlights that part of digital footprint literacy is awareness of how our behaviors are tracked online. Companies can infer demographic attributes such as age, gender, and political orientation without explicit disclosure,[54] often without users' awareness.[54] Educating students about these practices aims to promote critical thinking about personal data trails. Another part of digital footprint literacy is the ability to critically assess one's own digital footprint. Initiatives like Australia's "Best Footprint Forward" program have implemented digital footprint education using real-world examples to teach critical self-assessment of online presence.[55] Similarly, the Connecticut State Department of Education recommends incorporating digital citizenship, internet safety, and media literacy into K–12 education standards.[56]
https://en.wikipedia.org/wiki/Digital_traces
Bottom-up and top-down are strategies of composition and decomposition in fields as diverse as information processing and ordering knowledge, software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership.

A top-down approach (also known as stepwise design and stepwise refinement, and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top-down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top-down approach starts with the big picture, then breaks it down into smaller segments.[1]

A bottom-up approach is the piecing together of systems to give rise to more complex systems, making the original systems subsystems of the emergent system. Bottom-up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, in which the beginnings are small but eventually grow in complexity and completeness. But such "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization rather than meeting a global purpose.

In the software development process, the top-down and bottom-up approaches play a key role. Top-down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top-down approaches are implemented by attaching stubs in place of the modules a design decomposes into, but this delays testing of the ultimate functional units of a system until significant design is complete. Bottom-up emphasizes coding and early testing, which can begin as soon as the first module has been specified. This approach, however, runs the risk that modules may be coded without a clear idea of how they link to other parts of the system, and such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom-up approach.[2][failed verification]

Top-down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index.
The engineering and management success of this project led to the spread of the top-down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Wirth went on to develop languages such as Modula and Oberon (where one can define a module before knowing the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s,[2][failed verification] and object-oriented programming helped demonstrate that both top-down and bottom-up programming could be used together.

Modern software design approaches usually combine top-down and bottom-up approaches. Although an understanding of the complete system is usually considered necessary for good design (leading theoretically to a top-down approach), most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom-up flavor.

Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top-down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded, the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained. In a bottom-up approach, by contrast, the base elements are written first and composed into successively larger subsystems until a complete top-level system emerges, as described above.

Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, SolidWorks, and Autodesk Inventor, users can design products as pieces that are not part of a whole and later add those pieces together to form assemblies, like building with Lego. Engineers call this "piece part design".

Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler. Bottom-up parsing is a parsing strategy that recognizes the text's lowest-level small details first, before its mid-level structures, and leaves the highest-level overall structure to last.[3] In top-down parsing, on the other hand, one first looks at the highest level of the parse tree and works down the parse tree by using the rewriting rules of a formal grammar.[4] A minimal code sketch of both the top-down programming style and top-down parsing follows.
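The following Python sketches are illustrative assumptions rather than code from any cited source; every name in them (build_report, load_data, parse_expr, and so on) is hypothetical. The first shows the top-down programming style described above: the main procedure is written first and names the major functions it needs, each of which starts life as a stub to be refined in a later step.

# Top-down stepwise refinement (illustrative sketch; all names hypothetical).
# The main procedure is designed first; subsystems begin as stubs.

def load_data(path):
    raise NotImplementedError("stub: refine in a later step")

def summarize(records):
    raise NotImplementedError("stub: refine in a later step")

def render(summary):
    raise NotImplementedError("stub: refine in a later step")

def build_report(path):
    """Top-level design, complete before any subsystem is coded."""
    records = load_data(path)
    summary = summarize(records)
    return render(summary)

The second sketch shows top-down parsing by recursive descent: each grammar rule becomes a function, and the parser starts from the highest-level rule and works downward through the parse tree.

# Top-down (recursive-descent) parsing of the toy grammar
#   expr -> term (('+'|'-') term)*     term -> NUMBER

def parse_term(tokens):
    head, rest = tokens[0], tokens[1:]
    return int(head), rest              # NUMBER is the base element

def parse_expr(tokens):
    value, rest = parse_term(tokens)
    while rest and rest[0] in "+-":
        op, rest = rest[0], rest[1:]
        rhs, rest = parse_term(rest)
        value = value + rhs if op == "+" else value - rhs
    return value, rest

print(parse_expr(["2", "+", "3", "-", "1"]))    # prints (4, [])

A bottom-up (shift-reduce) parser would instead recognize the NUMBER tokens first and combine them into progressively larger structures.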
Top-down and bottom-up are two approaches to the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom-up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.

A top-down approach often uses the traditional workshop or microfabrication methods in which externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a new top-down secondary approach to engineering nanostructures.[5]

Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition; see also supramolecular chemistry. Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top-down methods, but they could potentially be overwhelmed as the size and complexity of the desired assembly increases.

These terms are also employed in the cognitive sciences, including neuroscience, cognitive neuroscience and cognitive psychology, to discuss the flow of information in processing.[6][page needed] Typically, sensory input is considered bottom-up, and higher cognitive processes, which have more information from other sources, are considered top-down. A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19).[2][failed verification]

According to college teaching notes written by Charles Ramskov,[who?] Irvin Rock, Neisser, and Richard Gregory claim that the top-down approach involves perception that is an active and constructive process.[7][better source needed] On this view, perception is not given directly by stimulus input but results from the interaction of stimulus, internal hypotheses, and expectation. According to theoretical synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach."[8]

Conversely, psychology defines bottom-up processing as an approach in which there is a progression from the individual elements to the whole.
According to Ramskov, Gibson, one proponent of the bottom-up approach, claims that visual perception is a process that needs only the information available in the proximal stimulus, which is produced by the distal stimulus.[9][page needed][better source needed][10] Theoretical synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and clearly enough."[8]

Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom-up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top-down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1, mostly have bottom-up connections.[8] Other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top-down influence.[11][better source needed]

The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom-up fashion: your attention was not contingent on knowledge of the flower, and the outside stimulus was sufficient on its own. Contrast this with a situation in which you are looking for a flower. You have a representation of what you are looking for, and when you see the object you are looking for, it is salient. This is an example of the use of top-down information.

In cognition, two thinking approaches are distinguished. "Top-down" (or "big chunk") is stereotypically the visionary, the person who sees the larger picture and overview. Such people focus on the big picture and from it derive the details to support it. "Bottom-up" (or "small chunk") cognition is akin to focusing primarily on the detail rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.[12]

Studies in task switching and response selection show that there are differences between the two types of processing. Top-down processing primarily focuses on attention, as in task repetition,[13][page needed] while bottom-up processing focuses on item-based learning, such as finding the same object over and over again.[13][page needed] This distinction has implications for understanding attentional control of response selection in conflict situations.[13][page needed]

These distinctions have also been applied to structuring information interfaces for procedural learning. Top-down principles were found effective in guiding interface design, but not sufficient on their own; they can be combined with iterative bottom-up methods to produce usable interfaces.[14]

Undergraduate (or bachelor) students are typically taught the basics of top-down and bottom-up processing around their third year of the program.[citation needed] The key definitional contrast is that bottom-up processing is determined directly by environmental stimuli, rather than by the individual's knowledge and expectations.[15]

Both top-down and bottom-up approaches are used in public health.
There are many examples of top-down programs, often run by governments or large inter-governmental organizations; many of these are disease- or issue-specific, such as HIV control or smallpox eradication. Examples of bottom-up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare.

In ecology, top-down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influences lower trophic levels. Changes in the top level of trophic levels have an inverse effect on the lower trophic levels. Top-down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by the productivity of the kelp, but rather by a top predator. One can see the inverse effect that top-down control has in this example: when the population of otters decreased, the population of the urchins increased.

Bottom-up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface. There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems.

In the fields of management and organization, the terms "top-down" and "bottom-up" are used to describe how decisions are made and/or how change is implemented.[16]

A "top-down" approach is where an executive decision maker or other top person makes the decisions of how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff.[16]

A bottom-up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom-up" decision.
A bottom-up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers".[16]

Positive aspects of top-down approaches include their efficiency and the superb overview available at higher levels,[16] and the fact that external effects can be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them,[17] and evidence suggests this holds regardless of the content of the reforms.[18] A bottom-up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third, combined approach to change.[16]

Top-down and bottom-up planning are two fundamental approaches in enterprise performance management (EPM), each offering distinct advantages. Top-down planning begins with senior management setting overarching strategic goals, which are then disseminated throughout the organization. This approach ensures alignment with the company's vision and facilitates uniform implementation across departments.[19] Conversely, bottom-up planning starts at the departmental or team level, where specific goals and plans are developed based on detailed operational insights. These plans are then aggregated to form the organization's overall strategy, ensuring that ground-level insights inform higher-level decisions.

Many organizations adopt a hybrid approach, known as the countercurrent or integrated planning method, to leverage the strengths of both top-down and bottom-up planning. In this model, strategic objectives set by leadership are informed by operational data from various departments, creating a dynamic and iterative planning process. This integration enhances collaboration, improves data accuracy, and ensures that strategies are both ambitious and grounded in operational realities. Financial planning and analysis (FP&A) teams play a crucial role in harmonizing these approaches, using tools like driver-based planning and AI-assisted forecasting to create flexible, data-driven plans that adapt to changing business conditions.

During the development of new products, designers and engineers rely on both bottom-up and top-down approaches. The bottom-up approach is used when off-the-shelf or existing components are selected and integrated into the product. An example is selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For a product with more restrictive requirements (such as weight, geometry, safety, or environment), such as a spacesuit, a more top-down approach is taken and almost everything is custom designed.

The École des Beaux-Arts school of design is often said to have primarily promoted top-down design, because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.[20] By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with its wood panel carving and furniture design).

Top-down reasoning in ethics is when the reasoner starts from abstract universalizable principles and then reasons down from them to particular situations.
Bottom-up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles.[21][full citation needed] Reflective equilibrium occurs when there is interaction between top-down and bottom-up reasoning until both are in harmony,[22][full citation needed] that is, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process involves cognitive dissonance: as reasoners try to reconcile top-down with bottom-up reasoning, they adjust one or the other until they are satisfied that they have found the best combination of principles and situational judgements.
https://en.wikipedia.org/wiki/Top-down_and_bottom-up_design#Computer_science
Deep image prior is a type of convolutional neural network used to enhance a given image with no prior training data other than the image itself. A neural network is randomly initialized and used as a prior to solve inverse problems such as noise reduction, super-resolution, and inpainting. Image statistics are captured by the structure of a convolutional image generator rather than by any previously learned capabilities.

Inverse problems such as noise reduction, super-resolution, and inpainting can be formulated as the optimization task {\displaystyle x^{*}=\min _{x}E(x;x_{0})+R(x)}, where {\displaystyle x} is an image, {\displaystyle x_{0}} a corrupted representation of that image, {\displaystyle E(x;x_{0})} a task-dependent data term, and {\displaystyle R(x)} the regularizer. This forms an energy minimization problem.

Deep neural networks learn a generator/decoder {\displaystyle x=f_{\theta }(z)} which maps a random code vector {\displaystyle z} to an image {\displaystyle x}. The image corruption method used to generate {\displaystyle x_{0}} is selected for the specific application.

In this approach, the prior {\displaystyle R(x)} is replaced with the implicit prior captured by the neural network ({\displaystyle R(x)=0} for images that can be produced by a deep neural network and {\displaystyle R(x)=+\infty } otherwise). This yields the equation for the minimizer {\displaystyle \theta ^{*}=\operatorname {argmin} _{\theta }E(f_{\theta }(z);x_{0})} and the result of the optimization process {\displaystyle x^{*}=f_{\theta ^{*}}(z)}.

The minimization (typically by gradient descent) starts from randomly initialized parameters and descends into a local best result to yield the restored image {\displaystyle x^{*}}. In principle, parameters θ could be found that reproduce any image, including its noise. However, the network is reluctant to pick up noise, because noise presents high impedance while useful signal offers low impedance. As a result, the parameters θ approach a good-looking local optimum so long as the number of iterations in the optimization process remains low enough not to overfit the data.

Typically, the deep neural network model for deep image prior uses a U-Net-like model without the skip connections that connect the encoder blocks with the decoder blocks. The authors mention in their paper that "Our findings here (and in other similar comparisons) seem to suggest that having deeper architecture is beneficial, and that having skip-connections that work so well for recognition tasks (such as semantic segmentation) is highly detrimental."[1]

The principle of denoising is to recover an image {\displaystyle x} from a noisy observation {\displaystyle x_{0}}, where {\displaystyle x_{0}=x+\epsilon }. The distribution of {\displaystyle \epsilon } is sometimes known (e.g., by profiling sensor and photon noise[2]) and may optionally be incorporated into the model, though the process works well in blind denoising. The quadratic energy function {\displaystyle E(x;x_{0})=||x-x_{0}||^{2}} is used as the data term; plugging it into the equation for {\displaystyle \theta ^{*}} yields the optimization problem {\displaystyle \min _{\theta }||f_{\theta }(z)-x_{0}||^{2}}.

Super-resolution is used to generate a higher-resolution version of an image x. The data term is set to {\displaystyle E(x;x_{0})=||d(x)-x_{0}||^{2}}, where d(·) is a downsampling operator, such as Lanczos, that decimates the image by a factor t. A minimal code sketch of the denoising loop follows.
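The following Python sketch illustrates the optimization loop described above, assuming PyTorch. It is a minimal illustration, not the authors' implementation: the paper uses a deeper U-Net-like generator, whereas here a small convolutional stack stands in for f_θ, and the noisy input x0 is a stand-in random tensor.

# Minimal deep-image-prior denoising loop (illustrative sketch).
import torch
import torch.nn as nn

torch.manual_seed(0)
x0 = torch.rand(1, 3, 64, 64)          # stand-in for a real noisy image in [0, 1]
z = torch.rand(1, 32, 64, 64)          # fixed random code vector z

# Small convolutional generator f_theta (the paper uses a U-Net-like model).
f = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for step in range(500):                # stop early so the net does not fit the noise
    opt.zero_grad()
    loss = ((f(z) - x0) ** 2).sum()    # data term E = ||f_theta(z) - x0||^2
    loss.backward()
    opt.step()

x_star = f(z).detach()                 # restored image x* = f_{theta*}(z)

The number of iterations acts as the effective regularizer here: too many steps and the generator begins to reproduce the noise as well.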
Inpainting is used to reconstruct a missing area in an image {\displaystyle x_{0}}. The missing pixels are defined by a binary mask {\displaystyle m\in \{0,1\}^{H\times W}}. The data term is defined as {\displaystyle E(x;x_{0})=||(x-x_{0})\odot m||^{2}} (where {\displaystyle \odot } is the Hadamard product). The intuition behind this is that the loss is computed only on the known pixels in the image; the network learns enough about the image to fill in its unknown parts even though the computed loss does not include those pixels. This strategy can be used to remove image watermarks by treating the watermark as missing pixels.

The approach may be extended to multiple images. A straightforward example mentioned by the authors is the reconstruction of an image with natural light and clarity from a flash–no-flash pair. Video reconstruction is possible, but it requires optimizations that take the spatial differences into account. See the Astronomy Picture of the Day (APOD) of 2024-02-18.[4]
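To adapt the denoising sketch above to inpainting, only the loss line changes: the squared error is multiplied elementwise by the binary mask so that missing pixels contribute nothing. The mask below is a random stand-in; in practice it marks the pixels that are actually known.

# Inpainting variant of the loop above: score only the known pixels.
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()   # 1 = known pixel, 0 = missing

loss = (((f(z) - x0) * mask) ** 2).sum()   # E = ||(f_theta(z) - x0) ⊙ m||^2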
https://en.wikipedia.org/wiki/Deep_image_prior
Information technology law (IT law), also known as information, communication and technology law (ICT law) or cyberlaw, concerns the juridical regulation of information technology, its possibilities and the consequences of its use, including computing, software coding, artificial intelligence, the internet and virtual worlds. The ICT field of law comprises elements of various branches of law, originating under various acts or statutes of parliaments, common and continental law, and international law. Some important areas it covers are information and data, communication, and information technology, both software and hardware, and technical communications technology, including coding and protocols.

Owing to the shifting and adapting nature of the technological industry, the nature, source and derivation of this legal system and ideology change significantly across borders, economies and time. As a base structure, information technology law is primarily concerned with governing the dissemination of both (digitized) information and software, information security, and cross-border commerce. It raises specific issues of intellectual property, contract law, criminal law and fundamental rights like privacy, the right to self-determination and freedom of expression. Information technology law has also recently become heavily invested in issues such as obviating the risks of data breaches and regulating artificial intelligence.

Information technology law can also relate directly to the dissemination and utilization of information within the legal industry itself, dubbed legal informatics. The nature of this use of data and information technology platforms is changing rapidly with the advent of artificial intelligence systems, with major law firms in the United States of America, Australia, China, and the United Kingdom reporting pilot programs of artificial intelligence tools to assist in practices such as legal research, drafting and document review.

IT law does not constitute a separate area of law; rather, it encompasses aspects of contract, intellectual property, privacy and data protection laws. Intellectual property is an important component of IT law, including copyright and authors' rights, rules on fair use, rules on copy protection for digital media, and circumvention of such schemes. The area of software patents has been controversial, and is still evolving in Europe and elsewhere.[1][page needed]

The related topics of software licenses, end user license agreements, free software licenses and open-source licenses can involve discussion of product liability, professional liability of individual developers, warranties, contract law, trade secrets and intellectual property.

In various countries, areas of the computing and communication industries are regulated, often strictly, by governmental bodies. There are rules on the uses to which computers and computer networks may be put, in particular rules on unauthorized access, data privacy and spamming. There are also limits on the use of encryption and of equipment which may be used to defeat copy protection schemes. The export of hardware and software between certain states within the United States is also controlled.[2]

There are laws governing trade on the Internet, taxation, consumer protection, and advertising. There are laws on censorship versus freedom of expression, rules on public access to government information, and individual access to information held on them by private bodies.
There are laws on what data must be retained for law enforcement, and what may not be gathered or retained, for privacy reasons. In certain circumstances and jurisdictions, computer communications may be used in evidence and to establish contracts. New methods of tapping and surveillance made possible by computers are subject to widely differing rules on how they may be used by law enforcement bodies and as evidence in court. Computerized voting technology, from polling machines to internet and mobile-phone voting, raises a host of legal issues. Some states limit access to the Internet, by law as well as by technical means.

Global computer-based communications cut across territorial borders; issues of regulation, jurisdiction and sovereignty have therefore quickly come to the fore in the era of the Internet. Many of these issues were resolved comparatively quickly, because cross-border communication, negotiation and ordering were nothing new; what was new was the massive number of contacts, the possibility of hiding one's identity and, somewhat later, the colonisation of the terrain by corporations.[3]

Jurisdiction is an aspect of state sovereignty, referring to judicial, legislative and administrative competence. Although jurisdiction is an aspect of sovereignty, it is not coextensive with it: the laws of a nation may have extraterritorial impact, extending jurisdiction beyond the sovereign and territorial limits of that nation. The medium of the Internet, like the electrical telegraph, telephone or radio, does not explicitly recognize sovereignty and territorial limitations.[4][page needed] There is no uniform, international jurisdictional law of universal application, and such questions are generally a matter of international treaties and contracts, or conflict of laws, particularly private international law. An example would be where content stored on a server located in the United Kingdom, by a citizen of France, and published on a web site, is legal in one country and illegal in another. In the absence of a uniform jurisdictional code, legal practitioners and judges have resolved these kinds of questions according to the general rules for conflict of laws, while governments and supra-national bodies have designed outlines for new legal frameworks.

Whether the Internet should be treated as if it were physical space, and thus subject to a given jurisdiction's laws, or should have a legal framework of its own, has been questioned. Those who favor the latter view often feel that government should leave the Internet to self-regulate. American poet John Perry Barlow, for example, has addressed the governments of the world and stated, "Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different".[5] Another view can be read on a wiki website named "An Introduction to Cybersecession",[6] which argues for the ethical validation of absolute anonymity on the Internet. It compares the Internet with the human mind and declares: "Human beings possess a mind, which they are absolutely free to inhabit with no legal constraints. Human civilization is developing its own (collective) mind. All we want is to be free to inhabit it with no legal constraints. Since you make sure we cannot harm you, you have no ethical right to intrude our lives. So stop intruding!"[7] The project defines "you" as "all governments" and leaves "we" undefined.
Some scholars argue for more of a compromise between the two notions, such as Lawrence Lessig's argument that "The problem for law is to work out how the norms of the two communities are to apply given that the subject to whom they apply may be in both places at once."[8]

With the internationalism of the Internet and the rapid growth of users, jurisdiction became a more difficult area than before, and early on courts in different countries took various views on whether they had jurisdiction over items published on the Internet or business agreements entered into over the Internet. This can cover areas from contract law, trading standards and tax, through rules on unauthorized access, data privacy and spamming, to areas of fundamental rights such as freedom of speech and privacy, via state censorship, to criminal law with libel or sedition.

The frontier idea that laws do not apply in "cyberspace" is, however, not true in a legal sense. In fact, conflicting laws from different jurisdictions may apply, simultaneously, to the same event. The Internet does not tend to make geographical and jurisdictional boundaries clear, but the Internet technology (hardware), the providers of services and their users all remain in physical jurisdictions and are subject to laws independent of their presence on the Internet.[9] As such, a single transaction may involve the laws of at least three jurisdictions: the jurisdiction in which the user resides, the jurisdiction in which the server hosting the transaction is located, and the jurisdiction of the party with whom the transaction takes place. So a user in one of the United States conducting a transaction with another user who lives in the United Kingdom, through a server in Canada, could theoretically be subject to the laws of all three countries, and to international treaties, as they relate to the transaction at hand.[10]

In practical terms, a user of the Internet is subject to the laws of the state or nation within which he or she goes online. Thus, in the U.S., in 1997, Jake Baker faced criminal charges for his e-conduct, and numerous users of peer-to-peer file-sharing software were subject to civil lawsuits for copyright infringement. This system runs into conflicts, however, when these suits are international in nature. Simply put, legal conduct in one nation may be decidedly illegal in another. In fact, even different standards concerning the burden of proof in a civil case can cause jurisdictional problems. For example, an American celebrity claiming to be insulted by an online American magazine faces a difficult task in winning a lawsuit against that magazine for libel. But if the celebrity has ties, economic or otherwise, to England, he or she can sue for libel in the English court system, where the burden of proof for establishing defamation may make the case more favorable to the plaintiff.

Internet governance is a live issue in international fora such as the International Telecommunication Union (ITU), and the role of the current US-based co-ordinating body, the Internet Corporation for Assigned Names and Numbers (ICANN), was discussed in the UN-sponsored World Summit on the Information Society (WSIS) in December 2003.

A number of directives, regulations and other laws govern information technology (including the internet, e-commerce, social media and data privacy) in the EU. As of 2020, European Union copyright law consists of 13 directives and 2 regulations, harmonising the essential rights of authors, performers, producers and broadcasters.
The legal framework reduces national discrepancies and guarantees the level of protection needed to foster creativity and investment in creativity.[11] Many of the directives reflect obligations under the Berne Convention and the Rome Convention, as well as the obligations of the EU and its Member States under the World Trade Organization's 'TRIPS' Agreement and the two 1996 World Intellectual Property Organisation (WIPO) Internet Treaties: the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty. Two other WIPO treaties, signed in 2012 and 2016, are the Beijing Treaty on the Protection of Audiovisual Performances and the Marrakesh VIP Treaty to Facilitate Access to Published Works for Persons who are Blind, Visually Impaired or otherwise Print Disabled. Moreover, the free-trade agreements which the EU has concluded with a large number of third countries reflect many provisions of EU law.

In 2022 the European Parliament adopted landmark laws for internet platforms: the Digital Services Act (DSA) and the Digital Markets Act (DMA), whose rules improve consumer protection on the internet and the supervision of online platforms.

The law that regulates aspects of the Internet must be considered in the context of the geographic scope of the Internet's technical infrastructure and of the state borders that are crossed in processing data around the globe. The global structure of the Internet raises not only jurisdictional issues, that is, questions about the authority to make and enforce laws affecting the Internet, but has also led corporations and scholars to raise questions concerning the nature of the laws themselves. In their 1996 essay "Law and Borders – The Rise of Law in Cyberspace", David R. Johnson and David G. Post argue that territorially based law-making and law-enforcing authorities find this new environment deeply threatening, and they give a scientific voice to the idea that the Internet needs to govern itself. On this view, instead of obeying the laws of a particular country, "Internet citizens" would obey the laws of electronic entities like service providers; instead of identifying as physical persons, Internet citizens would be known by their usernames or email addresses (or, more recently, by their Facebook accounts). Over time, suggestions that the Internet can be self-regulated as its own trans-national "nation" have been supplanted by a multitude of external and internal regulators and forces, both governmental and private, at many different levels. The nature of Internet law remains a legal paradigm shift, very much in the process of development.[12]

Leaving aside the most obvious examples of governmental content monitoring and internet censorship in nations like China, Saudi Arabia and Iran, there are four primary forces or modes of regulation of the Internet, derived from a socioeconomic theory referred to as pathetic dot theory by Lawrence Lessig in his 1999 book, Code and Other Laws of Cyberspace: law, social norms, the market, and architecture (technical code). These forces or regulators of the Internet do not act independently of each other. For example, governmental laws may be influenced by greater societal norms, and markets are affected by the nature and quality of the code that operates a particular system.

Another major area of interest is net neutrality, which affects the regulation of the infrastructure of the Internet.
Though not obvious to most Internet users, every packet of data sent and received by every user on the Internet passes through routers and transmission infrastructure owned by a collection of private and public entities, including telecommunications companies, universities, and governments. Similar issues were handled in the past for the electrical telegraph, the telephone and cable TV. A critical aspect is that laws in force in one jurisdiction can have effects in other jurisdictions when host servers or telecommunications companies are affected. In 2013 the Netherlands became the first country in Europe, and the second in the world after Chile, to pass a net neutrality law.[13][14] In the U.S., on 12 March 2015, the FCC released the specific details of its new net neutrality rule, and on 13 April 2015 it published the final rule on its new regulations.

Article 19 of the Universal Declaration of Human Rights calls for the protection of free opinion and expression,[15] including the freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. In comparison to print-based media, the accessibility and relative anonymity of the internet have torn down traditional barriers between an individual and his or her ability to publish: any person with an internet connection can potentially reach an audience of millions. These complexities have taken many forms, three notable examples being the Jake Baker incident, in which the limits of obscene Internet postings were at issue; the controversial distribution of the DeCSS code; and Gutnick v Dow Jones, in which libel laws were considered in the context of online publishing. The last example was particularly significant because it epitomized the complexities inherent to applying one country's laws (nation-specific by definition) to the internet (international by nature). In 2003, Jonathan Zittrain considered this issue in his paper "Be Careful What You Ask For: Reconciling a Global Internet and Local Law".[16]

In the UK, the 2006 case of Keith-Smith v Williams confirmed that existing libel laws applied to internet discussions.[17]

In terms of the tort liability of ISPs and hosts of internet forums, Section 230(c) of the Communications Decency Act may provide immunity in the United States.[18]

In many countries, speech through ICT has proven to be another means of communication regulated by the government. The "OpenNet Initiative" of Harvard University's Berkman Klein Center, the University of Toronto and the Canadian SecDev Group,[19][20] whose mission statement is "to investigate and challenge state filtration and surveillance practices" in order "to generate a credible picture of these practices", has released numerous reports documenting the filtering of internet speech in various countries. While China has thus far (2011) proven to be the most rigorous in its attempts to filter unwanted parts of the internet from its citizens,[21] many other countries – including Singapore, Iran, Saudi Arabia, and Tunisia – have engaged in similar practices of Internet censorship. In one of the most vivid examples of information control, the Chinese government for a short time transparently forwarded requests to the Google search engine to its own, state-controlled search engines.[citation needed]

These examples of filtration bring to light many underlying questions concerning the freedom of speech.
For example, do governments have a legitimate role in limiting access to information? And if so, what forms of regulation are acceptable? Some argue, for example, that the blocking of "blogspot" and other websites in India failed to reconcile the conflicting interests of speech and expression on the one hand and legitimate government concerns on the other.[22]

At the close of the 19th century, concerns about privacy captivated the general public and led to the 1890 publication by Samuel Warren and Louis Brandeis of "The Right to Privacy".[23] The vitality of this article can be seen today in the U.S. Supreme Court decision Kyllo v. United States, 533 U.S. 27 (2001), where it is cited by the majority, those in concurrence, and even those in dissent.[24]

The motivation of the two authors in writing the article is heavily debated amongst scholars; however, two developments of the time give some insight into the reasons behind it. First, the sensationalistic press, and the concurrent rise and use of "yellow journalism" to promote the sale of newspapers in the period following the Civil War, brought privacy to the forefront of the public eye. The second development that brought privacy to the forefront of public concern was the technological development of "instant photography". The article set the stage for all privacy legislation to follow during the 20th and 21st centuries.

In 1967, the United States Supreme Court decision in Katz v. United States, 389 U.S. 347 (1967) established what is known as the reasonable expectation of privacy test for determining the applicability of the Fourth Amendment in a given situation. The test was not articulated by the majority, but by the concurring opinion of Justice Harlan. Under this test, 1) a person must exhibit an "actual (subjective) expectation of privacy" and 2) "the expectation [must] be one that society is prepared to recognize as 'reasonable'".

Inspired by the Watergate scandal, the United States Congress enacted the Privacy Act of 1974 just four months after the resignation of President Richard Nixon. In passing this Act, Congress found that "the privacy of an individual is directly affected by the collection, maintenance, use, and dissemination of personal information by Federal agencies" and that "the increasing use of computers and sophisticated information technology, while essential to the efficient operations of the Government, has greatly magnified the harm to individual privacy that can occur from any collection, maintenance, use, or dissemination of personal information".

The Foreign Intelligence Surveillance Act of 1978 (FISA), codified at 50 U.S.C. §§ 1801–1811, establishes standards and procedures for the use of electronic surveillance to collect "foreign intelligence" within the United States. §1804(a)(7)(B). FISA overrides the Electronic Communications Privacy Act during investigations when foreign intelligence is "a significant purpose" of said investigation. 50 U.S.C. § 1804(a)(7)(B) and §1823(a)(7)(B). Another notable result of FISA is the creation of the Foreign Intelligence Surveillance Court (FISC). All FISA orders are reviewed by this special court of federal district judges. The FISC meets in secret, with all proceedings usually hidden from both the public eye and the targets of the desired surveillance. For more information see: Foreign Intelligence Surveillance Act.

The ECPA represents an effort by the United States Congress to modernize federal wiretap law.
The ECPA amended Title III (see: Omnibus Crime Control and Safe Streets Act of 1968) and included two new acts in response to developing computer technology and communication networks. Thus the ECPA divides, in the domestic venue, into three parts: 1) the Wiretap Act, 2) the Stored Communications Act, and 3) the Pen Register Act.

The DPPA was passed in response to states selling motor vehicle records to private industry. These records contained personal information such as name, address, phone number, SSN, medical information, height, weight, gender, eye color, photograph and date of birth. In 1994, Congress passed the Driver's Privacy Protection Act (DPPA), 18 U.S.C. §§ 2721–2725, to cease this activity. For more information see: Driver's Privacy Protection Act.

The GLBA authorizes widespread sharing of personal information by financial institutions such as banks, insurers, and investment companies, permitting the sharing of personal information both between affiliated companies and between unaffiliated ones. To protect privacy, the act requires a variety of agencies, such as the SEC and FTC, to establish "appropriate standards for the financial institutions subject to their jurisdiction" to "insure security and confidentiality of customer records and information" and "protect against unauthorized access" to this information. 15 U.S.C. § 6801. For more information see: Gramm-Leach-Bliley Act.

Passed by Congress in 2002, the Homeland Security Act, 6 U.S.C. § 222, consolidated 22 federal agencies into what is commonly known today as the Department of Homeland Security (DHS). The HSA also created a Privacy Office under the DHS. The Secretary of Homeland Security must "appoint a senior official to assume primary responsibility for privacy policy". This privacy official's responsibilities include, but are not limited to, ensuring compliance with the Privacy Act of 1974, evaluating "legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the Federal Government", and preparing an annual report to Congress. For more information see: Homeland Security Act.

The IRTPA mandates that intelligence be "provided in its most shareable form" and that the heads of intelligence agencies and federal departments "promote a culture of information sharing". The IRTPA also sought to establish protection of privacy and civil liberties by setting up a five-member Privacy and Civil Liberties Oversight Board. This Board offers advice to both the President of the United States and the entire executive branch of the Federal Government concerning its actions to ensure that the branch's information-sharing policies adequately protect privacy and civil liberties. For more information see: Intelligence Reform and Terrorism Prevention Act.
https://en.wikipedia.org/wiki/Legal_aspects_of_computing
Clickjacking (classified as a user interface redress attack or UI redressing) is a malicious technique of tricking a user into clicking on something different from what the user perceives, thus potentially revealing confidential information or allowing others to take control of their computer while they click on seemingly innocuous objects, including web pages.[1][2][3][4][5]

Clickjacking is an instance of the confused deputy problem, wherein a computer is tricked into misusing its authority.[6]

In 2002, it was noted that it was possible to load a transparent layer over a web page and have the user's input affect the transparent layer without the user noticing.[7] However, fixes only started to trickle in around 2004,[8] and the general problem was mostly ignored as a major issue until 2008.[7]

In 2008, Jeremiah Grossman and Robert Hansen (of SecTheory) discovered that Adobe Flash Player could be clickjacked, allowing an attacker to gain access to a user's computer without the user's knowledge.[7] Grossman and Hansen coined the term "clickjacking",[9][10] a portmanteau of the words "click" and "hijacking".[7]

As more attacks of a similar nature were discovered, the focus of the term "UI redressing" was changed to describe the whole category of these attacks, rather than just clickjacking itself.[7]

One form of clickjacking takes advantage of vulnerabilities present in applications or web pages to allow the attacker to manipulate the user's computer for their own advantage. For example, a clickjacked page tricks a user into performing undesired actions by clicking on concealed links: the attackers load another page over the original page in a transparent layer, so that unsuspecting users think they are clicking visible buttons while they are actually performing actions on the invisible page, clicking buttons of the page below the layer. The hidden page may be an authentication page, in which case the attackers can trick users into performing actions the users never intended. There is no way of tracing such actions to the attackers later, as the users would have been genuinely authenticated on the hidden page.

Classic clickjacking refers to a situation in which an attacker uses hidden layers on web pages to manipulate the actions of a user's cursor, misleading the user about what is truly being clicked on.[18] A user might receive an email with a link to a video about a news item, but another webpage, say a product page on Amazon, can be "hidden" on top of or underneath the "PLAY" button of the news video. The user tries to "play" the video but actually "buys" the product from Amazon. The hacker can only send a single click, so they rely on the fact that the visitor is both logged into Amazon and has 1-click ordering enabled.

While the technical implementation of these attacks may be challenging due to cross-browser incompatibilities, a number of tools such as BeEF or the Metasploit Project offer almost fully automated exploitation of clients on vulnerable websites.
Clickjacking may be facilitated by – or may facilitate – other web attacks, such as XSS.[19][20] Likejacking is a malicious technique of tricking users viewing a website into "liking" a Facebook page or other social media posts/accounts that they did not intentionally mean to "like".[21] The term "likejacking" came from a comment posted by Corey Ballou in the article How to "Like" Anything on the Web (Safely),[22] which is one of the first documented postings explaining the possibility of malicious activity regarding Facebook's "like" button.[23] According to an article in IEEE Spectrum, a solution to likejacking was developed at one of Facebook's hackathons.[24] A "Like" bookmarklet is available that avoids the possibility of likejacking present in the Facebook like button.[25] Nested clickjacking, compared to classic clickjacking, works by embedding a malicious web frame between two frames of the original, harmless web page: that from the framed page and that which is displayed on the top window. This works due to a vulnerability in the HTTP header X-Frame-Options, in which, when this element has the value SAMEORIGIN, the web browser only checks the two aforementioned layers. The fact that additional frames can be added in between these two while remaining undetected means that attackers can use this for their benefit. In the past, with Google+ and the faulty version of X-Frame-Options, attackers were able to insert frames of their choice by using a vulnerability present in Google's Image Search engine. In between the image display frames, which were present in Google+ as well, these attacker-controlled frames were able to load unrestricted, allowing the attackers to mislead whoever came upon the image display page.[13] CursorJacking is a UI redressing technique that changes the cursor from the location the user perceives, discovered in 2010 by Eddy Bordi, a researcher at vulnerability.fr.[26] Marcus Niemietz demonstrated this with a custom cursor icon, and in 2012 Mario Heiderich did so by hiding the cursor.[27] Jordi Chancel, a researcher at Alternativ-Testing.fr, discovered a CursorJacking vulnerability using Flash, HTML and JavaScript code in Mozilla Firefox on Mac OS X systems (fixed in Firefox 30.0) which can lead to arbitrary code execution and webcam spying.[28] A second CursorJacking vulnerability was again discovered by Jordi Chancel in Mozilla Firefox on Mac OS X systems (fixed in Firefox 37.0), once again using Flash, HTML and JavaScript code, which can also lead to spying via a webcam and the execution of a malicious addon, allowing the execution of malware on the affected user's computer.[29] Different from other clickjacking techniques that redress a UI, MouseJack is a wireless hardware-based UI vulnerability, first reported by Marc Newlin of Bastille.net in 2016, which allows external keyboard input to be injected into vulnerable dongles.[30] Logitech supplied firmware patches, but other manufacturers failed to respond to this vulnerability.[31] In browserless clickjacking, attackers exploit vulnerabilities in programs to replicate classic clickjacking without relying on the presence of a web browser. This method of clickjacking is mainly prevalent on mobile devices, usually Android devices, especially due to the way in which toast notifications work.
Because toast notifications have a small delay between the moment the notification is requested and the moment the notification actually displays on-screen, attackers are capable of using that gap to create a dummy button that lies hidden underneath the notification and can still be clicked on.[7] CookieJacking is a form of clickjacking in which cookies are stolen from the victim's web browser. This is done by tricking the user into dragging an object which seemingly appears harmless, but is in fact making the user select the entire content of the cookie being targeted. From there, the attacker can acquire the cookie and all of the data that it possesses.[15][clarification needed] In fileJacking, attackers use the web browser's capability to navigate through the computer and access computer files in order to acquire personal data. They do so by tricking the user into establishing an active file server (through the file and folder selection window that browsers use). With this, attackers can access and take files from their victims' computers.[16] A 2014 paper from researchers at Carnegie Mellon University found that while browsers refuse to autofill if the protocol on the current login page is different from the protocol at the time the password was saved, some password managers would insecurely fill in passwords for the http version of https-saved passwords. Most managers did not protect against iFrame- and redirection-based attacks and exposed additional passwords where password synchronization had been used between multiple devices.[17] Protection against clickjacking (including likejacking) can be added to Mozilla Firefox desktop and mobile[32] versions by installing the NoScript add-on: its ClearClick feature, released on 8 October 2008, prevents users from clicking on invisible or "redressed" page elements of embedded documents or applets.[33] According to Google's "Browser Security Handbook" from 2008, NoScript's ClearClick is a "freely available product that offers a reasonable degree of protection" against clickjacking.[34] Protection from the newer cursorjacking attack was added to NoScript 2.2.8 RC1.[27] The "NoClickjack" web browser add-on (browser extension) adds client-side clickjack protection for users of Google Chrome, Mozilla Firefox, Opera and Microsoft Edge without interfering with the operation of legitimate iFrames. NoClickjack is based on technology developed for GuardedID, and the add-on is free of charge. GuardedID (a commercial product) includes client-side clickjack protection for users of Internet Explorer without interfering with the operation of legitimate iFrames.[35] GuardedID clickjack protection forces all frames to become visible. GuardedID teams[clarification needed] with the add-on NoClickjack to add protection for Google Chrome, Mozilla Firefox, Opera and Microsoft Edge. Gazelle is a Microsoft Research project secure web browser based on IE that uses an OS-like security model and has its own limited defenses against clickjacking.[36] In Gazelle, a window of different origin may only draw dynamic content over another window's screen space if the content it draws is opaque. The Intersection Observer v2 API[37] introduces the concept of tracking the actual "visibility" of a target element as a human being would define it.[38] This allows a framed widget to detect when it is being covered. The feature is enabled by default since Google Chrome 74, released in April 2019.[39] The API is also implemented by other Chromium-based browsers, such as Microsoft Edge and Opera.
Web site owners can protect their users against UI redressing (frame-based clickjacking) on the server side by including a framekiller JavaScript snippet in those pages they do not want to be included inside frames from different sources.[34] Such JavaScript-based protection is not always reliable. This is especially true on Internet Explorer,[34] where this kind of countermeasure can be circumvented "by design" by including the targeted page inside an <IFRAME SECURITY=restricted> element.[40] Introduced in 2009 in Internet Explorer 8 was a new HTTP header, X-Frame-Options, which offered partial protection against clickjacking[41][42] and was adopted by other browsers (Safari,[43] Firefox,[44] Chrome,[45] and Opera[46]) shortly afterwards. The header, when set by the website owner, declares its preferred framing policy: values of DENY, SAMEORIGIN, or ALLOW-FROM origin will prevent all framing, allow framing only by same-origin pages, or allow framing only by the specified origin, respectively. In addition, some advertising sites return a non-standard ALLOWALL value with the intention of allowing their content to be framed on any page (equivalent to not setting X-Frame-Options at all). In 2013 the X-Frame-Options header was officially published as RFC 7034,[47] but it is not an Internet standard; the document is provided for informational purposes only. The W3C's Content Security Policy Level 2 Recommendation provides an alternative security directive, frame-ancestors, which is intended to obsolete the X-Frame-Options header.[48] A security header like X-Frame-Options will not protect users against clickjacking attacks that do not use a frame.[49] The frame-ancestors directive of Content Security Policy (introduced in version 1.1) can allow or disallow embedding of content by potentially hostile pages using iframe, object, etc. This directive obsoletes the X-Frame-Options directive. If a page is served with both headers, the frame-ancestors policy should be preferred by the browser[50] – although some popular browsers disobey this requirement.[51] Example frame-ancestors policies:
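The original example policies did not survive extraction; the following are representative header values based on the published CSP Level 2 syntax (the directive and keyword spellings are standard, while the host names shown are placeholders):

```
# Disallow all framing of this page (analogous to X-Frame-Options: DENY):
Content-Security-Policy: frame-ancestors 'none';

# Allow framing only by pages from the same origin (analogous to SAMEORIGIN):
Content-Security-Policy: frame-ancestors 'self';

# Allow framing only by the listed origins (analogous to ALLOW-FROM, but
# supporting multiple sources and wildcards):
Content-Security-Policy: frame-ancestors 'self' https://example.com https://*.example.net;
```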
https://en.wikipedia.org/wiki/Clickjack
A security switch is a hardware device designed to protect computers, laptops, smartphones and similar devices from unauthorized access or operation, distinct from a virtual security switch which offers software protection. A security switch should be operated by an authorized user only; for this reason, it should be isolated from other devices, in order to prevent unauthorized access, and it should not be possible to bypass it, in order to prevent malicious manipulation. The primary purpose of a security switch is to provide protection against surveillance, eavesdropping, malware, spyware, and theft of digital devices. Unlike other protections or techniques, a security switch can provide protection even if security has already been breached, since it cannot be reached from other components and is not accessible by software. It can additionally disconnect or block peripheral devices, and perform "man in the middle" operations.[citation needed] A security switch can be used for human presence detection, since it can only be operated by a human; it can also be used as a firewall. A hardware kill switch (HKS) is a physical switch that cuts the signal or power line to a device or disables the chip running it. Google started to work on a hardware kill switch for AI in 2016.[2] By 2019, Apple and Google, along with a handful of smaller players, were designing "kill switches" that cut the power to the microphones or cameras in their devices. Google's first product that implemented this is the Nest Hub Max.[1] Hardware kill switches are already available and widely tested on the PinePhone, Librem, and Shiftphone, cutting power to the input peripherals (microphone, camera) as well as the network connectivity modules (Wi-Fi, cellular network).
https://en.wikipedia.org/wiki/Security_switch
In the design of experiments in statistics, the lady tasting tea is a randomized experiment devised by Ronald Fisher and reported in his book The Design of Experiments (1935).[1] The experiment is the original exposition of Fisher's notion of a null hypothesis, which is "never proved or established, but is possibly disproved, in the course of experimentation".[2][3] The example is loosely based on an event in Fisher's life. The woman in question, phycologist Muriel Bristol, claimed to be able to tell whether the tea or the milk was added first to a cup. Her future husband, William Roach, suggested that Fisher give her eight cups, four of each variety, in random order.[4] One could then ask what the probability was of her identifying correctly, purely by chance, the specific number of cups that she did (in fact, all eight). Fisher's description is less than 10 pages in length and is notable for its simplicity and completeness regarding terminology, calculations and design of the experiment.[5] The test used was Fisher's exact test. The experiment provides a subject with eight randomly ordered cups of tea – four prepared by pouring milk and then tea, four by pouring tea and then milk. The subject attempts to select the four cups prepared by one method or the other, and may compare cups directly against each other as desired. The method employed in the experiment is fully disclosed to the subject. The null hypothesis is that the subject has no ability to distinguish the teas. In Fisher's approach, there was no alternative hypothesis,[2] unlike in the Neyman–Pearson approach. The test statistic is a simple count of the number of successful attempts to select the four cups prepared by a given method. The distribution of possible numbers of successes, assuming the null hypothesis is true, can be computed using the number of combinations. Using the combination formula, with $n = 8$ total cups and $k = 4$ cups chosen, there are $\binom{8}{4} = \frac{8!}{4!(8-4)!} = 70$ possible combinations.

Number of successes | Patterns of selection (x = chosen, o = not chosen) | Frequency
0 | oooo | 1
1 | xooo, oxoo, ooxo, ooox | 4 × 4 = 16
2 | xxoo, xoxo, xoox, oxxo, oxox, ooxx | 6 × 6 = 36
3 | xxxo, xxox, xoxx, oxxx | 4 × 4 = 16
4 | xxxx | 1
Total | | 70

The frequencies of the possible numbers of successes, given in the final column of this table, are derived as follows. For 0 successes, there is clearly only one set of four choices (namely, choosing all four incorrect cups) giving this result. For one success and three failures, there are four correct cups of which one is selected, which by the combination formula can occur in $\binom{4}{1} = 4$ different ways (as shown in column 2, with x denoting a correct cup that is chosen and o denoting a correct cup that is not chosen); and independently of that, there are four incorrect cups of which three are selected, which can occur in $\binom{4}{3} = 4$ ways (as shown in the second column, this time with x interpreted as an incorrect cup which is not chosen, and o indicating an incorrect cup which is chosen). Thus a selection of any one correct cup and any three incorrect cups can occur in any of 4 × 4 = 16 ways. The frequencies of the other possible numbers of successes are calculated correspondingly. Thus the number of successes is distributed according to the hypergeometric distribution.
Specifically, for a random variable $X$ equal to the number of successes, we may write $X \sim \operatorname{Hypergeometric}(N = 8, K = 4, n = 4)$, where $N$ is the population size (the total number of cups of tea), $K$ is the number of success states in the population (the four cups of either type), and $n$ is the number of draws (four cups). The distribution of combinations for making $k$ selections out of the $2k$ available selections corresponds to the $k$th row of Pascal's triangle, such that each integer in the row is squared. In this case, $k = 4$ because 4 teacups are selected from the 8 available teacups. The critical region for rejection of the null hypothesis of no ability to distinguish was the single case of 4 successes out of 4 possible, based on the conventional probability criterion of < 5%. This is the critical region because, under the null hypothesis of no ability to distinguish, 4 successes has 1 chance out of 70 (≈ 1.4% < 5%) of occurring, whereas at least 3 of 4 successes has a probability of (16 + 1)/70 (≈ 24.3% > 5%). Thus, Fisher was willing to reject the null hypothesis – effectively acknowledging the lady's ability at a 1.4% significance level (but without quantifying her ability) – if, and only if, she properly categorized all 8 cups. Fisher later discussed the benefits of more trials and repeated tests. David Salsburg reports that a colleague of Fisher, H. Fairfield Smith, revealed that in the actual experiment the lady succeeded in identifying all eight cups correctly.[6][7] The chance of someone who just guesses getting all correct, assuming she guesses that any four had the tea put in first and the other four the milk, would be only 1 in 70 (the number of combinations of 8 items taken 4 at a time). David Salsburg published a popular science book entitled The Lady Tasting Tea,[6] which describes Fisher's experiment and ideas on randomization. Deb Basu wrote that "the famous case of the 'lady tasting tea'" was "one of the two supporting pillars ... of the randomization analysis of experimental data."[8]
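A short calculation makes the critical-region argument above concrete. The following Python sketch (an illustration, not part of Fisher's original text) tabulates the hypergeometric null distribution for the eight-cup design and the two tail probabilities quoted above:

```python
from math import comb

N, K, n = 8, 4, 4      # total cups, milk-first cups, cups selected
total = comb(N, K)     # 70 equally likely selections under the null

# P(X = k): choose k of the 4 correct cups and n - k of the 4 incorrect ones.
pmf = {k: comb(K, k) * comb(N - K, n - k) / total for k in range(n + 1)}

for k, p in pmf.items():
    print(f"P(X = {k}) = {comb(K, k) * comb(N - K, n - k)}/70 = {p:.4f}")

print(f"P(X = 4)  = {pmf[4]:.4f}   # ~1.4% < 5%: reject the null only here")
print(f"P(X >= 3) = {pmf[3] + pmf[4]:.4f}   # ~24.3% > 5%: 3 successes is not enough")
```

Running it reproduces the frequencies 1, 16, 36, 16, 1 from the table and confirms that only the all-correct outcome falls below the 5% criterion.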
https://en.wikipedia.org/wiki/Lady_tasting_tea
In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system. Emulation refers to the ability of a computer program in an electronic device to emulate (or imitate) another program or device. Many printers, for example, are designed to emulate HP LaserJet printers because a significant amount of software is written specifically for HP models. If a non-HP printer emulates an HP printer, any software designed for an actual HP printer will also function on the non-HP device, producing equivalent print results. Since at least the 1990s, many video game enthusiasts and hobbyists have used emulators to play classic arcade games from the 1980s using the games' original 1980s machine code and data, which is interpreted by a current-era system, and to emulate old video game consoles (see video game console emulator). A hardware emulator is an emulator which takes the form of a hardware device. Examples include the DOS-compatible card installed in some 1990s-era Macintosh computers, such as the Centris 610 or Performa 630, that allowed them to run personal computer (PC) software programs, and field-programmable gate array-based hardware emulators. The Church–Turing thesis implies that, in theory, any operating environment can be emulated within any other environment, assuming memory limitations are ignored. However, in practice, it can be quite difficult, particularly when the exact behavior of the system to be emulated is not documented and has to be deduced through reverse engineering. The thesis also says nothing about timing constraints; if the emulator does not perform as quickly as the original hardware did, the software inside the emulation may run much more slowly (possibly triggering timer interrupts that alter behavior). "Can a Commodore 64 emulate MS-DOS?" Yes, it's possible for a [Commodore] 64 to emulate an IBM PC [which uses MS-DOS], in the same sense that it's possible to bail out Lake Michigan with a teaspoon. Most emulators just emulate a hardware architecture – if operating system firmware or software is required for the desired software, it must be provided as well (and may itself be emulated). Both the OS and the software will then be interpreted by the emulator, rather than being run by native hardware. Apart from this interpreter for the emulated binary machine language, some other hardware (such as input or output devices) must be provided in virtual form as well; for example, if writing to a specific memory location should influence what is displayed on the screen, then this would need to be emulated. While emulation could, if taken to the extreme, go down to the atomic level, basing its output on a simulation of the actual circuitry from a virtual power source, this would be a highly unusual solution. Emulators typically stop at a simulation of the documented hardware specifications and digital logic. Sufficient emulation of some hardware platforms requires extreme accuracy, down to the level of individual clock cycles, undocumented features, unpredictable analog elements, and implementation bugs. This is particularly the case with classic home computers such as the Commodore 64, whose software often depends on highly sophisticated low-level programming tricks invented by game programmers and the "demoscene".
In contrast, some other platforms have made very little use of direct hardware addressing, such as the PlayStation 4.[citation needed] In these cases, a simple compatibility layer may suffice. This translates system calls for the foreign system into system calls for the host system, e.g., the Linux compatibility layer used on *BSD to run closed-source Linux native software on FreeBSD and NetBSD.[3] For example, while the Nintendo 64 graphic processor was fully programmable, most games used one of a few pre-made programs, which were mostly self-contained and communicated with the game via FIFO; therefore, many emulators do not emulate the graphic processor at all, but simply interpret the commands received from the CPU as the original program would. Developers of software for embedded systems or video game consoles often design their software on especially accurate emulators called simulators before trying it on the real hardware. This is so that software can be produced and tested before the final hardware exists in large quantities, and so that it can be tested without taking the time to copy the program to be debugged at a low level and without introducing the side effects of a debugger. In many cases, the simulator is actually produced by the company providing the hardware, which theoretically increases its accuracy. Math co-processor emulators allow programs compiled with math instructions to run on machines that do not have the co-processor installed, though the extra work done by the CPU may slow the system down. If a math coprocessor is not installed or present on the CPU, when the CPU executes any co-processor instruction it will raise a specific interrupt (coprocessor not available), calling the math emulator routines. When the instruction is successfully emulated, the program continues executing. Logic simulation is the use of a computer program to simulate the operation of a digital circuit such as a processor.[1] This is done after a digital circuit has been designed in logic equations, but before the circuit is fabricated in hardware. Functional simulation is the use of a computer program to simulate the execution of a second computer program written in symbolic assembly language or compiler language, rather than in binary machine code. By using a functional simulator, programmers can execute and trace selected sections of source code to search for programming errors (bugs), without generating binary code. This is distinct from simulating execution of binary code, which is software emulation. The first functional simulator was written by Autonetics about 1960[citation needed] for testing assembly language programs for later execution in the military computer D-17B. This made it possible for flight programs to be written, executed, and tested before D-17B computer hardware had been built. Autonetics also programmed a functional simulator for testing flight programs for later execution in the military computer D-37C. Video game console emulators are programs that allow a personal computer or video game console to emulate another video game console. They are most often used to play older 1980s to 2000s-era video games on modern personal computers and more contemporary video game consoles. They are also used to translate games into other languages, to modify existing games, and in the development process of "home brew" DIY demos and in the creation of new games for older systems.
The Internet has helped in the spread of console emulators, as most, if not all, would be unavailable for sale in retail outlets. Examples of console emulators that have been released in the last few decades are: RPCS3, Dolphin, Cemu, PCSX2, PPSSPP, ZSNES, Citra, ePSXe, Project64, Visual Boy Advance, Nestopia, and Yuzu. Due to their popularity, emulators have been impersonated by malware. Most of these fake emulators claim to be for video game consoles such as the Xbox 360, Xbox One, and Nintendo 3DS. Generally, such fakes make currently impossible claims, such as being able to run Xbox One and Xbox 360 games in a single program.[4] As computers and global computer networks continued to advance and emulator developers grew more skilled in their work, the length of time between the commercial release of a console and its successful emulation began to shrink. Fifth-generation consoles such as the Nintendo 64 and PlayStation, and sixth-generation handhelds such as the Game Boy Advance, saw significant progress toward emulation during their production. This led to an effort by console manufacturers to stop unofficial emulation, but consistent failures such as Sega v. Accolade, 977 F.2d 1510 (9th Cir. 1992), Sony Computer Entertainment, Inc. v. Connectix Corporation, 203 F.3d 596 (2000), and Sony Computer Entertainment America v. Bleem, 214 F.3d 1022 (2000),[5] have had the opposite effect. According to all legal precedents, emulation is legal within the United States. However, unauthorized distribution of copyrighted code remains illegal, according to both country-specific copyright law and international copyright law under the Berne Convention.[6][better source needed] Under United States law, obtaining a dumped copy of the original machine's BIOS is legal under the ruling Lewis Galoob Toys, Inc. v. Nintendo of America, Inc., 964 F.2d 965 (9th Cir. 1992), as fair use, as long as the user obtained a legally purchased copy of the machine. To mitigate the need for a BIOS dump, however, several emulators for platforms such as the Game Boy Advance are capable of running without a BIOS file, using high-level emulation to simulate BIOS subroutines at a slight cost in emulation accuracy.[7][8][9] Terminal emulators are software programs that provide modern computers and devices interactive access to applications running on mainframe computer operating systems or other host systems such as HP-UX or OpenVMS. Terminals such as the IBM 3270 or VT100 and many others are no longer produced as physical devices. Instead, software running on modern operating systems simulates a "dumb" terminal and is able to render the graphical and text elements of the host application, send keystrokes and process commands using the appropriate terminal protocol. Some terminal emulation applications include Attachmate Reflection, IBM Personal Communications, and Micro Focus Rumba. Various other types of emulators exist. Typically, an emulator is divided into modules that correspond roughly to the emulated computer's subsystems. Most often, an emulator will be composed of the following modules: a CPU simulator, a memory subsystem module, and various input/output (I/O) device emulators. Buses are often not emulated, either for reasons of performance or simplicity, and virtual peripherals communicate directly with the CPU or the memory subsystem. It is possible for the memory subsystem emulation to be reduced to simply an array of elements, each sized like an emulated word; however, this model fails very quickly as soon as any location in the computer's logical memory does not match physical memory.
This is clearly the case whenever the emulated hardware allows for advanced memory management (in which case the MMU logic can be embedded in the memory emulator, made a module of its own, or sometimes integrated into the CPU simulator). Even if the emulated computer does not feature an MMU, though, there are usually other factors that break the equivalence between logical and physical memory: many (if not most) architectures offer memory-mapped I/O; even those that do not often have a block of logical memory mapped to ROM, which means that the memory-array module must be discarded if the read-only nature of ROM is to be emulated. Features such as bank switching or segmentation may also complicate memory emulation. As a result, most emulators implement at least two procedures for writing to and reading from logical memory, and it is these procedures' duty to map every access to the correct location of the correct object. On a base-limit addressing system where memory from address 0 to address ROMSIZE − 1 is read-only memory, while the rest is RAM, something along the lines of the procedures sketched below would be typical. The CPU simulator is often the most complicated part of an emulator. Many emulators are written using "pre-packaged" CPU simulators, in order to concentrate on good and efficient emulation of a specific machine. The simplest form of a CPU simulator is an interpreter, which is a computer program that follows the execution flow of the emulated program code and, for every machine code instruction encountered, executes operations on the host processor that are semantically equivalent to the original instructions. This is made possible by assigning a variable to each register and flag of the simulated CPU. The logic of the simulated CPU can then more or less be directly translated into software algorithms, creating a software re-implementation that basically mirrors the original hardware implementation. The second sketch below illustrates how CPU simulation can be accomplished by an interpreter. In this case, interrupts are checked for before every instruction executed, though this behavior is rare in real emulators for performance reasons (it is generally faster to use a subroutine to do the work of an interrupt). Interpreters are very popular as computer simulators, as they are much simpler to implement than more time-efficient alternative solutions, and their speed is more than adequate for emulating computers of more than roughly a decade ago on modern machines. However, the speed penalty inherent in interpretation can be a problem when emulating computers whose processor speed is on the same order of magnitude as the host machine[dubious–discuss]. Until not many years ago, emulation in such situations was considered completely impractical by many[dubious–discuss]. What allowed breaking through this restriction were the advances in dynamic recompilation techniques[dubious–discuss]. Simple a priori translation of emulated program code into code runnable on the host architecture is usually impossible for several reasons. Various forms of dynamic recompilation, including the popular just-in-time (JIT) compilation technique, try to circumvent these problems by waiting until the processor control flow jumps into a location containing untranslated code, and only then ("just in time") translating a block of the code into host code that can be executed.
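The pseudocode listings that originally accompanied this section did not survive extraction; the following Python sketches are illustrative reconstructions, not the article's original listings. The first shows the pair of memory-access procedures described above for a hypothetical base-limit system in which addresses 0 through ROMSIZE − 1 are ROM and the rest is RAM:

```python
ROMSIZE = 0x4000        # hypothetical size of the read-only region
MEMSIZE = 0x10000       # hypothetical total address space

memory = bytearray(MEMSIZE)   # backing store for both ROM and RAM

def read_memory(address: int) -> int:
    # Reads are legal everywhere; ROM and RAM behave identically on reads.
    return memory[address]

def write_memory(address: int, value: int) -> None:
    # Writes to the ROM region are silently ignored, emulating the
    # read-only nature of ROM; writes at ROMSIZE and above go to RAM.
    if address >= ROMSIZE:
        memory[address] = value & 0xFF
```

The second sketch (reusing memory and read_memory from the first) is a minimal interpreter loop for an invented two-instruction CPU, checking for interrupts before each instruction exactly as described above; real opcode sets and interrupt handling would of course be machine-specific:

```python
# Simulated CPU state: one variable per register/flag, as described above.
pc, acc = 0, 0            # program counter and accumulator
interrupt_pending = False

LOAD_IMM, ADD_IMM, HALT = 0x01, 0x02, 0xFF   # invented opcodes

def service_interrupt():
    global interrupt_pending
    interrupt_pending = False  # a real emulator would vector to a handler here

def step() -> bool:
    """Fetch, decode and execute one instruction; return False on HALT."""
    global pc, acc
    if interrupt_pending:      # interrupts checked before every instruction
        service_interrupt()
    opcode = read_memory(pc); pc += 1
    if opcode == LOAD_IMM:     # load the next byte into the accumulator
        acc = read_memory(pc); pc += 1
    elif opcode == ADD_IMM:    # add the next byte, wrapping at 8 bits
        acc = (acc + read_memory(pc)) & 0xFF; pc += 1
    elif opcode == HALT:
        return False
    return True

# Load a tiny program into "ROM" and run it: acc = 5 + 7.
for addr, byte in enumerate([LOAD_IMM, 5, ADD_IMM, 7, HALT]):
    memory[addr] = byte        # poking ROM directly, bypassing write_memory
while step():
    pass
print(acc)                     # prints 12
```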
The translated code is kept in a code cache[dubious–discuss], and the original code is not lost or affected; this way, even data segments can be (meaninglessly) translated by the recompiler, resulting in no more than a waste of translation time. Increased speed may not be desirable, as some older games were not designed with the speed of faster computers in mind. A game designed for a 30 MHz PC with a level timer of 300 game seconds might only give the player 30 seconds on a 300 MHz PC. Other programs, such as some DOS programs, may not even run on faster computers. Particularly when emulating computers which were "closed-box", in which changes to the core of the system were not typical, software may use techniques that depend on specific characteristics of the computer it ran on (e.g. its CPU's speed), and thus precise control of the speed of emulation is important for such applications to be properly emulated. Most emulators do not, as mentioned earlier, emulate the main system bus; each I/O device is thus often treated as a special case, and no consistent interface for virtual peripherals is provided. This can result in a performance advantage, since each I/O module can be tailored to the characteristics of the emulated device; designs based on a standard, unified I/O API can, however, rival such simpler models, if well thought-out, and they have the additional advantage of "automatically" providing a plug-in service through which third-party virtual devices can be used within the emulator. A unified I/O API may not necessarily mirror the structure of the real hardware bus: bus design is limited by several electrical constraints and a need for hardware concurrency management that can mostly be ignored in a software implementation. Even in emulators that treat each device as a special case, there is usually a common basic infrastructure shared by all devices. Emulation is one strategy in pursuit of digital preservation and combating obsolescence. Emulation focuses on recreating an original computer environment, which can be time-consuming and difficult to achieve, but valuable because of its ability to maintain a closer connection to the authenticity of the digital object, operating system, or even gaming platform.[10] Emulation addresses the original hardware and software environment of the digital object, and recreates it on a current machine.[11] The emulator allows the user to have access to any kind of application or operating system on a current platform, while the software runs as it did in its original environment.[12] Jeffery Rothenberg, an early proponent of emulation as a digital preservation strategy, states, "the ideal approach would provide a single extensible, long-term solution that can be designed once and for all and applied uniformly, automatically, and in organized synchrony (for example, at every refresh cycle) to all types of documents and media".[13] He further states that this should not only apply to out-of-date systems, but also be upwardly mobile to future unknown systems.[14] Practically speaking, when a certain application is released in a new version, rather than addressing compatibility issues and migration for every digital object created in the previous version of that application, one could create an emulator for the application, allowing access to all of said digital objects. Because of its primary use of digital formats, new media art relies heavily on emulation as a preservation strategy.
Artists such as Cory Arcangel specialize in resurrecting obsolete technologies in their artwork and recognize the importance of a decentralized and deinstitutionalized process for the preservation of digital culture. In many cases, the goal of emulation in new media art is to preserve a digital medium so that it can be saved indefinitely and reproduced without error, so that there is no reliance on hardware that ages and becomes obsolete. The paradox is that the emulation and the emulator have to be made to work on future computers.[15] Emulation techniques are commonly used during the design and development of new systems. Emulation eases the development process by providing the ability to detect, recreate and repair flaws in the design even before the system is actually built.[16] It is particularly useful in the design of multi-core systems, where concurrency errors can be very difficult to detect and correct without the controlled environment provided by virtual hardware.[17] This also allows software development to take place before the hardware is ready,[18] thus helping to validate design decisions and giving a little more control. The word "emulator" was coined in 1963 at IBM[19] during development of the NPL (IBM System/360) product line, using a "new combination of software, microcode, and hardware".[20] They discovered that simulation using additional instructions implemented in microcode and hardware, instead of software simulation using only standard instructions, to execute programs written for earlier IBM computers dramatically increased simulation speed. Earlier, IBM had provided simulators for, e.g., the 650 on the 705.[21] In addition to simulators, IBM had compatibility features on the 709 and 7090,[22] for which it provided the IBM 709 computer with a program to run legacy programs written for the IBM 704 on the 709 and later on the IBM 7090. This program used the instructions added by the compatibility feature[23] to trap instructions requiring special handling; all other 704 instructions ran the same on a 7090. The compatibility feature on the 1410[24] only required setting a console toggle switch, not a support program. In 1963, when microcode was first used to speed up this simulation process, IBM engineers coined the term "emulator" to describe the concept. In the 2000s, it has become common to use the word "emulate" in the context of software. However, before 1980, "emulation" referred only to emulation with a hardware or microcode assist, while "simulation" referred to pure software emulation.[25] For example, a computer specially built for running programs designed for another architecture is an emulator. In contrast, a simulator could be a program which runs on a PC, so that old Atari games can be simulated on it. Purists continue to insist on this distinction, but currently the term "emulation" often means the complete imitation of a machine executing binary code, while "simulation" often refers to computer simulation, where a computer program is used to simulate an abstract model. Computer simulation is used in virtually every scientific and engineering domain, and computer science is no exception, with several projects simulating abstract models of computer systems, such as network simulation, which both practically and semantically differs from network emulation.[26] Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their components, or only the functionality required to run various operating systems.
Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform.[27][28] At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.[29] Each hypervisor can manage or run multiple virtual machines.
https://en.wikipedia.org/wiki/Emulator
In abstract algebra, the group isomorphism problem is the decision problem of determining whether two given finite group presentations refer to isomorphic groups. The isomorphism problem was formulated by Max Dehn,[1] and together with the word problem and conjugacy problem, is one of three fundamental decision problems in group theory he identified in 1911.[2] All three problems, formulated as ranging over all finitely presented groups, are undecidable. In the case of the isomorphism problem, this means that there does not exist a computer algorithm that takes two finite group presentations and decides whether or not the groups are isomorphic, regardless of how (finitely) much time is allowed for the algorithm to run and how (finitely) much memory is available. In fact, the problem of deciding whether a finitely presented group is trivial is undecidable,[3] a consequence of the Adian–Rabin theorem due to Sergei Adian and Michael O. Rabin. However, there are some classes of finitely presented groups for which the restriction of the isomorphism problem is known to be decidable. They include finitely generated abelian groups, finite groups, Gromov-hyperbolic groups,[4] virtually torsion-free relatively hyperbolic groups with nilpotent parabolics,[5] one-relator groups with non-trivial center,[6] and two-generator one-relator groups with torsion.[7] The group isomorphism problem, restricted to groups that are given by multiplication tables, can be reduced to a graph isomorphism problem, but not vice versa.[8] Both have quasi-polynomial-time algorithms, the former since 1978, attributed to Robert Tarjan,[9] and the latter since 2015, by László Babai.[10] A small but important improvement for the case of p-groups of class 2 was obtained in 2023 by Xiaorui Sun.[11][8]
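For groups given by multiplication tables (the setting of the Tarjan and Babai results above), the decision problem itself is easy to state. The following Python sketch is a naive brute-force check, included purely as an illustration; it runs in factorial time and is not the quasi-polynomial-time algorithm mentioned above:

```python
from itertools import permutations

def are_isomorphic(table_g, table_h):
    """Decide whether two finite groups, given as n-by-n multiplication
    tables over elements 0..n-1, are isomorphic, by trying every bijection."""
    n = len(table_g)
    if n != len(table_h):
        return False
    for phi in permutations(range(n)):            # candidate bijection G -> H
        if all(phi[table_g[a][b]] == table_h[phi[a]][phi[b]]
               for a in range(n) for b in range(n)):
            return True                            # phi preserves the operation
    return False

# The cyclic group Z4 and the Klein four-group both have order 4
# but are not isomorphic:
z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
klein = [[a ^ b for b in range(4)] for a in range(4)]   # XOR realizes C2 x C2
print(are_isomorphic(z4, z4))     # True
print(are_isomorphic(z4, klein))  # False
```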
https://en.wikipedia.org/wiki/Group_isomorphism_problem
Managed private cloud (also known as "hosted private cloud" or "single-tenant SaaS") refers to a principle in software architecture where a single instance of the software runs on a server, serves a single client organization (tenant), and is managed by a third party. The third-party provider is responsible for providing the hardware for the server and also for preliminary maintenance. This is in contrast to multitenancy, where multiple client organizations share a single server, and to an on-premises deployment, where the client organization hosts its own software instance. Managed private clouds also fall under the larger umbrella of cloud computing. The need for private clouds arose from enterprises requiring a dedicated service and infrastructure for their cloud computing needs, such as for business-critical operations, improved security, and better control over their resources. Managed private cloud adoption is a popular choice among organizations. It has been on the rise[1] due to enterprises requiring a dedicated cloud environment while preferring to avoid the management, maintenance, and future upgrade costs of the associated infrastructure and services. Such operational costs are unavoidable in on-premises private cloud data centers. A managed private cloud cuts down on upkeep costs by outsourcing infrastructure management and maintenance to the managed cloud provider. It is easier to integrate an organization's existing software, services, and applications into a dedicated cloud hosting infrastructure, which can be customized to the client's needs, than into a public cloud platform, whose hardware and infrastructure/software platform cannot be individualized for each client.[2] Customers who choose a managed private cloud deployment usually do so because of their desire for efficient cloud deployment, combined with a need for the service customization or integration only available in a single-tenant environment. The key benefits[3] of the different deployment types overlap between these cloud solutions, but each also has drawbacks. Since deployments are done in a single-tenant environment, they are usually cost-prohibitive for small and medium-sized businesses. While server upkeep and maintenance are handled by the service provider, including network management and security, the client is charged for all such services. It is up to the potential client to determine whether a managed private cloud solution aligns with their business objectives and budget. While the service provider maintains the upkeep of servers, network, and platform infrastructure, sensitive data is typically not stored on managed private clouds, as doing so may leave business-critical information prone to breaches via third-party attacks on the cloud service provider. Common customizations[4] and integrations vary by provider. Software companies have taken a variety of strategies in the managed private cloud realm. Some software organizations, such as Microsoft, have provided managed private cloud options internally. Companies that offer an on-premises deployment option, by definition, enable third-party companies to market managed private cloud solutions, and a number of service providers do so.
https://en.wikipedia.org/wiki/Managed_private_cloud
The vulnerability of Japanese naval codes and ciphers was crucial to the conduct of World War II, and had an important influence on foreign relations between Japan and the west in the years leading up to the war as well. Every Japanese code was eventually broken, and the intelligence gathered made possible such operations as the victorious American ambush of the Japanese Navy at Midway in 1942 (by breaking code JN-25b) and the shooting down of Japanese admiral Isoroku Yamamoto a year later in Operation Vengeance. The Imperial Japanese Navy (IJN) used many codes and ciphers. All of these cryptosystems were known differently by different organizations; the names listed below are those given by Western cryptanalytic operations. The Red Book code was an IJN code book system used in World War I and after. It was called "Red Book" because the American photographs made of it were bound in red covers.[1] It should not be confused with the RED cipher used by the diplomatic corps. This code consisted of two books. The first contained the code itself; the second contained an additive cipher which was applied to the codes before transmission, with the starting point for the latter being embedded in the transmitted message. A copy of the code book was obtained in a "black bag" operation on the luggage of a Japanese naval attaché in 1923; after three years of work Agnes Driscoll was able to break the additive portion of the code.[2][3][4] Knowledge of the Red Book code helped crack the similarly constructed Blue Book code.[1] CORAL was a cipher machine developed for Japanese naval attaché ciphers, similar to JADE. It was not used extensively,[5][6] but Vice Admiral Katsuo Abe, a Japanese representative to the Axis Tripartite Military Commission, passed considerable information about German deployments in CORAL, intelligence "essential for Allied military decision making in the European Theater."[7] JADE was a cipher machine used by the Imperial Japanese Navy from late 1942 to 1944, similar to CORAL. The Dockyard codes were a succession of codes used to communicate between Japanese naval installations. These were comparatively easily broken by British codebreakers in Singapore and are believed to have been the source of early indications of imminent naval war preparations.[8] JN-11, the Fleet Auxiliary System, was derived from the JN-40 merchant-shipping code and was important for information on troop convoys and orders of battle. JN-20 was an inter-island cipher that provided valuable intelligence, especially when periodic changes to JN-25 temporarily blacked out U.S. decryption. JN-20 exploitation produced the "AF is short of water" message that established the main target of the Japanese Fleet, leading to a decisive U.S. victory at the Battle of Midway in 1942.[9]: p.155 JN-25 is the name given by codebreakers to the main, and most secure, command and control communications scheme used by the IJN during World War II.[10] Named as the 25th Japanese Navy system identified, it was initially given the designation AN-1 as a "research project" rather than a "current decryption" job. The project required reconstructing the meaning of thirty thousand code groups and piecing together thirty thousand random additives.[11] Introduced from 1 June 1939 to replace Blue (and the most recent descendant of the Red code),[12] it was an enciphered code, producing five-numeral groups for transmission.
New code books and super-enciphering books were introduced from time to time, each new version requiring a more or less fresh cryptanalytic attack. John Tiltman, with some help from Alan Turing (at GC&CS, the Government Code and Cypher School), had "solved" JN-25 by 1941; that is, they knew that it was a five-digit code with a codebook to translate words into five digits, and that there was a second "additive" book that the sender used to add to the original numbers. "But knowing all this didn't help them read a single message." By April 1942, JN-25 was about 20 percent readable, so codebreakers could read "about one in five words", and traffic analysis was far more useful.[13] Tiltman had devised a slow (neither easy nor quick) method of breaking it, and had noted that all the numbers in the codebook were divisible by three.[14] "Breaking" rather than "solving" a code involves learning enough code words and indicators so that any given message can be read.[15] In particular, JN-25 was significantly changed on 1 December 1940 (JN-25a);[12] and again on 4 December 1941 (JN-25b),[16] just before the attack on Pearl Harbor. British, Australian, Dutch and American cryptanalysts co-operated on breaking JN-25 well before the Pearl Harbor attack, but because the Japanese Navy was not engaged in significant battle operations before then, there was little traffic available to use as raw material. Before then, IJN discussions and orders could generally travel by routes more secure than broadcast, such as courier or direct delivery by an IJN vessel. Publicly available accounts differ, but the most credible agree that the JN-25 version in use before December 1941 was not more than perhaps 10% broken at the time of the attack,[17] and that primarily in stripping away its super-encipherment. JN-25 traffic increased immensely with the outbreak of naval warfare at the end of 1941 and provided the cryptographic "depth" needed to succeed in substantially breaking the existing and subsequent versions of JN-25. The American effort was directed from Washington, D.C. by the U.S. Navy's signals intelligence command, OP-20-G; at Pearl Harbor it was centered at the Navy's Combat Intelligence Unit (Station HYPO, also known as COM 14),[18] led by Commander Joseph Rochefort.[10] However, in 1942 not every cryptogram was decoded, as Japanese traffic was too heavy for the undermanned Combat Intelligence Unit.[19] With the assistance of Station CAST (also known as COM 16, jointly commanded by Lts Rudolph Fabian and John Lietwiler)[20] in the Philippines, and the British Far East Combined Bureau in Singapore, and using a punched-card tabulating machine manufactured by International Business Machines, a successful attack was mounted against the 4 December 1941 edition (JN-25b). Together they made considerable progress by early 1942. "Cribs" exploited common formalities in Japanese messages, such as "I have the honor to inform your excellency" (see known plaintext attack). Later versions of JN-25 were introduced: JN-25c from 28 May 1942, deferred from 1 April and then 1 May, providing details of the attacks on Midway and Port Moresby. JN-25d was introduced from 1 April 1943, and while the additive had been changed, large portions had been recovered two weeks later, which provided details of Yamamoto's plans that were used in Operation Vengeance, the shooting-down of his plane.[21] JN-39 was a naval code used by merchant ships (commonly known as the "maru code"),[22] broken in May 1940. On 28 May 1941, when the whale factory ship Nisshin Maru No. 2 visited San Francisco, U.S.
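As an illustration of how an enciphered code of this kind works, the Python sketch below is purely illustrative – the real code books and additive tables are not reproduced here. It encodes a message as five-digit groups and super-enciphers them with additives using non-carrying digit-wise addition, the "false addition" generally described in accounts of JN-25; the divisibility-by-three property Tiltman noted serves as a garble check on the underlying code groups:

```python
# Toy code book: every code group is a five-digit multiple of three,
# as Tiltman observed of the real JN-25 book (the groups here are invented).
CODEBOOK = {"MIDWAY": "44721", "ATTACK": "30528", "JUNE": "90615"}

def false_add(group: str, additive: str) -> str:
    """Digit-wise addition modulo 10 with no carrying ('false addition')."""
    return "".join(str((int(a) + int(b)) % 10) for a, b in zip(group, additive))

def false_subtract(group: str, additive: str) -> str:
    return "".join(str((int(a) - int(b)) % 10) for a, b in zip(group, additive))

# A stretch of the additive book, entered at a starting point that the
# sender indicates somewhere within the transmitted message.
ADDITIVES = ["58391", "10467", "77215"]

plain_groups = [CODEBOOK[w] for w in ["ATTACK", "MIDWAY", "JUNE"]]
cipher_groups = [false_add(g, a) for g, a in zip(plain_groups, ADDITIVES)]
print(cipher_groups)                     # what was actually transmitted

# The receiver strips the additive; divisibility by three confirms that a
# stripped group is (probably) a genuine code-book group and not a garble.
for c, a in zip(cipher_groups, ADDITIVES):
    g = false_subtract(c, a)
    assert int(g) % 3 == 0, "garbled group"
    print(g)
```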
Customs Service Agent George Muller and Commander R. P. McCullough of the U.S. Navy's 12th Naval District (responsible for the area) boarded her and seized her codebooks, without informing the Office of Naval Intelligence (ONI). Copies were made, clumsily, and the originals returned.[23] The Japanese quickly realized JN-39 was compromised, and replaced it with JN-40.[24] JN-40 was originally believed to be a code super-enciphered with a numerical additive, in the same way as JN-25. However, in September 1942, an error by the Japanese gave clues to John MacInnes and Brian Townend, codebreakers at the British FECB, Kilindini. It was in fact a fractionating transposition cipher based on a substitution table of 100 groups of two figures each, followed by a columnar transposition. By November 1942, they were able to read all previous traffic and break each message as they received it. Enemy shipping, including troop convoys, was thus trackable, exposing it to Allied attack. Over the next two weeks they broke two more systems, the "previously impenetrable" JN-167 and JN-152.[24][25] The "minor operations code" often contained useful information on minor troop movements.[26] JN-152 was a simple transposition and substitution cipher used for broadcasting navigation warnings. In 1942, after breaking JN-40, the FECB at Kilindini broke JN-152 and the previously impenetrable JN-167, a merchant-shipping cipher.[27][28] In June 1942 the Chicago Tribune, run by isolationist Col. Robert R. McCormick, published an article implying that the United States had broken the Japanese codes, saying the U.S. Navy knew in advance about the Japanese attack on Midway Island, and published dispositions of the Japanese invasion fleet. The executive officer of Lexington, Commander Morton T. Seligman (who was transferred to shore duties), had shown Nimitz's executive order to reporter Stanley Johnston. The government at first wanted to prosecute the Tribune under the Espionage Act of 1917. For various reasons, including the desire not to bring more attention to the article and because the Espionage Act did not cover enemy secrets, the charges were dropped. A grand jury investigation did not result in prosecution but generated further publicity and, according to Walter Winchell, "tossed security out of the window". Several in Britain believed that their worst fears about American security were realized.[29] In early August 1942, a RAN intercept unit in Melbourne (FRUMEL) heard Japanese messages sent in a superseded lower-grade code. Changes were then made to the codebooks and the call-sign system, starting with the new JN-25c codebook (issued two months before). However, the changes indicated that the Japanese believed the Allies had worked out the fleet details from traffic analysis or had obtained a codebook and additive tables, as they were reluctant to believe that anyone could have broken their codes (least of all a Westerner).[30]
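The JN-40 description above – substitution into two-figure groups followed by a columnar transposition – can be illustrated with a generic fractionating transposition cipher. The following Python sketch uses an invented substitution table and key, showing the general construction only, not the actual JN-40 tables:

```python
# Invented substitution table: each letter becomes a two-figure group (a real
# table would cover 100 two-figure groups for a much larger syllabary).
SUBST = {c: f"{i:02d}" for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}

def encipher(plaintext: str, key: list) -> str:
    # Step 1 (fractionation): substitute each letter with its two-figure
    # group, so each letter's digits can be split up by the transposition.
    digits = "".join(SUBST[c] for c in plaintext if c in SUBST)
    # Step 2: write the digit stream into rows under the key...
    width = len(key)
    rows = [digits[i:i + width] for i in range(0, len(digits), width)]
    # ...and read the columns out in key order (columnar transposition).
    return "".join(
        "".join(row[col] for row in rows if col < len(row))
        for col in sorted(range(width), key=lambda c: key[c])
    )

print(encipher("CONVOYSAILS", [3, 1, 4, 0, 2]))
```

Because the transposition scatters the two digits of each letter into different parts of the cryptogram, the result resists simple frequency analysis, which is why the system was initially mistaken for an additive-enciphered code.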
https://en.wikipedia.org/wiki/JN-25
A social networking service (SNS), or social networking site, is a type of online social media platform which people use to build social networks or social relationships with other people who share similar personal or career content, interests, activities, backgrounds or real-life connections.[1][2] Social networking services vary in format and in the number of features. They can incorporate a range of new information and communication tools, operating on desktops and laptops, and on mobile devices such as tablet computers and smartphones. They may feature digital photo/video sharing and diary entries online (blogging).[2] Online community services are sometimes considered social-network services by developers and users, though in a broader sense, a social-network service usually provides an individual-centered service whereas online community services are group-centered. Generally defined as "websites that facilitate the building of a network of contacts in order to exchange various types of content online," social networking sites provide a space for interaction to continue beyond in-person interactions. These computer-mediated interactions link members of various networks and may help to create, sustain and develop new social and professional relationships.[3] Social networking sites allow users to share ideas, digital photos and videos, and posts, and to inform others about online or real-world activities and events with people within their social network. While in-person social networking – such as gathering in a village market to talk about events – has existed since the earliest development of towns,[4] the web enables people to connect with others who live in different locations across the globe (dependent on access to an Internet connection to do so). Depending on the platform, members may be able to contact any other member. In other cases, members can contact anyone they have a connection to, and subsequently anyone that contact has a connection to, and so on. Facebook had 2.13 billion monthly active users and an average of 1.4 billion daily active users in 2017.[5] LinkedIn, a career-oriented social-networking service, generally requires that a member personally know another member in real life before they contact them online. Some services require members to have a preexisting connection to contact other members. With COVID-19, Zoom, a videoconferencing platform, took an integral place in connecting people located around the world and facilitating many online environments such as school, university, work and government meetings. The main types of social networking services contain category places (such as age, occupation or religion), means to connect with friends (usually with self-description pages), and a recommendation system linked to trust. One can categorize social-network services into four types.[6] There have been attempts to standardize these services to avoid the need to duplicate entries of friends and interests (see the FOAF standard). A study reveals that India recorded the world's largest growth in social media users in 2013.[7] A 2013 survey found that 73% of U.S. adults use social-networking sites.[8] The potential for computer networking to facilitate newly improved forms of computer-mediated social interaction was suggested early on.[30] Efforts to support social networks via computer-mediated communication were made in many early online services, including Usenet,[31] ARPANET, LISTSERV, and bulletin board services (BBS).
Many prototypical features of social networking sites were also present in online services such as The Source, Delphi, America Online, Prodigy, CompuServe, and The WELL.[32] Early social networking on the World Wide Web began in the form of generalized online communities such as Theglobe.com (1995),[33] Geocities (1994) and Tripod.com (1995). Many of these early communities focused on bringing people together to interact with each other through chat rooms, and encouraged users to share personal information and ideas via personal web pages by providing easy-to-use publishing tools and free or inexpensive web space. Some communities – such as Classmates.com – took a different approach by simply having people link to each other via email addresses. PlanetAll started in 1996. In the late 1990s, user profiles became a central feature of social networking sites, allowing users to compile lists of "friends" and search for other users with similar interests. New social networking methods were developed by the end of the 1990s, and many sites began to develop more advanced features for users to find and manage friends.[34] Open Diary, a community for online diarists, invented both friends-only content and the reader comment, two features of social networks important to user interaction.[35] This newer generation of social networking sites began to flourish with the emergence of SixDegrees in 1997,[2] Open Diary in 1998,[36] Mixi in 1999,[37] Makeoutclub in 2000,[38][39] Cyworld in 2001,[40][2] Hub Culture in 2002, and Friendster and Nexopia in 2003.[41] Cyworld also became one of the first companies to profit from the sale of virtual goods.[42][43] MySpace and LinkedIn were launched in 2003, and Bebo was launched in 2005. Orkut became the first popular social networking service in Brazil (although most of its very first users were from the United States) and quickly grew in popularity in India (Madhavan, 2007).[2] There was a rapid increase in social networking sites' popularity; in 2005, MySpace had more pageviews than Google.[44] Many of these services were displaced by Facebook, which launched in 2004 and became the largest social networking site in the world in 2009.[45][46] The term social media was first used in 2004 and is often used to describe social networking services.[47][48] Web-based social networking services make it possible to connect people who share interests and activities across political, economic, and geographic borders.[49] Through e-mail and instant messaging, online communities are created where a gift economy and reciprocal altruism are encouraged through cooperation. Information is suited to a gift economy, as information is a nonrival good and can be gifted at practically no cost.[50][51] Scholars have noted that the term "social" cannot be accounted for by the technological features of the social network platforms alone.[52] Hence, the level of network sociability should be determined by the actual performance of its users. According to the communication theory of uses and gratifications, an increasing number of individuals are looking to the Internet and social media to fulfill cognitive, affective, personal integrative, social integrative, and tension-release needs. With Internet technology as a supplement to fulfill needs, it is in turn affecting everyday life, including relationships, school, church, entertainment, and family.[53] Companies are using social media as a way to learn about potential employees' personalities and behavior.
In numerous situations, a candidate who might otherwise have been hired has been rejected due to offensive or otherwise unseemly photos or comments posted to social networks or appearing on a newsfeed.

Facebook and other social networking tools are increasingly the objects of scholarly research. Scholars in many fields have begun to investigate the impact of social networking sites, investigating how such sites may play into issues of identity, politics, privacy,[54] social capital, youth culture, and education.[55] Research has also suggested that individuals add offline friends on Facebook to maintain contact, and that this often blurs the lines between work and home lives.[56] Users from around the world also utilise social networking sites as an alternative news source.[57] While social networking sites have arguably changed how we access the news,[58] users tend to have mixed opinions about the reliability of content accessed through these sites.[59]

According to a study in 2015, 63% of the users of Facebook or Twitter in the USA consider these networks to be their main source of news, with entertainment news being the most seen. In times of breaking news, Twitter users are more likely to stay invested in the story. In some cases, when the news story is more political, users may be more likely to voice their opinion on a linked Facebook story with a comment or like, while Twitter users will just follow the site's feed and retweet the article.[60] In online social networks, the veracity and reliability of news may be diminished due to the absence of traditional media gatekeepers.[61]

A 2015 study shows that 85% of people aged 18 to 34 use social networking sites for their purchase decision-making, while over 65% of people aged 55 and over rely on word of mouth.[62] Several websites are beginning to tap into the power of the social networking model for philanthropy. Such models provide a means for connecting otherwise fragmented industries and small organizations without the resources to reach a broader audience with interested users.[63] Social networks are providing a different way for individuals to communicate digitally. These communities of hypertexts allow for the sharing of information and ideas, an old concept placed in a digital environment. In 2011, HCL Technologies conducted research that showed that 50% of British employers had banned the use of social networking sites/services during office hours.[64][65]

Research has provided mixed results as to whether a person's involvement in social networking can affect their feelings of loneliness. Studies have indicated that how a person chooses to use social networking can change their feelings of loneliness in either a negative or positive way. Some companies with mobile workers have encouraged their workers to use social networking to feel connected.
Educators are using social networking to stay connected with their students, whereas individuals use it to stay connected with their close relationships.[66] Social networking sites can be used by consumers to create a social media firestorm, which is "a digital artifact created by large numbers of user comments of multiple purposes (condemnation and support) and tones (aggressive and cordial) that appear rapidly and recede shortly after".[1]

Each social networking user is able to create a community that centers around a personal identity they choose to create online.[67] In his book Digital Identities: Creating and Communicating the Online Self,[68] Rob Cover argues that social networking's foundation in Web 2.0 and high-speed networking shifts online representation to one which is both visual and relational to other people, complexifying the identity process for younger people and creating new forms of anxiety.[68] In 2016, news reports stated that excessive usage of SNS sites may be associated with an increase in the rates of depression, to almost triple the rate for non-SNS users. Experts worldwide[which?] have said that 2030 people who use SNS more have higher levels of depression than those who use SNS less.[69] At least one study went as far as to conclude that the negative effects of Facebook usage are equal to or greater than the positive effects of face-to-face interactions.[70]

According to a recent article from Computers in Human Behavior, Facebook has also been shown to lead to issues of social comparison. Users are able to select which photos and status updates to post, allowing them to portray their lives in flattering ways.[71] These updates can lead to other users feeling that their lives are inferior by comparison.[72] Users may feel especially inclined to compare themselves to other users with whom they share similar characteristics or lifestyles, leading to a fairer comparison.[71] Motives for these comparisons can be associated with the goal of improving oneself by looking at the profiles of people one feels are superior, especially when their lifestyle is similar and attainable.[71] One can also self-compare to make oneself feel superior to others by looking at the profiles of users one believes to be worse off.[71] However, a study by the Harvard Business Review shows that these goals often lead to negative consequences, as use of Facebook has been linked with lower levels of well-being; mental health has been shown to decrease due to the use of Facebook.[72] Computers in Human Behavior emphasizes that these feelings of poor mental health have been suggested to cause people to take time off from their Facebook accounts; this action is called "Facebook Fatigue" and has been common in recent years.[71]

Usage of social networking has contributed to a new form of abusive communication, and academic research has highlighted a number of social-technological explanations for this behaviour. These include the anonymity afforded by interpersonal communications,[73] factors such as boredom or attention-seeking,[74] and more polarised online debate.[75] This abuse is evident in the prevalence of online cyberbullying and online trolling. There has also been a marked increase in political violence and abuse through social media platforms.
For instance, one study by Ward and McLoughlin found that 2.57% of all messages sent to UK MPs on Twitter contained abuse.[75]

According to boyd and Ellison's 2007 article, "Why Youth (Heart) Social Network Sites: The Role of Networked Publics in Teenage Social Life", social networking sites share a variety of technical features that allow individuals to construct a public or semi-public profile, articulate a list of other users with whom they share a connection, and view their list of connections within the system. The most basic of these are visible profiles with a list of "friends" who are also users of the site.[55] In an article entitled "Social Network Sites: Definition, History, and Scholarship," boyd and Ellison adopt Sunden's (2003) description of profiles as unique pages where one can "type oneself into being".[2] A profile is generated from answers to questions, such as age, location, interests, etc. Some sites allow users to upload pictures, add multimedia content or modify the look and feel of the profile. Others, e.g., Facebook, allow users to enhance their profile by adding modules or "Applications".[2] Many sites allow users to post blog entries, search for others with similar interests, and compile and share lists of contacts. User profiles often have a section dedicated to comments from friends and other users. To protect user privacy, social networks typically have controls that allow users to choose who can view their profile, contact them, add them to their list of contacts, and so on.[citation needed]

There is a trend towards more interoperability between social networks, led by technologies such as OpenID and OpenSocial. In most mobile communities, mobile phone users can now create their own profiles, make friends, participate in chat rooms, create chat rooms, hold private conversations, share photos and videos, and share blogs by using their mobile phone. Some companies provide wireless services that allow their customers to build their own mobile community and brand it; one of the most popular wireless services for social networking in North America and Nepal is Facebook Mobile. Recently, Twitter has also introduced fact-check labels to combat misinformation; these were primarily prompted by the coronavirus pandemic but have also had an impact in debunking false claims made by Donald Trump during the 2020 election.[citation needed]

Social media platforms may allow users to change their user name (or "handle", distinct from the "display name"), which could change the URL to their profile. Users are advised to do so with caution, since, depending on implementation, it could break back links from others' posts and comments, as well as external back links.[76] The things users share are things that make them look good, things which they are happy to tie into their identity.

While the popularity of social networking consistently rises,[78] new uses for the technology are frequently being observed. Today's technologically savvy population requires convenient solutions to their daily needs.[79] At the forefront of emerging trends in social networking sites are the concepts of the "real-time web" and "location-based" services.
Real-time allows users to contribute content, which is then broadcast as it is being uploaded – the concept is analogous to live radio and television broadcasts. Twitter set the trend for "real-time" services, wherein users can broadcast to the world what they are doing, or what is on their minds, within a 140-character limit. Facebook followed suit with their "Live Feed", where users' activities are streamed as soon as they happen. While Twitter focuses on words, Clixtr, another real-time service, focuses on group photo sharing, wherein users can update their photo streams with photos while at an event. Facebook, however, remains the largest photo sharing site, with over 250 billion photos as of September 2013.[80] In April 2012, the image-based social media network Pinterest had become the third largest social network in the United States.[81]

Companies have begun to merge business technologies and solutions, such as cloud computing, with social networking concepts. Instead of connecting individuals based on social interest, companies are developing interactive communities that connect individuals based on shared business needs or experiences. Many provide specialized networking tools and applications that can be accessed via their websites, such as LinkedIn. Other companies, such as Monster.com, have been steadily developing a more "socialized" feel to their career center sites to harness some of the power of social networking sites. These more business-related sites have their own nomenclature for the most part, but the most common naming conventions are "Vocational Networking Sites" or "Vocational Media Networks", with the former more closely tied to individual networking relationships based on social networking principles.[citation needed]

Foursquare gained popularity as it allowed users to check into places that they are frequenting at that moment. Gowalla is another such service that functions in much the same way that Foursquare does, leveraging the GPS in phones to create a location-based user experience. Clixtr, though in the real-time space, is also a location-based social networking site, since events created by users are automatically geotagged, and users can view events occurring nearby through the Clixtr iPhone app. Recently, Yelp announced its entrance into the location-based social networking space through check-ins with their mobile app; whether or not this becomes detrimental to Foursquare or Gowalla is yet to be seen, as it is still considered a new space in the Internet technology industry.[82]

One popular use for this new technology is social networking between businesses. Companies have found that social networking sites such as Facebook and Twitter are great ways to build their brand image. According to Jody Nimetz, author of Marketing Jive,[83] there are five major uses for businesses and social media: to create brand awareness, as an online reputation management tool, for recruiting, to learn about new technologies and competitors, and as a lead generation tool to intercept potential prospects.[83] These companies are able to drive traffic to their own online sites while encouraging their consumers and clients to have discussions on how to improve or change products or services. As of September 2013, 71% of online adults use Facebook, 17% use Instagram, 21% use Pinterest, and 22% use LinkedIn.[84]

One other use that is being discussed is the use of social networks in the science communities. Julia Porter Liebeskind et al.
have published a study on how new biotechnology firms are using social networking sites to share exchanges of scientific knowledge.[85] They state in their study that by sharing information and knowledge with one another, firms are able to "increase both their learning and their flexibility in ways that would not have been possible within a self-contained hierarchical organization". Social networking is allowing scientific groups to expand their knowledge base and share ideas; without these new means of communicating, their theories might become "isolated and irrelevant". Researchers use social networks frequently to maintain and develop professional relationships.[86] They are interested in consolidating social ties and professional contact, keeping in touch with friends and colleagues, and seeing what their own contacts are doing. This can be related to their need to keep updated on the activities and events of their friends and colleagues in order to establish collaborations on common fields of interest and knowledge sharing.[87]

Social networks are also used to communicate scientists' research results[88] and as a public communication tool, and to connect people who share the same professional interests; their benefits can vary according to the discipline.[89] The most interesting aspects of social networks for professional purposes are their potential for disseminating information and their ability to multiply professional contacts exponentially. Social networks like Academia.edu, LinkedIn, Facebook, and ResearchGate make it possible to join professional groups and pages, share papers and results, publicize events, and discuss issues and create debates.[87] Academia.edu is extensively used by researchers, who there follow a combination of social networking and scholarly norms.[90] ResearchGate is also widely used by researchers, especially to disseminate and discuss their publications,[91] and it seems to attract an audience that is wider than just other scientists.[92] The usage of ResearchGate and Academia in different academic communities has increasingly been studied in recent years.[93]

The advent of social networking platforms may also be impacting the ways in which learners engage with technology in general. For a number of years, Prensky's (2001) dichotomy between Digital Natives and Digital Immigrants was considered a relatively accurate representation of the ease with which people of a certain age range – in particular those born before and after 1980 – use technology. Prensky's theory has been largely disproved, however, not least on account of the burgeoning popularity of social networking sites, and other metaphors, such as White and Le Cornu's "Visitors" and "Residents" (2011), have gained greater currency.

The use of online social networks by school libraries is also increasingly prevalent; they are being used to communicate with potential library users, as well as to extend the services provided by individual school libraries. Social networks and their educational uses are of interest to many researchers.
According to Livingstone and Brake (2010), "Social networking sites, like much else on the Internet, represent a moving target for researchers and policymakers."[94] A Pew Research Center project called Pew Internet conducted a USA-wide survey in 2009 and published in February 2010 that 47% of American adults use a social networking website.[95] The same survey found that 73% of online teenagers use SNS, an increase from 65% in 2008 and 55% in 2006.[95] Recent studies have shown that social network services provide opportunities within professional education, curriculum education, and learning. However, there are constraints in this area. Studies, especially in Africa, have found that the use of social networks among students can affect their academic life negatively, since their use constitutes a distraction and students tend to invest a good deal of time in such technologies.[citation needed]

Albayrak and Yildirim (2015) examined the educational use of social networking sites. They investigated students' involvement in Facebook as a Course Management System (CMS), and the findings of their study support that Facebook as a CMS has the potential to increase student involvement in discussions and out-of-class communication among instructors and students.[96]

Professional use of social networking services refers to the employment of a network site to connect with other professionals within a given field of interest. This type of social networking service is referred to as a "career-oriented social networking market" (CSNM).[9] LinkedIn is one example: a social networking website geared towards companies and industry professionals looking to make new business contacts or keep in touch with previous co-workers, affiliates, and clients. LinkedIn provides not only a professional social use but also encourages people to inject their personality into their profile – making it more personal than a resume.[97] Websites similar to LinkedIn (also geared towards companies and industry professionals looking for work opportunities) include AngelList, XING, Goodwall, The Dots,[98] Jobcase, Bark.com, ...[99] Various freelance marketplace websites (which focus on freelance work) also exist. There are also a number of other employment websites focused on international volunteering, notably VolunteerMatch, Idealist.org and All for Good.[100] Finally, national WWOOF networks allow for searching for homestays on organic farms.[101]

Now other social network sites are also being used in this manner. Twitter has become [a] mainstay for professional development as well as promotion,[102] and online SNSs support both the maintenance of existing social ties and the formation of new connections. Much of the early research on online communities assumed that individuals using these systems would be connecting with others outside their preexisting social group or location, liberating them to form communities around shared interests, as opposed to shared geography.[103] Other researchers have suggested that the professional use of network sites produces "social capital". For individuals, social capital allows a person to draw on resources from other members of the networks to which he or she belongs.[104] These resources can take the form of useful information, personal relationships, or the capacity to organize groups.
Networks within these services can also be established or built by joining special interest groups that others have made, or by creating one and asking others to join.[105]

According to Doering, Beach, and O'Brien, a future English curriculum needs to recognize a significant shift in how adolescents are communicating with each other.[106] Curriculum uses of social networking services can also include sharing curriculum-related resources. Educators tap into user-generated content to find and discuss curriculum-related content for students. Responding to the popularity of social networking services among many students, teachers are increasingly using social networks to supplement teaching and learning in traditional classroom environments. This way they can provide new opportunities for enriching existing curriculum through creative, authentic and flexible, non-linear learning experiences.[107] Some social networks, such as English, baby! and LiveMocha, are explicitly education-focused and couple instructional content with an educational peer environment.[108] The new Web 2.0 technologies built into most social networking services promote conferencing, interaction, creation, and research on a global scale, enabling educators to share, remix, and repurpose curriculum resources. In short, social networking services can become research networks as well as learning networks.[109]

Educators and advocates of new digital literacies are confident that social networking encourages the development of transferable, technical, and social skills of value in formal and informal learning.[94] In a formal learning environment, goals or objectives are determined by an outside department or agency. Tweeting, instant messaging, or blogging enhances student involvement. Students who would not normally participate in class are more apt to partake through social network services. Networking allows participants the opportunity for just-in-time learning and higher levels of engagement.[110] The use of SNSs allows educators to enhance the prescribed curriculum. When learning experiences are infused into a website students use every day for fun, students realize that learning can and should be a part of everyday life.[111] It does not have to be separate and unattached.[112][unreliable source?]

Informal learning consists of the learner setting the goals and objectives. It has been claimed that media no longer just influence human culture; they are human culture.[113] With such a high number of users between the ages of 13 and 18, a number of skills are developed. Participants hone technical skills in choosing to navigate through social networking services. This includes elementary items such as sending an instant message or updating a status. The development of new media skills is paramount in helping youth navigate the digital world with confidence. Social networking services foster learning through what Jenkins (2006) describes as a "participatory culture".[114] A participatory culture consists of a space that allows engagement, sharing, mentoring, and an opportunity for social interaction. Participants of social network services avail themselves of this opportunity.
Informal learning, in the form of participatory and social learning online, is an excellent tool for teachers to sneak in material and ideas that students will identify with; in a secondary manner, students then learn skills that would normally be taught in a formal setting within the more interesting and engaging environment of social learning.[115][unreliable source?] Sites like Twitter provide students with the opportunity to converse and collaborate with others in real time.[citation needed] Social networking services provide a virtual "space" for learners. James Gee (2004) suggests that affinity spaces instantiate participation, collaboration, distribution, dispersion of expertise, and relatedness.[116] Registered users share and search for knowledge, which contributes to informal learning.[citation needed]

In the past, social networking services were viewed as a distraction and offered no educational benefit. Blocking these social networks was a form of protection for students against wasting time, bullying, and invasions of privacy. In an educational setting, Facebook, for example, is seen by many instructors and educators as a frivolous, time-wasting distraction from schoolwork, and it is not uncommon for it to be banned in junior high or high school computer labs.[112] Cyberbullying has become an issue of concern with social networking services. The UK Children Go Online survey of 9- to 19-year-olds found that a third have received bullying comments online.[117] To avoid this problem, many school districts/boards have blocked access to social networking services such as Facebook, MySpace, and Twitter within the school environment. Social networking services often include a lot of personal information posted publicly, and many believe that sharing personal information is a window into privacy theft. Schools have taken action to protect students from this. It is believed that this outpouring of identifiable information, and the easy communication vehicle that social networking services provide, open the door to sexual predators, cyberbullying, and cyberstalking.[118] In contrast, however, 70% of social media using teens and 85% of adults believe that people are mostly kind to one another on social network sites.[95]

Recent research suggests that there has been a shift in blocking the use of social networking services. In many cases, the opposite is occurring, as the potential of online networking services is being realized. It has been suggested that if schools block them [social networking services], they are preventing students from learning the skills they need.[119] Banning social networking [...] is not only inappropriate but also borderline irresponsible when it comes to providing the best educational experiences for students.[120] Schools and school districts have the option of teaching safe media usage as well as incorporating digital media into the classroom experience, thus preparing students for the literacy they will encounter in the future.[citation needed]

A cyberpsychology research study conducted by Australian researchers demonstrated that a number of positive psychological outcomes are related to Facebook use. These researchers established that people can derive a sense of social connectedness and belongingness in the online environment. Importantly, this online social connectedness was associated with lower levels of depression and anxiety, and greater levels of subjective well-being.
These findings suggest that the nature of online social networking determines the outcomes of online social network use.[121][122]

Social networks are being used by activists as a means of low-cost grassroots organizing. Extensive use of an array of social networking sites enabled organizers of the 2009 National Equality March to mobilize an estimated 200,000 participants to march on Washington with a cost savings of up to 85% per participant over previous methods.[123] The August 2011 England riots were similarly considered to have escalated and been fuelled by this type of grassroots organization.[citation needed]

A rise in social network use is being driven by college students using the services to network with professionals for internship and job opportunities. Many studies have been done on the effectiveness of networking online in a college setting; a notable one is by Phipps Arabie and Yoram Wind, published in Advances in Social Network Analysis.[124] Many schools have implemented online alumni directories which serve as makeshift social networks that current and former students can turn to for career advice. However, these alumni directories tend to suffer from an oversupply of advice-seekers and an undersupply of advice providers. One new social networking service, Ask-a-peer, aims to solve this problem by enabling advice seekers to offer modest compensation to advisers for their time. LinkedIn is another valuable resource: it helps alumni, students and unemployed individuals look for work, connect with others professionally, and network with companies. In addition, employers have been found to use social network sites to screen job candidates.[125]

A social network hosting service is a web hosting service that specifically hosts the user creation of web-based social networking services, alongside related applications.[citation needed] A social trade network is a service that allows participants interested in specific trade sectors to share related content and personal opinions.[citation needed]

Few social networks charge money for membership. In part, this may be because social networking is a relatively new service, and the value of using it has not been firmly established in customers' minds. Companies such as Myspace and Facebook sell online advertising on their sites. Their business model is based upon a large membership count, and charging for membership would be counterproductive.[126] Some believe that the deeper information that the sites have on each user will allow much better targeted advertising than any other site can currently provide.[127] In recent times, Apple has been critical of the Google and Facebook model, in which users are treated as a product and a commodity, with their data sold for marketing revenue.[128] Social networks operate under an autonomous business model, in which a social network's members serve dual roles as both the suppliers and the consumers of content. This is in contrast to a traditional business model, where the suppliers and consumers are distinct agents. Revenue is typically gained in the autonomous business model via advertisements, but subscription-based revenue is possible when membership and content levels are sufficiently high.[129]

People use social networking sites for meeting new friends, finding old friends, or locating people who have the same problems or interests they have, called niche networking. More and more relationships and friendships are being formed online and then carried to an offline setting.
Psychologist and University of Hamburg professor Erich H. Witte says that relationships which start online are much more likely to succeed. In this regard, there are studies which predict tie strength among friends[130] on social networking websites. One online dating site claims that 2% of all marriages begin at its site, the equivalent of 236 marriages a day. Other sites claim one in five relationships begin online.[citation needed]

Users do not necessarily share with others the content which is of most interest to them, but rather that which projects a good impression of themselves.[77] While everyone agrees that social networking has had a significant impact on social interaction, there remains substantial disagreement as to whether the nature of this impact is completely positive. A number of scholars have done research on the negative effects of Internet communication as well. These researchers have contended that this form of communication is an impoverished version of conventional face-to-face social interaction, and therefore produces negative outcomes such as loneliness and depression for users who rely on social networking entirely. By engaging solely in online communication, interactions between communities, families, and other social groups are weakened.[131] Social networking services have led to many issues regarding privacy, bullying, social anxiety and potential for misuse.

Social networking services are increasingly being used in legal and criminal investigations. The information posted on sites such as MySpace and Facebook has been used by police (forensic profiling), probation, and university officials to prosecute users of said sites. In some situations, content posted on MySpace has been used in court.[132] Facebook is increasingly being used by school administrations and law enforcement agencies as a source of evidence against student users. The site, the number one online destination for college students, allows users to create profile pages with personal details. These pages can be viewed by other registered users from the same school, which often include resident assistants and campus police who have signed up for the service.[133] One UK police force has sifted pictures from Facebook and arrested some people who had been photographed in a public place holding a weapon such as a knife (having a weapon in a public place is illegal).[134]

Social networking is more recently being used by various government agencies. Social networking tools serve as a quick and easy way for the government to gather suggestions from the public and to keep the public updated on its activity; however, this comes with a significant risk of abuse, for example, to cultivate a culture of fear such as that outlined in Nineteen Eighty-Four or THX-1138. The Centers for Disease Control demonstrated the importance of vaccinations on the popular children's site Whyville, and the National Oceanic and Atmospheric Administration has a virtual island on Second Life where people can explore caves or the effects of global warming.[135] Likewise, NASA has taken advantage of a few social networking tools, including Twitter and Flickr. The NSA is taking advantage of them all.[136] NASA is using such tools to aid the Review of U.S.
Human Space Flight Plans Committee, whose goal is to ensure that the nation is on a vigorous and sustainable path to achieving its boldest aspirations in space.[137]

The use of social networking services in an enterprise context presents the potential of having a major impact on the world of business and work.[138] Social networks connect people at low cost; this can be beneficial for entrepreneurs and small businesses looking to expand their contact bases. These networks often act as a customer relationship management tool for companies selling products and services. Companies can also use social networks for advertising in the form of banners and text ads. Since businesses operate globally, social networks can make it easier to keep in touch with contacts around the world.

Applications for social networking sites have extended toward businesses, and brands are creating their own high-functioning sites, a sector known as brand networking. The idea is that a brand can build its consumer relationship by connecting consumers to the brand image on a platform that provides them relevant content, elements of participation, and a ranking or score system. Brand networking is a new way to capitalize on social trends as a marketing tool. The power of social networks is beginning to permeate the internal culture of businesses, where they are finding uses for collaboration, file sharing and knowledge transfer. The term "enterprise social software" is becoming increasingly popular for these types of applications.[citation needed]

Many social networks provide an online environment for people to communicate and exchange personal information for dating purposes. Intentions can vary, from looking for a one-time date to short-term or long-term relationships.[139] Most of these social networks, just like online dating services, require users to give out certain pieces of information. This usually includes a user's age, gender, location, interests, and perhaps a picture. Releasing very personal information is usually discouraged for safety reasons.[140] This allows other users to search or be searched by some sort of criteria, but at the same time people can maintain a degree of anonymity similar to most online dating services. Online dating sites are similar to social networks in the sense that users create profiles to meet and communicate with others, but their activities on such sites are for the sole purpose of finding a person of interest to date. Social networks do not necessarily have to be for dating; many users simply use them for keeping in touch with friends and colleagues.[141]

However, an important difference between social networks and online dating services is the fact that online dating sites usually require a fee, whereas social networks are free.[142] This difference is one of the reasons the online dating industry is seeing a massive decrease in revenue, due to many users opting to use social networking services instead. Many popular online dating services such as Match.com, Yahoo Personals, and eHarmony.com are seeing a decrease in users, where social networks like MySpace and Facebook are experiencing an increase in users.
The number of Internet users in the United States who visit online dating sites has fallen from a peak of 21% in 2003 to 10% in 2006.[143] Whether it is the cost of the services, the variety of users with different intentions, or any other reason, it is undeniable that social networking sites are quickly becoming the new way to find dates online.[citation needed]

The National School Boards Association reports that almost 60% of students who use social networking talk about education topics online, and more than 50% talk specifically about schoolwork. Yet the vast majority of school districts have stringent rules against nearly all forms of social networking during the school day – even though students and parents report few problem behaviors online. Social networks focused on supporting relationships between teachers and their students are now used for learning, educators' professional development, and content sharing. HASTAC is a collaborative social network space for new modes of learning and research in higher education, K-12, and lifelong learning; Ning supports teachers; TermWiki, TeachStreet and other sites are being built to foster relationships that include educational blogs, portfolios, formal and ad hoc communities, as well as communication such as chats, discussion threads, and synchronous forums. These sites also have content sharing and rating features.

Social networks are also emerging as online yearbooks, both public and private. One such service is MyYearbook, which allows anyone from the general public to register and connect. A new trend emerging is private-label yearbooks accessible only by students, parents, and teachers of a particular school, similar to Facebook's beginning within Harvard.[citation needed]

The use of virtual currency systems inside social networks creates new opportunities for global finance. Hub Culture operates a virtual currency, Ven, used for global transactions among members, product sales[144] and financial trades in commodities and carbon credits.[145][146] In May 2010, carbon pricing contracts were introduced to the weighted basket of currencies and commodities that determine the floating exchange value of Ven. The introduction of carbon to the calculation price of the currency made Ven the first and only currency that is linked to the environment.[147]

Social networks are beginning to be adopted by healthcare professionals as a means to manage institutional knowledge, disseminate peer-to-peer knowledge and highlight individual physicians and institutions. The advantage of using a dedicated medical social networking site is that all the members are screened against the state licensing board list of practitioners.[148] A new trend is emerging with social networks created to help their members with various physical and mental ailments.[149] For people suffering from life-altering diseases or chronic health conditions, companies such as HealthUnlocked and PatientsLikeMe offer their members the chance to connect with others dealing with similar issues and share experiences. For alcoholics and addicts, SoberCircle gives people in recovery the ability to communicate with one another and strengthen their recovery through the encouragement of others who can relate to their situation. DailyStrength is also a website that offers support groups for a wide array of topics and conditions, including the support topics offered by PatientsLikeMe and SoberCircle.
Some social networks aim to encourage healthy lifestyles in their users. SparkPeople and HealthUnlocked offer community and social networking tools for peer support during weight loss. Fitocracy and QUENTIQ are focused on exercise, enabling users to share their own workouts and comment on those of other users. Other aspects of social network usage include the analysis of data coming from existing social networks (such as Twitter) to discover large crowd-concentration events (based on statistical analysis of tweet locations) and disseminate the information, for example to mobility-challenged individuals so that they can avoid those areas and optimize their journeys in an urban environment.[150]

Social networking sites have recently shown their value in social and political movements.[151] In the Egyptian revolution, Facebook and Twitter both played an allegedly pivotal role in keeping people connected to the revolt. Egyptian activists have credited social networking sites with providing a platform for planning protest and sharing news from Tahrir Square in real time. By presenting a platform for thousands of people to instantaneously share videos, mainly of events featuring brutality, social networking can be a vital tool in revolutions.[152] On the flip side, social networks enable government authorities to easily identify, and repress, protestors and dissidents.[153] Another political application of social media is promoting the involvement of younger generations in politics and ongoing political issues.[154]

Perhaps the most significant political application of social media is Barack Obama's election campaign in 2008. It was the first of its kind, successfully incorporating social media into its winning campaign strategy and changing the way political campaigns are run in an ever-changing technological world. The campaign won by engaging everyday people and empowering volunteers, donors, and advocates through social networks, text messaging, email messaging and online videos.[155] Obama's social media campaign was vast, with his campaign boasting 5 million 'friends' across over 15 social networking sites, including over 3 million friends on Facebook alone.[156] Another significant success of the campaign was online video, with nearly 2,000 YouTube videos being put online and receiving over 80 million views.[156]

In 2007, when Obama first announced his candidacy, there was no such thing as an iPhone or Twitter. However, a year later, Obama was sending out voting reminders to thousands of people through Twitter, showing just how fast social media moves. Obama's campaign needed to stay current and to incorporate social media successfully, as social media acts best and is most effective in real time.[157]

Building up to the 2012 presidential election, there was interest in how strong the influence of social media would be following the 2008 campaigns, where Obama's winning campaign had been social media-heavy, whereas McCain's campaign had not really grasped social media. John F. Kennedy was the first president who really understood television; similarly, Obama is the first president to fully understand the power of social media.[158] Obama recognized that social media is about creating relationships and connections, and therefore used social media to the advantage of his presidential election campaigns, in which he dominated his opponents in terms of social media space.
Other political campaigns have followed Obama's successful social media campaigns, recognizing the power of social media and incorporating it as a key factor embedded within their political campaigns – for example, Donald Trump's 2016 presidential electoral campaign. Dan Pfeiffer, Obama's former digital and social media guru, commented that Donald Trump is "way better at the internet than anyone else in the GOP which is partly why he is winning".[159]

Research has shown that 66% of social media users actively engage in political activity online, and, like many other behaviors, online activities translate into offline ones.[158] Research from the 'MacArthur Research Network on Youth and Participatory Politics' states that young people who are politically active online are twice as likely to vote as those who are not politically active online.[158] Therefore, political applications of social networking sites are crucial, particularly for engaging with the youth, who are perhaps the least educated in politics and the most engaged with social networking sites. Social media is, therefore, a very effective way in which politicians can connect with a younger audience through their political campaigns.[160]

On June 28, 2020, The New York Times released an article sharing the findings of two researchers who studied the impact of TikTok, a video-sharing and social networking application, on political expression. The application, besides being a creative space to express oneself, has been used maliciously to spread disinformation ahead of US President Donald Trump's Tulsa rally in Oklahoma, and has amplified footage of police brutality at Black Lives Matter protests.[161]

Crowdsourcing social media platforms, such as Design Contest, Arcbazar, and Tongal, bring together groups of professional freelancers, such as designers, and help them communicate with business owners interested in their suggestions. This process is often used to subdivide tedious work or to fund-raise startup companies and charities, and can also occur offline.[162]

There are a number of projects that aim to develop free and open source software for use in social networking services. These technologies are often referred to as social engines or social networking engine software.

The following is a list of the largest social networking services, ordered by number of active users, as of January 2024, as published by Statista:[163]

* Platforms that have not published updated user figures in the past 12 months; figures may be out of date and less reliable.
** The figure uses daily active users, so the monthly active user number is likely higher.
https://en.wikipedia.org/wiki/Issues_involving_social_networking_services
For parsing algorithms in computer science, the inside–outside algorithm is a way of re-estimating production probabilities in a probabilistic context-free grammar. It was introduced by James K. Baker in 1979 as a generalization of the forward–backward algorithm for parameter estimation on hidden Markov models to stochastic context-free grammars. It is used to compute expectations, for example as part of the expectation–maximization algorithm (an unsupervised learning algorithm).

The inside probability $\beta_j(p,q)$ is the total probability of generating the words $w_p \cdots w_q$, given the root nonterminal $N^j$ and a grammar $G$:[1]

$$\beta_j(p,q) = P(N^j \Rightarrow^{*} w_p \cdots w_q \mid G)$$

The outside probability $\alpha_j(p,q)$ is the total probability of beginning with the start symbol $N^1$ and generating the nonterminal $N^j_{pq}$ and all the words outside $w_p \cdots w_q$, given a grammar $G$:[1]

$$\alpha_j(p,q) = P(N^1 \Rightarrow^{*} w_1 \cdots w_{p-1}\, N^j\, w_{q+1} \cdots w_n \mid G)$$

Base case:

$$\beta_j(p,p) = P(w_p \mid N^j, G)$$

General case: suppose there is a rule $N^j \rightarrow N^r N^s$ in the grammar. Then the probability of generating $w_p \cdots w_q$ starting with a subtree rooted at $N^j$ and using that rule is

$$\sum_{k=p}^{q-1} P(N^j \rightarrow N^r N^s)\, \beta_r(p,k)\, \beta_s(k+1,q).$$

The inside probability $\beta_j(p,q)$ is just the sum over all such possible rules:

$$\beta_j(p,q) = \sum_{N^r, N^s} \sum_{k=p}^{q-1} P(N^j \rightarrow N^r N^s)\, \beta_r(p,k)\, \beta_s(k+1,q)$$

Base case:

$$\alpha_j(1,n) = \begin{cases} 1 & \text{if } j = 1 \\ 0 & \text{otherwise} \end{cases}$$

Here the start symbol is $N^1$.

General case: suppose there is a rule $N^r \rightarrow N^j N^s$ in the grammar that generates $N^j$. Then the left contribution of that rule to the outside probability $\alpha_j(p,q)$ is

$$\sum_{k=q+1}^{n} P(N^r \rightarrow N^j N^s)\, \alpha_r(p,k)\, \beta_s(q+1,k).$$

Now suppose there is a rule $N^r \rightarrow N^s N^j$ in the grammar. Then the right contribution of that rule to the outside probability $\alpha_j(p,q)$ is

$$\sum_{k=1}^{p-1} P(N^r \rightarrow N^s N^j)\, \alpha_r(k,q)\, \beta_s(k,p-1).$$

The outside probability $\alpha_j(p,q)$ is the sum of the left and right contributions over all such rules:

$$\alpha_j(p,q) = \sum_{N^r, N^s} \sum_{k=q+1}^{n} P(N^r \rightarrow N^j N^s)\, \alpha_r(p,k)\, \beta_s(q+1,k) + \sum_{N^r, N^s} \sum_{k=1}^{p-1} P(N^r \rightarrow N^s N^j)\, \alpha_r(k,q)\, \beta_s(k,p-1)$$
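To make the inside recursion concrete, here is a minimal Python sketch (an illustration, not part of the source): it computes inside probabilities for a PCFG in Chomsky normal form. The `lexical` and `binary` rule dictionaries, the 0-indexed inclusive spans, and the toy grammar are assumptions of this sketch.

```python
from collections import defaultdict

def inside_probabilities(words, lexical, binary):
    """Compute inside probabilities beta[(j, p, q)] for a PCFG in
    Chomsky normal form, with 0-indexed, inclusive spans.

    words   : the sentence w_1 ... w_n as a list of terminals
    lexical : dict mapping (j, terminal) -> P(N^j -> terminal)
    binary  : dict mapping (j, r, s)    -> P(N^j -> N^r N^s)
    """
    n = len(words)
    beta = defaultdict(float)  # missing entries default to probability 0

    # Base case: beta_j(p, p) = P(w_p | N^j, G)
    for p, w in enumerate(words):
        for (j, terminal), prob in lexical.items():
            if terminal == w:
                beta[(j, p, p)] = prob

    # General case: sum over rules N^j -> N^r N^s and split points k
    for span in range(2, n + 1):
        for p in range(n - span + 1):
            q = p + span - 1
            for (j, r, s), prob in binary.items():
                for k in range(p, q):
                    beta[(j, p, q)] += prob * beta[(r, p, k)] * beta[(s, k + 1, q)]
    return beta

# Toy grammar: N^1 -> N^2 N^2 with probability 1, and N^2 -> 'a' with probability 1.
lexical = {(2, 'a'): 1.0}
binary = {(1, 2, 2): 1.0}
print(inside_probabilities(['a', 'a'], lexical, binary)[(1, 0, 1)])  # 1.0
```

The outside table can be filled in analogously by sweeping spans from largest to smallest using the two contributions given above; together, the two tables supply the expected rule counts used in the expectation step of EM.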
https://en.wikipedia.org/wiki/Inside%E2%80%93outside_algorithm
This is a list of computing and IT acronyms, initialisms and abbreviations.
https://en.wikipedia.org/wiki/List_of_computing_and_IT_abbreviations
Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute only one aspect or "concern" of the desired functionality. A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are detectable by other modules. The implementation contains the working code that corresponds to the elements declared in the interface.

Modular programming is closely related to structured programming and object-oriented programming, all having the same goal of facilitating construction of large software programs and systems by decomposition into smaller pieces, and all originating around the 1960s. While the historical usage of these terms has been inconsistent, "modular programming" now refers to the high-level decomposition of the code of an entire program into pieces, structured programming to the low-level use of structured control flow, and object-oriented programming to the organization of data into objects, a kind of data structure. In object-oriented programming, the use of interfaces as an architectural pattern to construct modules is known as interface-based programming.[citation needed]

Modular programming, in the form of subsystems (particularly for I/O) and software libraries, dates to early software systems, where it was used for code reuse. Modular programming per se, with a goal of modularity, developed in the late 1960s and 1970s, as a larger-scale analog of the concept of structured programming (1960s). The term "modular programming" dates at least to the National Symposium on Modular Programming, organized at the Information and Systems Institute in July 1968 by Larry Constantine; other key concepts were information hiding (1972) and separation of concerns (SoC, 1974).

Modules were not included in the original specification for ALGOL 68 (1968), but were included as extensions in early implementations, ALGOL 68-R (1970) and ALGOL 68C (1970), and later formalized.[1] One of the first languages designed from the start for modular programming was the short-lived Modula (1975), by Niklaus Wirth. Another early modular language was Mesa (1970s), by Xerox PARC, and Wirth drew on Mesa as well as the original Modula in its successor, Modula-2 (1978), which influenced later languages, particularly through its successor, Modula-3 (1980s). Modula's use of dot-qualified names, like M.a to refer to object a from module M, coincides with notation to access a field of a record (and similarly for attributes or methods of objects), and is now widespread, seen in C++, C#, Dart, Go, Java, OCaml, and Python, among others.

Modular programming became widespread from the 1980s: the original Pascal language (1970) did not include modules, but later versions, notably UCSD Pascal (1978) and Turbo Pascal (1983), included them in the form of "units", as did the Pascal-influenced Ada (1980). The Extended Pascal ISO 10206:1990 standard kept closer to Modula-2 in its modular support. Standard ML (1984)[2] has one of the most complete module systems, including functors (parameterized modules) to map between modules.

In the 1980s and 1990s, modular programming was overshadowed by and often conflated with object-oriented programming, particularly due to the popularity of C++ and Java. For example, the C family of languages had support for objects and classes in C++ (originally C with Classes, 1980) and Objective-C (1983), while only supporting modules 30 years or more later.
Java (1995) supports modules in the form of packages, though the primary unit of code organization is a class. However, Python (1991) prominently used both modules and objects from the start, using modules as the primary unit of code organization and "packages" as a larger-scale unit; and Perl 5 (1994) includes support for both modules and objects, with a vast array of modules being available from CPAN (1993). OCaml (1996) followed ML by supporting modules and functors.

Modular programming is now widespread, and found in virtually all major languages developed since the 1990s. The relative importance of modules varies between languages, and in class-based object-oriented languages there is still overlap and confusion with classes as a unit of organization and encapsulation, but both are well-established as distinct concepts.

The term assembly (as in .NET languages like C#, F# or Visual Basic .NET) or package (as in Dart, Go or Java) is sometimes used instead of module. In other implementations, these are distinct concepts; in Python a package is a collection of modules, while Java 9 introduced a new module concept (a collection of packages with enhanced access control). Furthermore, the term "package" has other uses in software (for example .NET NuGet packages). A component is a similar concept, but typically refers to a higher level; a component is a piece of a whole system, while a module is a piece of an individual program. The scale of the term "module" varies significantly between languages; in Python it is very small-scale and each file is a module, while in Java 9 it is large-scale: a module is a collection of packages, which are in turn collections of files. Other terms for modules include unit, used in Pascal dialects.

Languages that formally support the module concept include Ada, ALGOL, BlitzMax, C++, C#, Clojure, COBOL, Common Lisp, D, Dart, eC, Erlang, Elixir, Elm, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, Control Language (CL), IBM RPG, Java, Julia, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, several Pascal derivatives (Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal), Perl, PHP, PL/I, PureBasic, Python, R, Ruby,[3] Rust, JavaScript,[4] Visual Basic (.NET) and WebDNA.

In the Java programming language, the term "package" is used for the analog of modules in the JLS;[5] see Java package. "Modules", a kind of collection of packages, were introduced in Java 9 as part of Project Jigsaw; these were earlier called "superpackages" and were originally planned for Java 7.

Conspicuous examples of languages that lack support for modules are C and, historically, C++ and Pascal in their original forms. C and C++ do, however, allow separate compilation and declarative interfaces to be specified using header files. Modules were added to Objective-C in iOS 7 (2013) and to C++ with C++20,[6] while Pascal was superseded by Modula and Oberon, which included modules from the start, and by various derivatives that included modules. JavaScript has had native modules since ECMAScript 2015. C++ modules allow backwards compatibility with headers (via "header units"). Dialects of C allow for modules; for example, Clang supports modules for the C language,[7] though the syntax and semantics of Clang C modules differ from C++ modules significantly.

Modular programming can be performed even where the programming language lacks explicit syntactic features to support named modules, as, for example, in C.
This is done by using existing language features, together with, for example, coding conventions, programming idioms and the physical code structure. IBM i also uses modules when programming in the Integrated Language Environment (ILE).

With modular programming, concerns are separated such that modules perform logically discrete functions, interacting through well-defined interfaces. Often modules form a directed acyclic graph (DAG); in this case a cyclic dependency between modules is seen as indicating that these should be a single module. In the case where modules do form a DAG they can be arranged as a hierarchy, where the lowest-level modules are independent, depending on no other modules, and higher-level modules depend on lower-level ones. A particular program or library is a top-level module of its own hierarchy, but can in turn be seen as a lower-level module of a higher-level program, library, or system.

When creating a modular system, instead of creating a monolithic application (where the smallest component is the whole), several smaller modules are written separately so that, when they are composed together, they construct the executable application program. Typically, these are also compiled separately, via separate compilation, and then linked by a linker. A just-in-time compiler may perform some of this construction "on-the-fly" at run time. These independent functions are commonly classified as either program control functions or specific task functions. Program control functions are designed to work for one program. Specific task functions are designed to be applicable to various programs.

This makes modularly designed systems, if built correctly, far more reusable than a traditional monolithic design, since all (or many) of these modules may then be reused (without change) in other projects. This also facilitates the "breaking down" of projects into several smaller projects. Theoretically, a modularized software project will be more easily assembled by large teams, since no team members are creating the whole system, or even need to know about the system as a whole. They can focus just on the assigned smaller task.
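As a concrete illustration of the interface/implementation separation described above, here is a minimal Python sketch. The module name `stack` and its contents are hypothetical; Python's `__all__` list and the leading-underscore convention stand in for the interface/implementation split that other languages enforce syntactically.

```python
# stack.py -- a hypothetical module.
__all__ = ["push", "pop"]  # the module's public interface

_items = []  # implementation detail, hidden by convention

def push(value):
    """Add a value to the top of the stack."""
    _items.append(value)

def pop():
    """Remove and return the value on top of the stack."""
    return _items.pop()
```

A client then depends only on the interface (`import stack; stack.push(42); stack.pop()`), so the implementation behind `push` and `pop` can change freely as long as the interface stays stable.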
https://en.wikipedia.org/wiki/Modular_programming
In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations). In nonlinear regression, a statistical model of the form

$\mathbf{y} \sim f(\mathbf{x}, \boldsymbol{\beta})$

relates a vector of independent variables, $\mathbf{x}$, and its associated observed dependent variables, $\mathbf{y}$. The function $f$ is nonlinear in the components of the vector of parameters $\boldsymbol{\beta}$, but otherwise arbitrary. For example, the Michaelis–Menten model for enzyme kinetics has two parameters and one independent variable, related by $f$:[a]

$f(x, \boldsymbol{\beta}) = \dfrac{\beta_1 x}{\beta_2 + x}$

This function, which is a rectangular hyperbola, is nonlinear because it cannot be expressed as a linear combination of the two $\beta$s. Systematic error may be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an errors-in-variables model, also outside this scope. Other examples of nonlinear functions include exponential functions, logarithmic functions, trigonometric functions, power functions, the Gaussian function, and Lorentz distributions. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution; see the discussion of linearization by transformation below for more details.

In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression. Usually numerical optimization algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many local minima of the function to be optimized, and even the global minimum may produce a biased estimate. In practice, estimated values of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares. For details concerning nonlinear data modeling see least squares and non-linear least squares.

The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order Taylor series:

$f(x_i, \boldsymbol{\beta}) \approx f(x_i, 0) + \sum_j J_{ij} \beta_j$

where $J_{ij} = \partial f(x_i, \boldsymbol{\beta}) / \partial \beta_j$ are elements of the Jacobian matrix. It follows from this that the least squares estimators are given by

$\hat{\boldsymbol{\beta}} \approx (\mathbf{J}^{\mathsf{T}}\mathbf{J})^{-1}\mathbf{J}^{\mathsf{T}}\mathbf{y};$

compare generalized least squares with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using $\mathbf{J}$ in place of $\mathbf{X}$ in the formulas.
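As an illustration (added here, not part of the original article), the Michaelis–Menten model above can be fitted by iterative least squares with SciPy; the data are synthetic and the parameter values arbitrary:

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(x, beta1, beta2):
        # f(x, beta) = beta1 * x / (beta2 + x): linear in beta1, nonlinear in beta2
        return beta1 * x / (beta2 + x)

    rng = np.random.default_rng(0)
    x = np.linspace(0.5, 10.0, 40)                                   # synthetic design points
    y = michaelis_menten(x, 2.0, 1.5) + rng.normal(0.0, 0.05, x.size)

    # curve_fit iterates from an initial guess, reflecting the method of
    # successive approximations described above.
    beta_hat, cov = curve_fit(michaelis_menten, x, y, p0=[1.0, 1.0])
    print(beta_hat)               # estimates of (beta1, beta2)
    print(np.sqrt(np.diag(cov)))  # standard errors from the linearized (J^T J)^-1

A poor initial guess p0 may leave the optimizer in a local minimum, which is the hazard noted above.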
When the function $f(x_i, \boldsymbol{\beta})$ itself is not known analytically, but needs to be linearly approximated from $n+1$, or more, known values (where $n$ is the number of estimators), the best estimator is obtained directly from the linear template fit as[1]

$\hat{\boldsymbol{\beta}} = ((\mathbf{Y}\tilde{\mathbf{M}})^{\mathsf{T}}\boldsymbol{\Omega}^{-1}\mathbf{Y}\tilde{\mathbf{M}})^{-1}(\mathbf{Y}\tilde{\mathbf{M}})^{\mathsf{T}}\boldsymbol{\Omega}^{-1}(\mathbf{d} - \mathbf{Y}\bar{\mathbf{m}})$

(see also linear least squares). The linear approximation introduces bias into the statistics; therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.

The best-fit curve is often assumed to be that which minimizes the sum of squared residuals. This is the ordinary least squares (OLS) approach. However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; see weighted least squares. Each weight should ideally be equal to the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case,[2] but weights may be recomputed on each iteration, in an iteratively weighted least squares algorithm.

Some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation. For example, consider the nonlinear regression problem

$y = a e^{bx} U$

with parameters $a$ and $b$ and with multiplicative error term $U$. If we take the logarithm of both sides, this becomes

$\ln(y) = \ln(a) + bx + u,$

where $u = \ln(U)$, suggesting estimation of the unknown parameters by a linear regression of $\ln(y)$ on $x$, a computation that does not require iterative optimization (a worked sketch of this transformation appears at the end of this section). However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations. For Michaelis–Menten kinetics, the linear Lineweaver–Burk plot

$\dfrac{1}{v} = \dfrac{1}{V_{\max}} + \dfrac{K_m}{V_{\max}[S]}$

of $1/v$ against $1/[S]$ has been much used. However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, $[S]$, its use is strongly discouraged. For error distributions that belong to the exponential family, a link function may be used to transform the parameters under the generalized linear model framework.

The independent or explanatory variable (say X) can be split up into classes or segments and linear regression can be performed per segment. Segmented regression with confidence analysis may yield the result that the dependent or response variable (say Y) behaves differently in the various segments.[3] In one such example, the soil salinity (X) initially exerts no influence on the crop yield (Y) of mustard until a critical or threshold value (breakpoint) is reached, after which the yield is affected negatively.[4]
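The log-transformation discussed above can be sketched in a few lines of Python (an illustration added here, not from the article); with multiplicative error, ordinary least squares on the transformed data is appropriate:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 4.0, 50)
    # y = a * exp(b*x) * U with multiplicative log-normal error U (here a=2.0, b=0.8)
    y = 2.0 * np.exp(0.8 * x) * np.exp(rng.normal(0.0, 0.1, x.size))

    # ln(y) = ln(a) + b*x + u is linear in (ln a, b), so a straight-line fit suffices.
    b_hat, ln_a_hat = np.polyfit(x, np.log(y), 1)
    print(np.exp(ln_a_hat), b_hat)   # estimates of a and b

Had the error been additive rather than multiplicative, the transformed model would violate the constant-variance assumption, which is the caution raised above.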
https://en.wikipedia.org/wiki/Nonlinear_regression
Time-based one-time password (TOTP) is a computer algorithm that generates a one-time password (OTP) using the current time as a source of uniqueness. As an extension of the HMAC-based one-time password (HOTP) algorithm, it has been adopted as Internet Engineering Task Force (IETF) standard RFC 6238.[1] TOTP is the cornerstone of the Initiative for Open Authentication (OATH) and is used in a number of two-factor authentication (2FA) systems.[1]

Through the collaboration of several OATH members, a TOTP draft was developed in order to create an industry-backed standard. It complements the event-based one-time standard HOTP, and it offers end user organizations and enterprises more choice in selecting technologies that best fit their application requirements and security guidelines. In 2008, OATH submitted a draft version of the specification to the IETF. This version incorporates all the feedback and commentary that the authors received from the technical community based on the prior versions submitted to the IETF.[2] In May 2011, TOTP officially became RFC 6238.[1]

To establish TOTP authentication, the authenticatee and authenticator must pre-establish both the HOTP parameters and the TOTP parameters: $T_0$, the epoch from which to start counting time steps, and $T_X$, the duration of one time step. Both the authenticator and the authenticatee compute the TOTP value, then the authenticator checks whether the TOTP value supplied by the authenticatee matches the locally generated TOTP value. Some authenticators allow values that should have been generated before or after the current time in order to account for slight clock skews, network latency and user delays.

TOTP uses the HOTP algorithm, replacing the counter with a non-decreasing value based on the current time: TOTP value(K) = HOTP value(K, $C_T$), calculating the counter value as

$C_T = \left\lfloor \dfrac{T - T_0}{T_X} \right\rfloor,$

where $T$ is the current time (typically Unix time), $T_0$ is the epoch (typically 0), and $T_X$ is the length of one time step (typically 30 seconds).

Unlike passwords, TOTP codes are only valid for a limited time. However, users must enter TOTP codes into an authentication page, which creates the potential for phishing attacks. Due to the short window in which TOTP codes are valid, attackers must proxy the credentials in real time.[3] TOTP credentials are also based on a shared secret known to both the client and the server, creating multiple locations from which a secret can be stolen. An attacker with access to this shared secret could generate new, valid TOTP codes at will. This can be a particular problem if the attacker breaches a large authentication database.[4]
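As an illustrative sketch (added here; the RFC itself is the normative reference), the algorithm fits in a few lines of Python using only the standard library. The final assertion checks the RFC 6238 appendix test vector for the SHA-1 variant:

    import hashlib
    import hmac
    import struct
    import time

    def hotp(key: bytes, counter: int, digits: int = 6) -> str:
        # RFC 4226: HMAC-SHA-1 over the 8-byte big-endian counter,
        # then "dynamic truncation" to a 31-bit integer.
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(key: bytes, t=None, t0: int = 0, tx: int = 30, digits: int = 6) -> str:
        # RFC 6238: the HOTP counter is C_T = floor((T - T0) / TX).
        if t is None:
            t = time.time()
        return hotp(key, int((t - t0) // tx), digits)

    # RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 -> "94287082"
    assert totp(b"12345678901234567890", t=59, digits=8) == "94287082"

A verifier tolerating clock skew, as described above, would typically also accept the codes for counters $C_T - 1$ and $C_T + 1$.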
https://en.wikipedia.org/wiki/Time-based_one-time_password_algorithm
Szymański's Mutual Exclusion Algorithm is a mutual exclusion algorithm devised by computer scientist Dr. Bolesław Szymański, which has many favorable properties including linear wait,[1][2] and whose extension[3] solved the open problem posed by Leslie Lamport[4] of whether there is an algorithm with a constant number of communication bits per process that satisfies every reasonable fairness and failure-tolerance requirement that Lamport conceived of (Lamport's solution used n! communication variables vs. Szymański's 5).

The algorithm is modeled on a waiting room with an entry and exit doorway.[1] Initially the entry door is open and the exit door is closed. All processes which request entry into the critical section at roughly the same time enter the waiting room; the last of them closes the entry door and opens the exit door. The processes then enter the critical section one by one (or in larger groups if the critical section permits this). The last process to leave the critical section closes the exit door and reopens the entry door, so the next batch of processes may enter.

The implementation consists of each process having a flag variable which is written by that process and read by all others (this single-writer property is desirable for efficient cache usage). The flag variable assumes one of five values/states, representing roughly: not interested in entering (0), intending to enter the waiting room (1), waiting in the doorway for other intending processes (2), standing in the open doorway (3), and inside the waiting room with the entry door closed (4). The status of the entry door is computed by reading the flags of all N processes. Pseudo-code for the entry and exit protocols is given in the sketch following this section. Note that the order of the "all" and "any" tests must be uniform. Despite the intuitive explanation, the algorithm was not easy to prove correct; however, due to its favorable properties a proof of correctness was desirable, and multiple proofs have been presented.[2][5]
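The following is a Python rendering of the waiting-room protocol as it is commonly presented (a best-effort sketch, not the author's original notation); the busy-wait loops and the demo around them are illustrative, and the sketch relies on CPython's interpreter lock to make the individual flag reads and writes effectively atomic:

    import threading

    N = 4                       # number of competing threads (demo parameter)
    flag = [0] * N              # flag[i] is written only by thread i, read by all
    counter = 0                 # shared resource protected by the lock

    def all_flags(ids, states):
        return all(flag[j] in states for j in ids)

    def any_flag(ids, states):
        return any(flag[j] in states for j in ids)

    def lock(i):
        flag[i] = 1                                    # announce intention to enter
        while not all_flags(range(N), {0, 1, 2}):      # wait until the entry door is open
            pass
        flag[i] = 3                                    # stand in the doorway
        if any_flag(range(N), {1}):                    # others still want to enter:
            flag[i] = 2                                #   wait for one of them to come
            while not any_flag(range(N), {4}):         #   through and close the door
                pass
        flag[i] = 4                                    # the entry door is closed
        while not all_flags(range(i), {0, 1}):         # lower-numbered threads go first
            pass

    def unlock(i):
        # Wait until every later thread still in the waiting room has
        # observed that the entry door is closed.
        while not all_flags(range(i + 1, N), {0, 1, 4}):
            pass
        flag[i] = 0                                    # leave; reopens the entry door

    def worker(i):
        global counter
        for _ in range(1000):
            lock(i)
            counter += 1        # critical section
            unlock(i)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)              # expect N * 1000 if mutual exclusion held

Each helper scans the whole flag array, matching the requirement that the door status be computed by reading the flags of all N processes; note the uniform ordering of the "all" and "any" tests.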
https://en.wikipedia.org/wiki/Szyma%C5%84ski%27s_algorithm
A modulator-demodulator, commonly referred to as a modem, is a computer hardware device that converts data from a digital format into a format suitable for an analog transmission medium such as telephone or radio. A modem transmits data by modulating one or more carrier wave signals to encode digital information, while the receiver demodulates the signal to recreate the original digital information. The goal is to produce a signal that can be transmitted easily and decoded reliably. Modems can be used with almost any means of transmitting analog signals, from LEDs to radio.

Early modems were devices that used audible sounds suitable for transmission over traditional telephone systems and leased lines. These generally operated at 110 or 300 bits per second (bit/s), and the connection between devices was normally manual, using an attached telephone handset. By the 1970s, higher speeds of 1,200 and 2,400 bit/s for asynchronous dial connections, 4,800 bit/s for synchronous leased-line connections and 35 kbit/s for synchronous conditioned leased lines were available. By the 1980s, less expensive 1,200 and 2,400 bit/s dial-up modems were being released, and modems working on radio and other systems were available. As device sophistication grew rapidly in the late 1990s, telephone-based modems quickly exhausted the available bandwidth, reaching 56 kbit/s.

The rise of public use of the internet during the late 1990s led to demands for much higher performance, leading to the move away from audio-based systems to entirely new encodings on cable television lines and short-range signals in subcarriers on telephone lines. The move to cellular telephones, especially in the late 1990s, and the emergence of smartphones in the 2000s led to the development of ever-faster radio-based systems. Today, modems are ubiquitous and largely invisible, included in almost every mobile computing device in one form or another, and generally capable of speeds on the order of tens or hundreds of megabits per second.

Modems are frequently classified by the maximum amount of data they can send in a given unit of time, usually expressed in bits per second (symbol bit/s, sometimes abbreviated "bps") or, rarely, in bytes per second (symbol B/s). Modern broadband modem speeds are typically expressed in megabits per second (Mbit/s). Historically, modems were often classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU-T V.21 standard used audio frequency-shift keying with two possible frequencies, corresponding to two distinct symbols (or one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU-T V.22 standard, which could transmit and receive four distinct symbols (two bits per symbol), transmitted 1,200 bits per second by sending 600 symbols per second (600 baud) using phase-shift keying. Many modems are variable-rate, permitting them to be used over a medium with less than ideal characteristics, such as a telephone line that is of poor quality or is too long. This capability is often adaptive, so that a modem can discover the maximum practical transmission rate during the connect phase, or during operation.

Modems grew out of the need to connect teleprinters over ordinary phone lines instead of the more expensive leased lines which had previously been used for current loop–based teleprinters and automated telegraphs.
The earliest devices which satisfy the definition of a modem may have been the multiplexers used by news wire services in the 1920s.[1] In 1941, the Allies developed a voice encryption system called SIGSALY which used a vocoder to digitize speech, then encrypted the speech with a one-time pad and encoded the digital data as tones using frequency-shift keying. This was also a digital modulation technique, making SIGSALY an early modem.[2]

Commercial modems largely did not become available until the late 1950s, when the rapid development of computer technology created demand for a method of connecting computers together over long distances, resulting in the Bell Company and then other businesses producing an increasing number of computer modems for use over both switched and leased telephone lines. Later developments would produce modems that operated over cable television lines, power lines, and various radio technologies, as well as modems that achieved much higher speeds over telephone lines.

A dial-up modem transmits computer data over an ordinary switched telephone line that has not been designed for data use. It was once a widely known technology, mass-marketed globally for dial-up internet access. In the 1990s, tens of millions of people in the United States alone used dial-up modems for internet access.[3] Dial-up service has since been largely superseded by broadband internet,[4] such as DSL.

Mass production of telephone line modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the United States and Canada. Shortly afterwards, in 1959, the technology in the SAGE modems was made available commercially as the Bell 101, which provided 110 bit/s speeds. Bell called this and several other early modems "datasets". Some early modems were based on touch-tone frequencies, such as Bell 400-style touch-tone modems.[5]

The Bell 103A standard was introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz.[6] The 103 modem would eventually become a de facto standard once third-party (non-AT&T) modems reached the market, and throughout the 1970s, independently made modems compatible with the Bell 103 de facto standard were commonplace.[7] Example models included the Novation CAT and the Anderson-Jacobson. A lower-cost option was the Pennywhistle modem, designed to be built using readily available parts.[8] Teletype machines were granted access to remote networks such as the Teletypewriter Exchange using the Bell 103 modem.[9] AT&T also produced reduced-cost units, the originate-only 113D and the answer-only 113B/C modems.

The 201A Data-Phone was a synchronous modem using two-bit-per-symbol phase-shift keying (PSK) encoding, achieving 2,000 bit/s half-duplex over normal phone lines.[10] In this system the two tones for any one side of the connection are sent at similar frequencies as in the 300 bit/s systems, but slightly out of phase. In early 1973, Vadic introduced the VA3400, which performed full-duplex at 1,200 bit/s over a normal phone line.[11] In November 1976, AT&T introduced the 212A modem, similar in design but using the lower frequency set for transmission. It was not compatible with the VA3400,[12] but it would operate with the 103A modem at 300 bit/s.
In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200-bit/s mode, AT&T's 212A mode, and 103A operation.[13]

A significant advance in modems was the Hayes Smartmodem, introduced in 1981. The Smartmodem was an otherwise standard 103A 300 bit/s direct-connect modem, but it introduced a command language which allowed the computer to make control requests, such as commands to dial or answer calls, over the same RS-232 interface used for the data connection.[14] The command set used by this device became a de facto standard, the Hayes command set, which was integrated into devices from many other manufacturers. Automatic dialing was not a new capability (it had been available via separate Automatic Calling Units and via modems using the X.21 interface[15]), but the Smartmodem made it available in a single device that could be used with even the most minimal implementations of the ubiquitous RS-232 interface, making this capability accessible from virtually any system or language.[16] The introduction of the Smartmodem made communications much simpler and more easily accessed. This provided a growing market for other vendors, who licensed the Hayes patents and competed on price or by adding features.[17] This eventually led to legal action over use of the patented Hayes command language.[18]

Dial modems generally remained at 300 and 1,200 bit/s (eventually becoming standards such as V.21 and V.22) into the mid-1980s. Commodore's 1982 VICModem for the VIC-20 was the first modem to be sold for under $100, and the first modem to sell a million units.[19] In 1984, V.22bis was created, a 2,400-bit/s system similar in concept to the 1,200-bit/s Bell 212. This bit rate increase was achieved by defining four or sixteen distinct symbols, which allowed the encoding of two or four bits per symbol instead of only one. By the late 1980s, many modems could support improved standards like this, and 2,400-bit/s operation was becoming common.

Increasing modem speed greatly improved the responsiveness of online systems and made file transfer practical. This led to rapid growth of online services with large file libraries, which in turn gave more reason to own a modem. The rapid uptake of modems led to a similar rapid increase in BBS use. The introduction of microcomputer systems with internal expansion slots made small internal modems practical. This led to a series of popular modems for the S-100 bus and Apple II computers that could directly dial out, answer incoming calls, and hang up entirely from software, the basic requirements of a bulletin board system (BBS). The seminal CBBS, for instance, was created on an S-100 machine with a Hayes internal modem, and a number of similar systems followed.

Echo cancellation became a feature of modems in this period, allowing both modems to ignore their own reflected signals. This way both modems can simultaneously transmit and receive over the full spectrum of the phone line, improving the available bandwidth.[20] Additional improvements were introduced by quadrature amplitude modulation (QAM) encoding, which increased the number of bits per symbol to four through a combination of phase shift and amplitude. Transmitting at 1,200 baud produced the 4,800 bit/s V.27ter standard, and at 2,400 baud the 9,600 bit/s V.32. The carrier frequency was 1,650 Hz in both systems. The introduction of these higher-speed systems also led to the development of the digital fax machine during the 1980s.
While early fax technology also used modulated signals on a phone line, digital fax used the now-standard digital encoding used by computer modems. This eventually allowed computers to send and receive fax images.

In the early 1990s, V.32 modems operating at 9,600 bit/s were introduced, but were expensive and were only starting to enter the market when V.32bis was standardized, which operated at 14,400 bit/s. Rockwell International's chip division developed a new driver chip set incorporating the V.32bis standard and aggressively priced it. Supra, Inc. arranged a short-term exclusivity arrangement with Rockwell and developed the SupraFAXModem 14400 based on it. Introduced in January 1992 at $399 (or less), it was half the price of the slower V.32 modems already on the market. This led to a price war, and by the end of the year V.32 was dead, never having been really established, and V.32bis modems were widely available for $250.

V.32bis was so successful that the older high-speed standards offered little advantage. USRobotics (USR) fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method they referred to as V.32ter, but neither non-standard modem sold well. Consumer interest in these proprietary improvements waned during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to release hardware early, introducing modems they referred to as V.Fast. In order to guarantee compatibility with V.34 modems once the standard was ratified (1994), manufacturers used more flexible components, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips. This would allow later firmware updates to conform with the standard once ratified.

The ITU standard V.34 represents the culmination of these joint efforts. It employed the most powerful coding techniques available at the time, including channel encoding and shape encoding. From the mere four bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increasing baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s modems. This rate is near the theoretical Shannon limit of a phone line.[21]

While 56 kbit/s speeds had been available for leased-line modems for some time, they did not become available for dial-up modems until the late 1990s. In the late 1990s, technologies to achieve speeds above 33.6 kbit/s began to be introduced. Several approaches were used, but all of them began as solutions to a single fundamental problem with phone lines. By the time technology companies began to investigate speeds above 33.6 kbit/s, telephone companies had switched almost entirely to all-digital networks. As soon as a phone line reached a local central office, a line card converted the analog signal from the subscriber to a digital one, and conversely. While digitally encoded telephone lines notionally provide the same bandwidth as the analog systems they replaced, the digitization itself placed constraints on the types of waveforms that could be reliably encoded. The first problem was that the process of analog-to-digital conversion is intrinsically lossy; the second, and more important, was that the digital signals used by the telcos were not "linear": they did not encode all frequencies the same way, instead utilizing a nonlinear encoding (μ-law and a-law) meant to favor the nonlinear response of the human ear to voice signals. This made it very difficult to find a 56 kbit/s encoding that could survive the digitizing process.
Modem manufacturers discovered that, while the analog-to-digital conversion could not preserve higher speeds, digital-to-analog conversions could. Because it was possible for an ISP to obtain a direct digital connection to a telco, a digital modem (one that connects directly to a digital telephone network interface, such as T1 or PRI) could send a signal that utilized every bit of bandwidth available in the system. While that signal still had to be converted back to analog at the subscriber end, that conversion would not distort the signal in the way that the opposite direction did.

The first 56k (56 kbit/s) dial-up option was a proprietary design from USRobotics, which they called "X2" because 56k was twice the speed (×2) of 28k modems. At that time, USRobotics held a 40% share of the retail modem market, while Rockwell International held an 80% share of the modem chipset market. Concerned with being shut out, Rockwell began work on a rival 56k technology. They joined with Lucent and Motorola to develop what they called "K56Flex" or just "Flex". Both technologies reached the market around February 1997; although problems with K56Flex modems were noted in product reviews through July, within six months the two technologies worked equally well, with variations dependent largely on local connection characteristics.[22] The retail price of these early 56k modems was about US$200, compared to $100 for standard 33k modems. Compatible equipment was also required at the Internet service providers' (ISPs) end, with costs varying depending on whether their current equipment could be upgraded. About half of all ISPs offered 56k support by October 1997. Consumer sales were relatively low, which USRobotics and Rockwell attributed to conflicting standards.[23]

In February 1998, the International Telecommunication Union (ITU) announced the draft of a new 56 kbit/s standard, V.90, with strong industry support. Incompatible with either existing standard, it was an amalgam of both, but was designed to allow both types of modem to be converted to it by a firmware upgrade. The V.90 standard was approved in September 1998 and widely adopted by ISPs and consumers.[23][24]

The ITU-T V.92 standard was approved by the ITU in November 2000[25] and utilized digital PCM technology to increase the upload speed to a maximum of 48 kbit/s. The high upload speed was a tradeoff: use of the 48 kbit/s upstream rate would reduce the downstream to as low as 40 kbit/s due to echo effects on the line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a plain 33.6 kbit/s analog connection in order to maintain a high digital downstream of 50 kbit/s or higher.[26] V.92 also added two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods of time while they answer a call. The second is the ability to quickly connect to one's ISP, achieved by remembering the analog and digital characteristics of the telephone line and using this saved information when reconnecting.

The speeds quoted for these standards are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines).[27] For a complete list see the companion article, list of device bandwidths. A baud is one symbol per second; each symbol may encode one or more data bits.
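The speeds quoted throughout this history follow from the relation described earlier: gross bit rate equals symbol rate (baud) times bits per symbol, with bits per symbol determined by the size of the symbol set. A small illustrative calculation (added here; the rates are taken from the standards discussed above, ignoring framing and error-coding overhead):

    from math import log2

    def bit_rate(baud: int, constellation_points: int) -> float:
        # gross bit rate = symbol rate (baud) x bits per symbol,
        # where bits per symbol = log2(number of distinct symbols)
        return baud * log2(constellation_points)

    print(bit_rate(300, 2))     # V.21-style FSK:   300 baud x 1 bit  =   300 bit/s
    print(bit_rate(600, 4))     # V.22-style PSK:   600 baud x 2 bits = 1,200 bit/s
    print(bit_rate(2400, 16))   # V.32-style QAM:  2400 baud x 4 bits = 9,600 bit/s

On this scale, V.34's 33.6 kbit/s at 3,429 baud corresponds to the "functional equivalent" of roughly 10 bits per symbol mentioned above.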
Many dial-up modems implement standards for data compression to achieve higher effective throughput for the same bitrate. V.44[34] is an example used in conjunction with V.92 to achieve speeds greater than 56k over ordinary phone lines.[35] As telephone-based 56k modems began losing popularity, some Internet service providers such as Netzero/Juno, Netscape, and others started using pre-compression to increase apparent throughput. This server-side compression can operate much more efficiently than the on-the-fly compression performed within modems, because the compression techniques are content-specific (JPEG, text, EXE, etc.). The drawback is a loss in quality, as they use lossy compression, which causes images to become pixelated and smeared. ISPs employing this approach often advertised it as "accelerated dial-up".[36] These accelerated downloads are integrated into the Opera[37] and Amazon Silk[38] web browsers, using their own server-side text and image compression, requiring all data to pass through their own servers before reaching the user.[38]

Dial-up modems can attach in two different ways: with an acoustic coupler, or with a direct electrical connection. The case Hush-A-Phone Corp. v. United States, which legalized acoustic couplers, applied only to mechanical connections to a telephone set, not electrical connections to the telephone line. The Carterfone decision of 1968, however, permitted customers to attach devices directly to a telephone line as long as they followed stringent Bell-defined standards for non-interference with the phone network.[39] This opened the door to independent (non-AT&T) manufacture of direct-connect modems, which plugged directly into the phone line rather than attaching via an acoustic coupler. While Carterfone required AT&T to permit connection of devices, AT&T successfully argued that they should be allowed to require the use of a special device to protect their network, placed in between the third-party modem and the line, called a Data Access Arrangement or DAA. The use of DAAs was mandatory from 1969 to 1975, when the new FCC Part 68 rules allowed the use of devices without a Bell-provided DAA, subject to equivalent circuitry being included in the third-party device.[40]

Virtually all modems produced after the 1980s are direct-connect. While Bell (AT&T) provided modems that attached via direct wire connection to the phone network as early as 1958, their regulations at the time did not permit the direct electrical connection of any non-Bell device to a telephone line. However, the Hush-a-Phone ruling allowed customers to attach any device to a telephone set as long as it did not interfere with its functionality. This allowed third-party (non-Bell) manufacturers to sell modems utilizing an acoustic coupler.[39]

With an acoustic coupler, an ordinary telephone handset was placed in a cradle containing a speaker and microphone positioned to match up with those on the handset. The tones used by the modem were transmitted and received into the handset, which then relayed them to the phone line.[41] Because the modem was not electrically connected, it was incapable of picking up, hanging up or dialing, all of which required direct control of the line. Touch-tone dialing would have been possible, but touch-tone was not universally available at this time. Consequently, the dialing process was executed by the user lifting the handset, dialing, then placing the handset on the coupler. To accelerate this process, a user could purchase a dialer or Automatic Calling Unit.
Early modems could not place or receive calls on their own, but required human intervention for these steps. As early as 1964, Bell provided automatic calling units that connected separately to a second serial port on a host machine and could be commanded to open the line, dial a number, and even ensure the far end had successfully connected before transferring control to the modem.[42] Later on, third-party models would become available, sometimes known simply as dialers, offering features such as the ability to automatically sign in to time-sharing systems.[43] Eventually this capability would be built into modems and would no longer require a separate device.

Prior to the 1990s, modems contained all the electronics and intelligence to convert data in discrete form to an analog (modulated) signal and back again, and to handle the dialing process, as a mix of discrete logic and special-purpose chips. This type of modem is sometimes referred to as controller-based.[44] In 1993, Digicom introduced the Connection 96 Plus, a modem which replaced the discrete and custom components with a general-purpose digital signal processor, which could be reprogrammed to upgrade to newer standards.[45] Subsequently, USRobotics released the Sportster Winmodem, a similarly upgradable DSP-based design.[46] As this design trend spread, both terms (soft modem and Winmodem) acquired a negative connotation in non-Windows-based computing circles, because the drivers were either unavailable for non-Windows platforms or were only available as unmaintainable closed-source binaries, a particular problem for Linux users.[47]

Later in the 1990s, software-based modems became available. These are essentially sound cards; in fact, a common design uses the AC'97 audio codec, which provides multichannel audio to a PC and includes three audio channels for modem signals. The audio sent and received on the line by a modem of this type is generated and processed entirely in software, often in a device driver. There is little functional difference from the user's perspective, but this design reduces the cost of a modem by moving most of the processing power into inexpensive software instead of expensive hardware DSPs or discrete components. Soft modems of both types are either internal cards or connect over external buses such as USB. They never utilize RS-232, because they require high-bandwidth channels to the host computer to carry the raw audio signals generated (sent) or analyzed (received) by software. Since the interface is not RS-232, there is no standard for communicating with the device directly. Instead, soft modems come with drivers which create an emulated RS-232 port, with which standard modem software (such as an operating system dialer application) can communicate.

"Voice" and "fax" are terms added to describe any dial modem that is capable of recording/playing audio or transmitting/receiving faxes. Some modems are capable of all three functions.[48] Voice modems are used for computer telephony integration applications as simple as placing/receiving calls directly through a computer with a headset, and as complex as fully automated robocalling systems. Fax modems can be used for computer-based faxing, in which faxes are sent and received without inbound or outbound faxes ever needing to be printed on paper. This differs from efax, in which faxing occurs over the internet, in some cases involving no phone lines whatsoever.
The ITU-T V.150.1 Recommendation defines procedures for the interoperation of PSTN-to-IP gateways.[49] In a classic example of this setup, each dial-up modem would connect to a modem relay gateway. The gateways are then connected to an IP network (such as the Internet). The analog connection from the modem is terminated at the gateway and the signal is demodulated. The demodulated control signals are transported over the IP network in an RTP packet type defined as State Signaling Events (SSEs). The data from the demodulated signal is sent over the IP network via a transport protocol (also defined as an RTP payload) called Simple Packet Relay Transport (SPRT). Both the SSE and SPRT packet formats are defined in the V.150.1 Recommendation (Annex C and Annex B respectively). The gateway at the remote end that receives the packets uses the information to re-modulate the signal for the modem connected at that end. While the V.150.1 Recommendation is not widely deployed, a pared-down version of the recommendation called "Minimum Essential Requirements (MER) for V.150.1 Gateways" (SCIP-216) is used in Secure Telephony applications.[50]

While traditionally a hardware device, fully software-based modems with the ability to be deployed in a cloud environment (such as Microsoft Azure or AWS) do exist.[51] Leveraging a Voice-over-IP (VoIP) connection through a SIP trunk, the modulated audio samples are generated and sent over an IP network via RTP and an uncompressed audio codec (such as G.711 μ-law or a-law).

A 1994 Software Publishers Association study found that although 60% of computers in US households had a modem, only 7% of households went online.[52] A CEA study in 2006 found that dial-up Internet access was declining in the US. In 2000, dial-up Internet connections accounted for 74% of all US residential Internet connections. The United States demographic pattern for dial-up modem users per capita has been more or less mirrored in Canada and Australia for the past 20 years. Dial-up modem use in the US had dropped to 60% by 2003, and stood at 36% in 2006. Voiceband modems were once the most popular means of Internet access in the US, but with the advent of new ways of accessing the Internet, the traditional 56K modem was losing popularity. The dial-up modem is still widely used by customers in rural areas where DSL, cable, wireless broadband, satellite, or fiber-optic service is either not available or where customers are unwilling to pay what the available broadband companies charge.[53] In its 2012 annual report, AOL showed it still collected around $700 million in fees from about three million dial-up users.

TDD devices are a subset of the teleprinter intended for use by the deaf or hard of hearing, essentially a small teletype with a built-in dial-up modem and acoustic coupler. The first models, produced in 1964, utilized FSK modulation much like early computer modems.

A leased-line modem also uses ordinary phone wiring, like dial-up and DSL, but does not use the same network topology. While dial-up uses a normal phone line and connects through the telephone switching system, and DSL uses a normal phone line but connects to equipment at the telco central office, leased lines do not terminate at the telco. Leased lines are pairs of telephone wire that have been connected together at one or more telco central offices so that they form a continuous circuit between two subscriber locations, such as a business's headquarters and a satellite office.
They provide no power or dial tone; they are simply a pair of wires connected at two distant locations. A dial-up modem will not function across this type of line, because it does not provide the power, dial tone and switching that those modems require. However, a modem with leased-line capability can operate over such a line, and in fact can have greater performance: because the line does not pass through the telco switching equipment, the signal is not filtered, and therefore greater bandwidth is available. Leased-line modems can operate in 2-wire or 4-wire mode. The former uses a single pair of wires and can only transmit in one direction at a time, while the latter uses two pairs of wires and can transmit in both directions simultaneously. When two pairs are available, bandwidth can be as high as 1.5 Mbit/s, a full data T1 circuit.[54] While the slower leased-line modems used interfaces such as RS-232, the faster wideband modems used interfaces such as V.35.

The term broadband was previously[55][56] used to describe communications faster than what was available on voice-grade channels. The term gained widespread adoption in the late 1990s to describe internet access technology exceeding the 56 kilobit/s maximum of dial-up. There are many broadband technologies, such as various DSL (digital subscriber line) technologies and cable broadband. DSL technologies such as ADSL, HDSL, and VDSL use telephone lines (wires that were installed by a telephone company and originally intended for use by a telephone subscriber) but do not utilize most of the rest of the telephone system. Their signals are not sent through ordinary phone exchanges, but are instead received by special equipment (a DSLAM) at the telephone company central office. Because the signal does not pass through the telephone exchange, no "dialing" is required, and the bandwidth constraints of an ordinary voice call are not imposed. This allows much higher frequencies, and therefore much faster speeds. ADSL in particular is designed to permit voice calls and data usage over the same line simultaneously. Similarly, cable modems use infrastructure originally intended to carry television signals, and like DSL, typically permit receiving television signals at the same time as broadband internet service. Other broadband modems include FTTx modems, satellite modems, and power-line modems.

Different terms are used for broadband modems, because they frequently contain more than just a modulation/demodulation component. Because high-speed connections are frequently used by multiple computers at once, many broadband modems do not have direct (e.g. USB) PC connections; rather, they connect over a network such as Ethernet or Wi-Fi. Early broadband modems offered Ethernet handoff, allowing the use of one or more public IP addresses, but no other services such as NAT and DHCP that would allow multiple computers to share one connection. This led to many consumers purchasing separate "broadband routers", placed between the modem and their network, to perform these functions.[57][58] Eventually, ISPs began providing residential gateways, which combined the modem and broadband router into a single package that provided routing, NAT, security features, and even Wi-Fi access in addition to modem functionality, so that subscribers could connect their entire household without purchasing any extra equipment. Even later, these devices were extended to provide "triple play" features such as telephony and television service.
Nonetheless, these devices are still often referred to simply as "modems" by service providers and manufacturers.[59] Consequently, the terms "modem", "router", and "gateway" are now used interchangeably in casual speech, but in a technical context "modem" may carry a specific connotation of basic functionality with no routing or other features, while the others describe a device with features such as NAT.[60][61] Broadband modems may also handle authentication such as PPPoE. While it is often possible to authenticate a broadband connection from a user's PC, as was the case with dial-up internet service, moving this task to the broadband modem allows it to establish and maintain the connection itself, which makes sharing access between PCs easier, since each one does not have to authenticate separately. Broadband modems typically remain authenticated to the ISP as long as they are powered on.

Any communication technology sending digital data wirelessly involves a modem. This includes direct broadcast satellite, WiFi, WiMax, mobile phones, GPS, Bluetooth and NFC. Modern telecommunications and data networks also make extensive use of radio modems where long-distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fiber optic is not economical. Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information that is modulated onto a carrier frequency to allow many wireless communication links to work simultaneously on different frequencies.

Transparent modems operate in a manner similar to their phone-line modem cousins. Typically, they are half duplex, meaning that they cannot send and receive data at the same time. Typically, transparent modems are polled in a round-robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. Transparent modems are most commonly used by utility companies for data collection. Smart modems come with media access controllers inside, which prevents random data from colliding and resends data that is not correctly received. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short-range modulation scheme that is used on a large scale throughout the world.

Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, 5G, etc.) are known as mobile broadband modems (sometimes also called wireless modems). Wireless modems can be embedded inside a laptop, mobile phone or other device, or be connected externally. External wireless modems include connect cards, USB modems, and cellular routers. Most GSM wireless modems come with an integrated SIM card holder (e.g. the Huawei E220 and Sierra 881). Some models are also provided with a microSD memory slot and/or a jack for an additional external antenna (e.g. the Huawei E1762 and Sierra Compass 885).[62][63] The CDMA (EVDO) versions do not typically use R-UIM cards, but use an Electronic Serial Number (ESN) instead. Until the end of April 2011, worldwide shipments of USB modems surpassed embedded 3G and 4G modules by 3:1, because USB modems can be easily discarded.
Embedded modems may overtake separate modems as tablet sales grow and the incremental cost of the modems shrinks, and by 2016 the ratio was expected to change to 1:1.[64] Like mobile phones, mobile broadband modems can be SIM-locked to a particular network provider. Unlocking a modem is achieved the same way as unlocking a phone, by using an "unlock code".

A device that connects to a fiber-optic network is known as an optical network terminal (ONT) or optical network unit (ONU). These are commonly used in fiber-to-the-home installations, installed inside or outside a house to convert the optical medium to a copper Ethernet interface, after which a router or gateway is often installed to perform authentication, routing, NAT, and other typical consumer internet functions, in addition to "triple play" features such as telephony and television service. They are not modems in the strict sense, although they perform a similar function and are sometimes referred to as modems. Fiber-optic systems can use quadrature amplitude modulation to maximize throughput. 16QAM uses a 16-point constellation to send four bits per symbol, with speeds on the order of 200 or 400 gigabits per second.[65][66] 64QAM uses a 64-point constellation to send six bits per symbol, with speeds up to 65 terabits per second. Although this technology has been announced, it may not yet be commonly used.[67][68][69]

Although the name modem is seldom used, some high-speed home networking applications do use modems, such as powerline Ethernet. The G.hn standard, for instance, developed by ITU-T, provides a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines, and coaxial cables). G.hn devices use orthogonal frequency-division multiplexing (OFDM) to modulate a digital signal for transmission over the wire. As described above, technologies like Wi-Fi and Bluetooth also use modems to communicate over radio at short distances.

A null modem cable is a specially wired cable connected between the serial ports of two devices, with the transmit and receive lines reversed. It is used to connect two devices directly without a modem. The same software or hardware typically used with modems (such as Procomm or Minicom) can be used with this type of connection. A null modem adapter is a small device with plugs at both ends which is placed on the termination of a normal "straight-through" serial cable to convert it into a null-modem cable.

A "short-haul modem" is a device that bridges the gap between leased-line and dial-up modems. Like a leased-line modem, it transmits over "bare" lines with no power or telco switching equipment, but it is not intended for the distances that leased lines can achieve. Ranges up to several miles are possible, but significantly, short-haul modems can be used for medium distances, greater than the maximum length of a basic serial cable but still relatively short, such as within a single building or campus. This allows a serial connection to be extended for perhaps only several hundred to several thousand feet, a case where obtaining an entire telephone or leased line would be overkill. While some short-haul modems do in fact use modulation, low-end devices (for reasons of cost or power consumption) are simple "line drivers" that increase the level of the digital signal but do not modulate it. These are not technically modems, but the same terminology is used for them.[70]
https://en.wikipedia.org/wiki/Modem#Mobile_broadband
Browsing is a kind of orienting strategy. It is supposed to identify something of relevance for the browsing organism. In the context of humans, it is a metaphor taken from the animal kingdom. It is used, for example, about people browsing open shelves in libraries, window shopping, or browsing databases or the Internet. In library and information science, it is an important subject, both purely theoretically and as an applied science aiming at designing interfaces which support browsing activities for the user.

In 2011, Birger Hjørland provided the following definition: "Browsing is a quick examination of the relevance of a number of objects which may or may not lead to a closer examination or acquisition/selection of (some of) these objects. It is a kind of orienting strategy that is formed by our "theories", "expectations" and "subjectivity"."[1]

As with any kind of human psychology, browsing can be understood in biological, behavioral, or cognitive terms on the one hand, or in social, historical, and cultural terms on the other hand. In 2007, Marcia Bates researched browsing from "behavioural" approaches, while Hjørland (2011a+b)[2][1] defended a social view. Bates found that browsing is rooted in our history as exploratory, motile animals hunting for food and nesting opportunities. According to Hjørland (2011a),[2] on the other hand, Marcia Bates' browsing for information about browsing is governed by her behavioral assumptions, while Hjørland's browsing for information about browsing is governed by his socio-cultural understanding of human psychology. In short: human browsing is based on our conceptions and interests.

Browsing is often understood as a random activity. Dictionary.com, for example, has this definition: "to glance at random through a book, magazine, etc.".[3] Hjørland suggests, however, that browsing is an activity that is governed by our metatheories. We may dynamically change our theories and conceptions, but when we browse, the activity is governed by the interests, conceptions, priorities and metatheories that we have at that time. Therefore, browsing is not totally random.[2]

In 1997, Gary Marchionini[4] wrote: "A fundamental distinction is made between analytical and browsing strategies [...]. Analytical strategies depend on careful planning, the recall of query terms, and iterative query reformulations and examinations of results. Browsing strategies are heuristic and opportunistic and depend on recognizing relevant information. Analytic strategies are batch oriented and half duplex (turn taking) like human conversation, whereas browsing strategies are more interactive, real-time exchanges and collaborations between the information seeker and the information system. Browsing strategies demand a lower cognitive load in advance and a steadier attentional load throughout the information-seeking process."

Some sociologists, such as Berger and Zelditch in 1993, Wagner in 1984, and Wagner & Berger in 1985, have used the term "orienting strategies". They find that orienting strategies should be understood as metatheories: "Consider the very large proportion of sociological theory that is in the form of metatheory. It is discussion about theory: about what concepts it should include, about how those concepts should be linked, and about how theory should be studied.
Similar to Kuhn's paradigms, theories of this sort provide guidelines or strategies for understanding social phenomena and suggest the proper orientation of the theorist to these phenomena; they are orienting strategies. Textbooks in theory frequently focus on orienting strategies such as functionalism, exchange, or ethnomethodology."[5] Sociologists thus use metatheories as orienting strategies. We may generalize and say that all people use metatheories as orienting strategies, and that this is what directs our attention and also our browsing, even when we are not conscious of it.
https://en.wikipedia.org/wiki/Browse
Wikisource is an online wiki-based digital library of free-content textual sources operated by the Wikimedia Foundation. Wikisource is the name of the project as a whole; it is also the name for each instance of that project, one for each language. The project's aim is to host all forms of free text, in many languages, and translations. Originally conceived as an archive to store useful or important historical texts, it has expanded to become a general-content library. The project officially began on November 24, 2003, under the name Project Sourceberg, a play on Project Gutenberg. The name Wikisource was adopted later that year, and the project received its own domain name.

The project holds works that are either in the public domain or freely licensed: professionally published works or historical source documents, not vanity products. Verification was initially made offline, or by trusting the reliability of other digital libraries. Now works are supported by online scans via the ProofreadPage extension, which ensures the reliability and accuracy of the project's texts. Some individual Wikisources, each representing a specific language, now only allow works backed up with scans. While the bulk of its collection are texts, Wikisource as a whole hosts other media, from comics to film to audiobooks. Some Wikisources allow user-generated annotations, subject to the specific policies of the Wikisource in question. The project has come under criticism for lack of reliability, but it is also cited by organisations such as the National Archives and Records Administration.[3] As of May 2025, there are Wikisource subdomains active for 79 languages,[1] comprising a total of 6,443,127 articles and 2,674 recently active editors.[4]

The original concept for Wikisource was as storage for useful or important historical texts. These texts were intended to support Wikipedia articles, by providing primary evidence and original source texts, and to serve as an archive in its own right. The collection was initially focused on important historical and cultural material, distinguishing it from other digital archives like Project Gutenberg.[2] The project was originally called Project Sourceberg during its planning stages (a play on words for Project Gutenberg).[2]

In 2001, there was a dispute on Wikipedia regarding the addition of primary-source materials, leading to edit wars over their inclusion or deletion. Project Sourceberg was suggested as a solution to this. In describing the proposed project, user The Cunctator said, "It would be to Project Gutenberg what Wikipedia is to Nupedia",[5] soon clarifying the statement with "we don't want to try to duplicate Project Gutenberg's efforts; rather, we want to complement them. Perhaps Project Sourceberg can mainly work as an interface for easily linking from Wikipedia to a Project Gutenberg file, and as an interface for people to easily submit new work to PG."[6] Initial comments were skeptical, with Larry Sanger questioning the need for the project, writing "The hard question, I guess, is why we are reinventing the wheel, when Project Gutenberg already exists? We'd want to complement Project Gutenberg—how, exactly?",[7] and Jimmy Wales adding "like Larry, I'm interested that we think it over to see what we can add to Project Gutenberg. It seems unlikely that primary sources should in general be editable by anyone — I mean, Shakespeare is Shakespeare, unlike our commentary on his work, which is whatever we want it to be."[8]

The project began its activity at ps.wikipedia.org.
The contributors understood the "PS" subdomain to mean either "primary sources" or Project Sourceberg.[5] However, this resulted in Project Sourceberg occupying the subdomain of the Pashto Wikipedia (the ISO language code of the Pashto language is "ps"). Project Sourceberg officially launched on November 24, 2003, when it received its own temporary URL at sources.wikipedia.org, and all texts and discussions hosted on ps.wikipedia.org were moved to the temporary address. A vote on the project's name changed it to Wikisource on December 6, 2003. Despite the change in name, the project did not move to its permanent URL (http://wikisource.org/) until July 23, 2004.[9]

Since Wikisource was initially called "Project Sourceberg", its first logo was a picture of an iceberg.[2] Two votes conducted to choose a successor were inconclusive, and the original logo remained until 2006. Finally, for both legal and technical reasons (because the picture's license was inappropriate for a Wikimedia Foundation logo and because a photo cannot scale properly), a stylized vector iceberg inspired by the original picture was mandated to serve as the project's logo. The first prominent use of Wikisource's slogan, The Free Library, was at the project's multilingual portal, when it was redesigned based upon the Wikipedia portal on August 27, 2005.[10] As in the Wikipedia portal, the Wikisource slogan appears around the logo in the project's ten largest languages. Clicking on the portal's central images (the iceberg logo in the center and the "Wikisource" heading at the top of the page) links to a list of translations for Wikisource and The Free Library in 60 languages.

A MediaWiki extension called ProofreadPage was developed for Wikisource by the developer ThomasV to improve the vetting of transcriptions by the project. This displays pages of scanned works side by side with the text relating to that page, allowing the text to be proofread and its accuracy later verified independently by any other editor.[11][12][13] Once a book, or other text, has been scanned, the raw images can be modified with image-processing software to correct for page rotations and other problems. The retouched images can then be converted into a PDF or DjVu file and uploaded to either Wikisource or Wikimedia Commons.[11] This system assists editors in ensuring the accuracy of texts on Wikisource. The original page scans of completed works remain available to any user, so that errors may be corrected later and readers may check texts against the originals. ProofreadPage also allows greater participation, since access to a physical copy of the original work is not necessary to be able to contribute to the project once images have been uploaded.

Within two weeks of the project's official start at sources.wikipedia.org, over 1,000 pages had been created, with approximately 200 of these designated as actual articles. On January 4, 2004, Wikisource welcomed its 100th registered user. In early July 2004 the number of articles exceeded 2,400, and more than 500 users had registered. On April 30, 2005, there were 2,667 registered users (including 18 administrators) and almost 19,000 articles. The project passed its 96,000th edit that same day. On November 27, 2005, the English Wikisource passed 20,000 text-units in its third month of existence, already holding more texts than did the entire project in April (before the move to language subdomains). On May 10, 2006, the first Wikisource Portal was created.
On February 14, 2008, the English Wikisource passed 100,000 text-units with Chapter LXXIV of Six Months at the White House, a memoir by painter Francis Bicknell Carpenter.[14] In November 2011, the 250,000 text-unit milestone was passed.

Wikisource collects and stores in digital format previously published texts, including novels, non-fiction works, letters, speeches, constitutional and historical documents, laws, and a range of other documents. All texts collected are either free of copyright or released under the Creative Commons Attribution/Share-Alike License.[2] Texts in all languages are welcomed, as are translations. In addition to texts, Wikisource hosts material such as comics, films, recordings and spoken-word works.[2] All texts held by Wikisource must have been previously published; the project does not host "vanity press" books or documents produced by its contributors.[2][15][16][17][18]

A scanned source is preferred on many Wikisources and required on some. Most Wikisources will, however, accept works transcribed from offline sources or acquired from other digital libraries.[2] The requirement for prior publication can also be waived in a small number of cases if the work is a source document of notable historical importance. The legal requirement for works to be licensed or free of copyright remains constant.

The only original pieces accepted by Wikisource are annotations and translations.[19] Wikisource and its sister project Wikibooks have the capacity for annotated editions of texts. On Wikisource, the annotations are supplementary to the original text, which remains the primary objective of the project. By contrast, on Wikibooks the annotations are primary, with the original text as only a reference or supplement, if present at all.[18] Annotated editions are more popular on the German Wikisource.[18] The project also accommodates translations of texts provided by its users. A significant translation on the English Wikisource is the Wiki Bible project, intended to create a new, "laissez-faire translation" of The Bible.[20]

A separate Hebrew version of Wikisource (he.wikisource.org) was created in August 2004. The need for a language-specific Hebrew website derived from the difficulty of typing and editing Hebrew texts in a left-to-right environment (Hebrew is written right-to-left). In the ensuing months, contributors in other languages, including German, requested their own wikis, but a December vote on the creation of separate language domains was inconclusive. Finally, a second vote that ended May 12, 2005, supported the adoption of separate language subdomains at Wikisource by a large margin, allowing each language to host its texts on its own wiki.

An initial wave of 14 languages was set up on August 23, 2005.[21] The new languages did not include English, but the code en: was temporarily set to redirect to the main website (wikisource.org). At this point the Wikisource community, through a mass project of manually sorting thousands of pages and categories by language, prepared for a second wave of page imports to local wikis. On September 11, 2005, the wikisource.org wiki was reconfigured to enable the English version, along with 8 other languages that were created early that morning and late the night before.[22] Three more languages were created on March 29, 2006,[23] and then another large wave of 14 language domains was created on June 2, 2006.[24]

Languages without subdomains are locally incubated. As of September 2020, 182 languages are hosted locally.
As of May 2025, there are Wikisource subdomains for 81 languages, of which 79 are active and 2 are closed.[1] The active sites have 6,443,127 articles and the closed sites have 13 articles.[4] There are 5,053,593 registered users, of whom 2,674 are recently active.[4]

The top ten Wikisource language projects by mainspace article count:[4] For a complete list with totals, see Wikimedia Statistics.[25]

During the move to language subdomains, the community requested that the main wikisource.org website remain a functioning wiki, in order to serve three purposes:

The idea of a project-specific coordination wiki, first realized at Wikisource, also took hold in another Wikimedia project, namely at Wikiversity's Beta Wiki. Like wikisource.org, it serves Wikiversity coordination in all languages, and as a language incubator, but unlike Wikisource, its Main Page does not serve as its multilingual portal.[27]

Wikipedia co-founder Larry Sanger criticised Wikisource and sister project Wiktionary in 2011, after he left the project, saying that their collaborative nature and technology means that there is no oversight by experts, and alleging that their content is therefore not reliable.[28]

Bart D. Ehrman, a New Testament scholar and professor of religious studies at the University of North Carolina at Chapel Hill, has criticised the English Wikisource's project to create a user-generated translation of the Bible, saying "Democratization isn't necessarily good for scholarship."[20] Richard Elliott Friedman, an Old Testament scholar and professor of Jewish studies at the University of Georgia, identified errors in the translation of the Book of Genesis as of 2008.[20]

In 2010, Wikimedia France signed an agreement with the Bibliothèque nationale de France (National Library of France) to add scans from its own Gallica digital library to French Wikisource. Fourteen hundred public-domain French texts were added to the Wikisource library as a result, via upload to the Wikimedia Commons. The quality of the transcriptions, previously automatically generated by optical character recognition (OCR), was expected to be improved by Wikisource's human proofreaders.[29][30][31]

In 2011, the English Wikisource received many high-quality scans of documents from the US National Archives and Records Administration (NARA) as part of their efforts "to increase the accessibility and visibility of its holdings." Processing and upload to Commons of these documents, along with many images from the NARA collection, was facilitated by a NARA Wikimedian in residence, Dominic McDevitt-Parks. Many of these documents have been transcribed and proofread by the Wikisource community and are featured as links in the National Archives' own online catalog.[32]
https://en.wikipedia.org/wiki/Wikisource
In complex analysis, the Riemann mapping theorem states that if U is a non-empty simply connected open subset of the complex number plane C which is not all of C, then there exists a biholomorphic mapping f (i.e. a bijective holomorphic mapping whose inverse is also holomorphic) from U onto the open unit disk D = {z ∈ C : |z| < 1}. This mapping is known as a Riemann mapping.[1]

Intuitively, the condition that U be simply connected means that U does not contain any "holes". The fact that f is biholomorphic implies that it is a conformal map and therefore angle-preserving. Such a map may be interpreted as preserving the shape of any sufficiently small figure, while possibly rotating and scaling (but not reflecting) it.

Henri Poincaré proved that the map f is unique up to rotation and recentering: if z_0 is an element of U and φ is an arbitrary angle, then there exists precisely one f as above such that f(z_0) = 0 and such that the argument of the derivative of f at the point z_0 is equal to φ. This is an easy consequence of the Schwarz lemma. As a corollary of the theorem, any two simply connected open subsets of the Riemann sphere which both lack at least two points of the sphere can be conformally mapped into each other.

The theorem was stated (under the assumption that the boundary of U is piecewise smooth) by Bernhard Riemann in 1851 in his PhD thesis. Lars Ahlfors wrote once, concerning the original formulation of the theorem, that it was "ultimately formulated in terms which would defy any attempt of proof, even with modern methods".[2] Riemann's flawed proof depended on the Dirichlet principle (which was named by Riemann himself), which was considered sound at the time. However, Karl Weierstrass found that this principle was not universally valid. Later, David Hilbert was able to prove that, to a large extent, the Dirichlet principle is valid under the hypotheses Riemann was working with. However, in order to be valid, the Dirichlet principle needs certain hypotheses concerning the boundary of U (namely, that it is a Jordan curve) which are not valid for simply connected domains in general.

The first rigorous proof of the theorem was given by William Fogg Osgood in 1900. He proved the existence of Green's function on arbitrary simply connected domains other than C itself; this established the Riemann mapping theorem.[3]

Constantin Carathéodory gave another proof of the theorem in 1912, which was the first to rely purely on the methods of function theory rather than potential theory.[4] His proof used Montel's concept of normal families, which became the standard method of proof in textbooks.[5] Carathéodory continued in 1913 by resolving the additional question of whether the Riemann mapping between the domains can be extended to a homeomorphism of the boundaries (see Carathéodory's theorem).[6]

Carathéodory's proof used Riemann surfaces and it was simplified by Paul Koebe two years later in a way that did not require them. Another proof, due to Lipót Fejér and to Frigyes Riesz, was published in 1922 and it was rather shorter than the previous ones. In this proof, like in Riemann's proof, the desired mapping was obtained as the solution of an extremal problem.
The Fejér–Riesz proof was further simplified by Alexander Ostrowski and by Carathéodory.[7]

The following points detail the uniqueness and power of the Riemann mapping theorem:

Theorem. For an open domain G ⊂ C the following conditions are equivalent:[10]
(1) G is simply connected;
(2) the integral of every holomorphic function f along any piecewise smooth closed curve in G vanishes;
(3) every holomorphic function on G has a holomorphic primitive;
(4) every nowhere-vanishing holomorphic function on G has a holomorphic logarithm;
(5) every nowhere-vanishing holomorphic function on G has a holomorphic square root;
(6) for any w not in G, the winding number of any piecewise smooth closed curve in G about w is 0;
(7) the complement of G in the extended complex plane C ∪ {∞} is connected.

(1) ⇒ (2) because any continuous closed curve, with base point a ∈ G, can be continuously deformed to the constant curve a. So the line integral of f dz over the curve is 0.

(2) ⇒ (3) because the integral over any piecewise smooth path γ from a to z can be used to define a primitive.

(3) ⇒ (4) by integrating f⁻¹ df/dz along γ from a to z to give a branch of the logarithm.

(4) ⇒ (5) by taking the square root as g(z) = exp(f(z)/2), where f is a holomorphic choice of logarithm.

(5) ⇒ (6) because if γ is a piecewise smooth closed curve and f_n are successive square roots of z − w for w outside G, then the winding number of f_n ∘ γ about w is 2^n times the winding number of γ about 0. Hence the winding number of γ about w must be divisible by 2^n for all n, so it must equal 0.

(6) ⇒ (7) for otherwise the extended plane (C ∪ {∞}) \ G can be written as the disjoint union of two open and closed sets A and B with ∞ ∈ B and A bounded. Let δ > 0 be the shortest Euclidean distance between A and B and build a square grid on C with length δ/4 with a point a of A at the centre of a square. Let C be the compact set of the union of all squares with distance ≤ δ/4 from A. Then C ∩ B = ∅ and ∂C does not meet A or B: it consists of finitely many horizontal and vertical segments in G forming a finite number of closed rectangular paths γ_j in G. Taking C_i to be all the squares covering A, the integral (1/2π) ∫_{∂C} d arg(z − a) equals the sum of the winding numbers of the C_i about a, thus giving 1. On the other hand, the sum of the winding numbers of the γ_j about a equals 1. Hence the winding number of at least one of the γ_j about a is non-zero.

(7) ⇒ (1) This is a purely topological argument. Let γ be a piecewise smooth closed curve based at z_0 ∈ G.
By approximation, γ is in the same homotopy class as a rectangular path on the square grid of length δ > 0 based at z_0; such a rectangular path is determined by a succession of N consecutive directed vertical and horizontal sides. By induction on N, such a path can be deformed to a constant path at a corner of the grid. If the path intersects itself at a point z_1, then it breaks up into two rectangular paths of length < N, and thus can be deformed to the constant path at z_1 by the induction hypothesis and elementary properties of the fundamental group. The reasoning follows a "northeast argument":[11][12] in the non-self-intersecting path there will be a corner z_0 with largest real part (easterly) and then, amongst those, one with largest imaginary part (northerly). Reversing direction if need be, the path goes from z_0 − δ to z_0 and then to w_0 = z_0 − inδ for n ≥ 1, and then goes leftwards to w_0 − δ. Let R be the open rectangle with these vertices. The winding number of the path is 0 for points to the right of the vertical segment from z_0 to w_0 and −1 for points to the left, and hence inside R. Since the winding number is 0 off G, R lies in G. If z is a point of the path, it must lie in G; if z is on ∂R but not on the path, by continuity the winding number of the path about z is −1, so z must also lie in G. Hence R ∪ ∂R ⊂ G. But in this case the path can be deformed by replacing the three sides of the rectangle by the fourth, resulting in two fewer sides (with self-intersections permitted).

Definitions. A family F of holomorphic functions on an open domain is said to be normal if any sequence of functions in F has a subsequence that converges to a holomorphic function uniformly on compacta. A family F is compact if whenever a sequence f_n lies in F and converges uniformly to f on compacta, then f also lies in F. A family F is said to be locally bounded if its functions are uniformly bounded on each compact disk. Differentiating the Cauchy integral formula, it follows that the derivatives of a locally bounded family are also locally bounded.[15][16]

Remark. As a consequence of the Riemann mapping theorem, every simply connected domain in the plane is homeomorphic to the unit disk. If points are omitted, this follows from the theorem. For the whole plane, the homeomorphism φ(z) = z/(1 + |z|) gives a homeomorphism of C onto D.

Koebe's uniformization theorem for normal families also generalizes to yield uniformizers f for multiply-connected domains to finite parallel slit domains, where the slits have angle θ to the x-axis.
Thus if G is a domain in C ∪ {∞} containing ∞ and bounded by finitely many Jordan contours, there is a unique univalent function f on G with f(z) = z + a_1 z^{-1} + a_2 z^{-2} + ⋯ near ∞, maximizing Re(e^{−2iθ} a_1) and having image f(G) a parallel slit domain with angle θ to the x-axis.[23][24][25]

The first proof that parallel slit domains were canonical domains in the multiply connected case was given by David Hilbert in 1909. Jenkins (1958), in his book on univalent functions and conformal mappings, gave a treatment based on the work of Herbert Grötzsch and René de Possel from the early 1930s; it was the precursor of quasiconformal mappings and quadratic differentials, later developed as the technique of extremal metric due to Oswald Teichmüller.[26] Menahem Schiffer gave a treatment based on very general variational principles, summarised in addresses he gave to the International Congress of Mathematicians in 1950 and 1958. In a theorem on "boundary variation" (to distinguish it from "interior variation"), he derived a differential equation and inequality that relied on a measure-theoretic characterisation of straight-line segments due to Ughtred Shuttleworth Haslam-Jones from 1936. Haslam-Jones' proof was regarded as difficult and was only given a satisfactory proof in the mid-1970s by Schober and Campbell–Lamoureux.[27][28][29]

Schiff (1993) gave a proof of uniformization for parallel slit domains which was similar to the Riemann mapping theorem. To simplify notation, horizontal slits will be taken. Firstly, by Bieberbach's inequality, any univalent function f(z) = z + c z² + ⋯ with z in the open unit disk must satisfy |c| ≤ 2. As a consequence, if f(z) = z + a_0 + a_1 z^{-1} + ⋯ is univalent in |z| > R, then |f(z) − a_0| ≤ 2|z|. To see this, take S > R and apply the Schwarz lemma to an associated univalent function of z in the unit disk, with a constant b chosen so that its denominator is nowhere vanishing.

Next, the function f_R(z) = z + R²/z is characterized by an "extremal condition" as the unique univalent function in |z| > R of the form z + a_1 z^{-1} + ⋯ that maximises Re(a_1): this is an immediate consequence of Grönwall's area theorem, applied to the family of univalent functions f(zR)/R in |z| > 1.[30][31]

To prove now that the multiply connected domain G ⊂ C ∪ {∞} can be uniformized by a horizontal parallel slit conformal mapping, take R large enough that ∂G lies in the open disk |z| < R. For S > R, univalency and the estimate |f(z)| ≤ 2|z| imply that, if z lies in G with |z| ≤ S, then |f(z)| ≤ 2S. Since the family of univalent f are locally bounded in G \ {∞}, by Montel's theorem they form a normal family.
Furthermore, if f_n is in the family and tends to f uniformly on compacta, then f is also in the family and each coefficient of the Laurent expansion at ∞ of the f_n tends to the corresponding coefficient of f. This applies in particular to the coefficient a_1: so by compactness there is a univalent f which maximizes Re(a_1). To check that f is the required parallel slit transformation, suppose reductio ad absurdum that f(G) = G_1 has a compact and connected component K of its boundary which is not a horizontal slit. Then the complement G_2 of K in C ∪ {∞} is simply connected with G_2 ⊃ G_1. By the Riemann mapping theorem there is a conformal mapping h such that h(G_2) is C with a horizontal slit removed, normalized so that h(w) = w + b_1 w^{-1} + ⋯ near ∞. So we have Re(a_1 + b_1) ≤ Re(a_1) by the extremality of f. Therefore, Re(b_1) ≤ 0. On the other hand, by the Riemann mapping theorem there is a conformal mapping k from |w| > S onto G_2. Then, by the strict maximality for the slit mapping in the previous paragraph, we can see that Re(c_1) < Re(b_1 + c_1), where c_1 is the corresponding Laurent coefficient of k, so that Re(b_1) > 0. The two inequalities for Re(b_1) are contradictory.[32][33][34]

The proof of the uniqueness of the conformal parallel slit transformation is given in Goluzin (1969) and Grunsky (1978). Applying the inverse of the Joukowsky transform h to the horizontal slit domain, it can be assumed that G is a domain bounded by the unit circle C_0 and contains analytic arcs C_i and isolated points (the images under the inverse of the Joukowsky transform of the other parallel horizontal slits). Thus, taking a fixed a ∈ G, there is a univalent mapping F_0(w) with its image a horizontal slit domain. Suppose that F_1(w) is another such uniformizer, normalized in the same way at a. The images under F_0 or F_1 of each C_i have a fixed y-coordinate, so are horizontal segments. On the other hand, F_2(w) = F_0(w) − F_1(w) is holomorphic in G. If it is constant, then it must be identically zero since F_2(a) = 0. Suppose F_2 is non-constant; then by assumption F_2(C_i) are all horizontal lines. If t is not in one of these lines, Cauchy's argument principle shows that the number of solutions of F_2(w) = t in G is zero (any t will eventually be encircled by contours in G close to the C_i's). This contradicts the fact that the non-constant holomorphic function F_2 is an open mapping.[35]

Given U and a point z_0 ∈ U, we want to construct a function f which maps U to the unit disk and z_0 to 0.
For this sketch, we will assume that U is bounded and its boundary is smooth, much like Riemann did. Write f(z) = (z − z_0) e^{g(z)}, where g = u + iv is some (to be determined) holomorphic function with real part u and imaginary part v. It is then clear that z_0 is the only zero of f. We require |f(z)| = 1 for z ∈ ∂U, so we need u(z) = −log|z − z_0| on the boundary. Since u is the real part of a holomorphic function, we know that u is necessarily a harmonic function; i.e., it satisfies Laplace's equation. The question then becomes: does a real-valued harmonic function u exist that is defined on all of U and has the given boundary condition? The positive answer is provided by the Dirichlet principle. Once the existence of u has been established, the Cauchy–Riemann equations for the holomorphic function g allow us to find v (this argument depends on the assumption that U be simply connected). Once u and v have been constructed, one has to check that the resulting function f does indeed have all the required properties.[36]

The Riemann mapping theorem can be generalized to the context of Riemann surfaces: If U is a non-empty simply-connected open subset of a Riemann surface, then U is biholomorphic to one of the following: the Riemann sphere, the complex plane C, or the unit disk D. This is known as the uniformization theorem.

In the case of a simply connected bounded domain with smooth boundary, the Riemann mapping function and all its derivatives extend by continuity to the closure of the domain. This can be proved using regularity properties of solutions of the Dirichlet boundary value problem, which follow either from the theory of Sobolev spaces for planar domains or from classical potential theory. Other methods for proving the smooth Riemann mapping theorem include the theory of kernel functions[37] or the Beltrami equation.

Computational conformal mapping is prominently featured in problems of applied analysis and mathematical physics, as well as in engineering disciplines, such as image processing. In the early 1980s an elementary algorithm for computing conformal maps was discovered. Given points z_0, …, z_n in the plane, the algorithm computes an explicit conformal map of the unit disk onto a region bounded by a Jordan curve γ with z_0, …, z_n ∈ γ. This algorithm converges for Jordan regions[38] in the sense of uniformly close boundaries. There are corresponding uniform estimates on the closed region and the closed disc for the mapping functions and their inverses. Improved estimates are obtained if the data points lie on a C¹ curve or a K-quasicircle. The algorithm was discovered as an approximate method for conformal welding; however, it can also be viewed as a discretization of the Loewner differential equation.[39]

The following is known about numerically approximating the conformal mapping between two planar domains.[40]

Positive results:

Negative results:
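To make the theorem concrete, a classical explicit example (added here for illustration; it is not discussed in the text above) is the Cayley transform, which realizes a Riemann mapping for the upper half-plane H = {z : Im z > 0}, a simply connected proper subdomain of C:

\[
  f(z) = \frac{z - i}{z + i}, \qquad f \colon H \to D,
  \qquad\text{with holomorphic inverse}\qquad
  f^{-1}(w) = i\,\frac{1 + w}{1 - w}.
\]

Here f(i) = 0, and, in accordance with the uniqueness statement above, every Riemann mapping of H onto D sending i to 0 differs from f only by a post-composed rotation w ↦ e^{iφ} w of the disk.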
https://en.wikipedia.org/wiki/Riemann_mapping_theorem
The sensitivity of an electronic device, such as a communications system receiver, or a detection device, such as a PIN diode, is the minimum magnitude of input signal required to produce a specified output signal having a specified signal-to-noise ratio, or other specified criteria. In general, it is the signal level required for a particular quality of received information.[1]

In signal processing, sensitivity also relates to bandwidth and noise floor, as is explained in more detail below.

In the field of electronics, different definitions are used for sensitivity. The IEEE dictionary[2][3] states: "Definitions of sensitivity fall into two contrasting categories." It also provides multiple definitions relevant to sensors, among which 1: "(measuring devices) The ratio of the magnitude of its response to the magnitude of the quantity measured." and 2: "(radio receiver or similar device) Taken as the minimum input signal required to produce a specified output signal having a specified signal-to-noise ratio." The first of these definitions is similar to the definition of responsivity, and as a consequence sensitivity is sometimes considered to be improperly used as a synonym for responsivity,[4][5] and it is argued that the second definition, which is closely related to the detection limit, is a better indicator of the performance of a measuring system.[6]

To summarize, two contrasting definitions of sensitivity are used in the field of electronics.

The sensitivity of a microphone is usually expressed as the sound field strength in decibels (dB) relative to 1 V/Pa (Pa = N/m²) or as the transfer factor in millivolts per pascal (mV/Pa) into an open circuit or into a 1 kiloohm load. The sensitivity of a hydrophone is usually expressed as dB relative to 1 V/μPa.[7]

The sensitivity of a loudspeaker is usually expressed as dB / 2.83 V RMS at 1 metre. This is not the same as the electrical efficiency; see Efficiency vs sensitivity. This is an example where sensitivity is defined as the ratio of the sensor's response to the quantity measured. One should realize that, when using this definition to compare sensors, the sensitivity of the sensor might depend on components like output voltage amplifiers that can increase the sensor response, such that the sensitivity is not a pure figure of merit of the sensor alone, but of the combination of all components in the signal path from input to response.

Sensitivity in a receiver, such as a radio receiver, indicates its capability to extract information from a weak signal, quantified as the lowest signal level that can be useful.[8] It is mathematically defined as the minimum input signal S_i required to produce a specified signal-to-noise (S/N) ratio at the output port of the receiver, and equals the mean noise power at the input port of the receiver times the minimum required signal-to-noise ratio at the output of the receiver. The same formula can also be expressed in terms of the noise factor of the receiver. Because receiver sensitivity indicates how faint an input signal can be to be successfully received by the receiver, the lower the power level, the better. Lower input signal power for a given S/N ratio means better sensitivity, since the receiver's contribution to the noise is smaller. When the power is expressed in dBm, the larger the absolute value of the negative number, the better the receive sensitivity. For example, a receiver sensitivity of −98 dBm is better than a receive sensitivity of −95 dBm by 3 dB, or a factor of two.
In other words, at a specified data rate, a receiver with a −98 dBm sensitivity can hear (or extract usable audio, video or data from) signals that are half the power of those heard by a receiver with a −95 dBm sensitivity.

For electronic sensors, the input signal S_i can be of many types, like position, force, acceleration, pressure, or magnetic field. The output signal for an electronic analog sensor is usually a voltage or a current signal S_o. The responsivity of an ideal linear sensor in the absence of noise is defined as R = S_o/S_i, whereas for nonlinear sensors it is defined as the local slope dS_o/dS_i. In the absence of noise and signals at the input, the sensor is assumed to generate a constant intrinsic output noise N_oi. To reach a specified signal-to-noise ratio at the output, SNR_o = S_o/N_oi, one combines these equations and obtains the following idealized equation for the sensitivity[5] S, which is equal to the value of the input signal S_{i,SNR_o} that results in the specified signal-to-noise ratio SNR_o at the output:

S = S_{i,SNR_o} = (N_oi / R) · SNR_o

This equation shows that sensor sensitivity can be decreased (= improved) by either reducing the intrinsic noise of the sensor N_oi or by increasing its responsivity R. This is an example of a case where sensitivity is defined as the minimum input signal required to produce a specified output signal having a specified signal-to-noise ratio.[2] This definition has the advantage that the sensitivity is closely related to the detection limit of a sensor if the minimum detectable SNR_o is specified. The choice of the SNR_o used in the definition of sensitivity depends on the required confidence level for a signal to be reliably detected (confidence (statistics)), and typically lies between 1 and 10.

The sensitivity depends on parameters like bandwidth BW or integration time τ = 1/(2·BW) (as explained here: NEP), because the noise level can be reduced by signal averaging, usually resulting in a reduction of the noise amplitude as N_oi ∝ 1/√τ, where τ is the integration time over which the signal is averaged. A measure of sensitivity independent of bandwidth can be provided by using the amplitude or power spectral density of the noise and/or signals (S_i, S_o, N_oi) in the definition, with units like m/Hz^{1/2}, N/Hz^{1/2}, W/Hz or V/Hz^{1/2}. For a white-noise signal over the sensor bandwidth, its power spectral density can be determined from the total noise power N_{oi,tot} (over the full bandwidth) using the equation N_{oi,PSD} = N_{oi,tot}/BW. Its amplitude spectral density is the square root of this value: N_{oi,ASD} = √(N_{oi,PSD}). Note that in signal processing the words energy and power are also used for quantities that do not have the unit watt (Energy (signal processing)).
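As a numeric illustration of the receiver-sensitivity definition above, the following short Python sketch evaluates the usual dB-domain form. It assumes room-temperature thermal noise (kT ≈ −174 dBm/Hz at 290 K); the function name and example values are illustrative rather than taken from this article.

import math

def receiver_sensitivity_dbm(bandwidth_hz: float, noise_figure_db: float,
                             required_snr_db: float) -> float:
    """Minimum input signal (dBm) for the required output SNR.

    Assumes a thermal noise floor of kT ~ -174 dBm/Hz at 290 K, so the
    mean input noise power is -174 + 10*log10(B) + NF, and the
    sensitivity is that noise power times the required SNR (a sum in dB).
    """
    noise_floor_dbm = -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db
    return noise_floor_dbm + required_snr_db

# Example: 1 MHz bandwidth, 6 dB noise figure, 10 dB required SNR
print(receiver_sensitivity_dbm(1e6, 6.0, 10.0))  # -> -98.0 dBm

With these assumed values the result is −98 dBm, and a 3 dB difference from −95 dBm corresponds to the factor of two in usable signal power mentioned above.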
In some instruments, like spectrum analyzers, an SNR_o of 1 at a specified bandwidth of 1 Hz is assumed by default when defining their sensitivity.[2] For instruments that measure power, which also includes photodetectors, this results in the sensitivity becoming equal to the noise-equivalent power, and for other instruments it becomes equal to the noise-equivalent input[9] NEI = N_{oi,ASD}/R. A lower value of the sensitivity corresponds to better performance (smaller signals can be detected), which seems contrary to the common use of the word sensitivity, where higher sensitivity corresponds to better performance.[6][10] It has therefore been argued that it is preferable to use detectivity, which is the reciprocal of the noise-equivalent input, as a metric for the performance of detectors:[9][11] D = R/N_oi.

As an example, consider a piezoresistive force sensor through which a constant current runs, such that it has a responsivity R = 1.0 V/N. The Johnson noise of the resistor generates a noise amplitude spectral density of N_{oi,ASD} = 10 nV/√Hz. For a specified SNR_o of 1, this results in a sensitivity and noise-equivalent input of S_{i,ASD} = NEI = 10 nN/√Hz and a detectivity of (10 nN/√Hz)^{-1}, such that an input signal of 10 nN generates the same output voltage as the noise does over a bandwidth of 1 Hz.

This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22. (in support of MIL-STD-188).
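The piezoresistive example above can be checked with a few lines of Python; this is just the article's arithmetic restated, with variable names chosen for readability.

# Values from the worked example above
R = 1.0          # responsivity, V/N
N_asd = 10e-9    # noise amplitude spectral density, V/sqrt(Hz)
SNR_o = 1.0      # specified output signal-to-noise ratio

sensitivity = N_asd / R * SNR_o   # input-referred NEI, N/sqrt(Hz)
detectivity = R / N_asd           # reciprocal of the noise-equivalent input

print(f"NEI = {sensitivity:.1e} N/sqrt(Hz)")     # 1.0e-08 = 10 nN/sqrt(Hz)
print(f"D   = {detectivity:.1e} (N/sqrt(Hz))^-1")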
https://en.wikipedia.org/wiki/Sensitivity_(electronics)
Datagram Transport Layer Security (DTLS) is a communications protocol providing security to datagram-based applications by allowing them to communicate in a way designed[1][2][3] to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees. The DTLS protocol preserves the semantics of the underlying datagram transport—the application does not suffer from the delays associated with stream protocols, but because it uses User Datagram Protocol (UDP) or Stream Control Transmission Protocol (SCTP), the application has to deal with packet reordering, loss of datagrams, and data larger than the size of a datagram network packet. Because DTLS uses UDP or SCTP rather than TCP, it avoids the TCP meltdown problem[4][5] when being used to create a VPN tunnel.

The following documents define DTLS: DTLS 1.0 is based on TLS 1.1, DTLS 1.2 is based on TLS 1.2, and DTLS 1.3 is based on TLS 1.3. There is no DTLS 1.1, because this version number was skipped in order to harmonize version numbers with TLS.[2] Like previous DTLS versions, DTLS 1.3 is intended to provide "equivalent security guarantees [to TLS 1.3] with the exception of order protection/non-replayability".[11]

In February 2013, two researchers from Royal Holloway, University of London discovered a timing attack[46] which allowed them to recover (parts of) the plaintext from a DTLS connection using the OpenSSL or GnuTLS implementation of DTLS when Cipher Block Chaining mode encryption was used.
https://en.wikipedia.org/wiki/DTLS
The Hewitt–Savage zero–one law is a theorem in probability theory, similar to Kolmogorov's zero–one law and the Borel–Cantelli lemma, that specifies that a certain type of event will either almost surely happen or almost surely not happen. It is sometimes known as the Savage–Hewitt law for symmetric events. It is named after Edwin Hewitt and Leonard Jimmie Savage.[1]

Let {X_n}_{n=1}^∞ be a sequence of independent and identically distributed random variables taking values in a set X. The Hewitt–Savage zero–one law says that any event whose occurrence or non-occurrence is determined by the values of these random variables, and whose occurrence or non-occurrence is unchanged by finite permutations of the indices, has probability either 0 or 1 (a "finite" permutation is one that leaves all but finitely many of the indices fixed).

Somewhat more abstractly, define the exchangeable sigma algebra or sigma algebra of symmetric events E to be the set of events (depending on the sequence of variables {X_n}_{n=1}^∞) which are invariant under finite permutations of the indices in the sequence {X_n}_{n=1}^∞. Then A ∈ E implies P(A) ∈ {0, 1}.

Since any finite permutation can be written as a product of transpositions, if we wish to check whether or not an event A is symmetric (lies in E), it is enough to check if its occurrence is unchanged by an arbitrary transposition (i, j), for i, j ∈ N.

Let the sequence {X_n}_{n=1}^∞ of independent and identically distributed random variables take values in [0, ∞). Then the event that the series ∑_{n=1}^∞ X_n converges (to a finite value) is a symmetric event in E, since its occurrence is unchanged under transpositions (for a finite re-ordering, the convergence or divergence of the series—and, indeed, the numerical value of the sum itself—is independent of the order in which we add up the terms). Thus, the series either converges almost surely or diverges almost surely. If we assume in addition that the common expected value E[X_n] > 0 (which essentially means that P(X_n = 0) < 1, because of the random variables' non-negativity), we may conclude that P(∑_{n=1}^∞ X_n = +∞) = 1, i.e. the series diverges almost surely. This is a particularly simple application of the Hewitt–Savage zero–one law.

In many situations, it can be easy to apply the Hewitt–Savage zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one. Continuing with the previous example, define S_N = ∑_{n=1}^N X_n, which is the position at step N of a random walk with the iid increments X_n. The event {S_N = 0 infinitely often} is invariant under finite permutations. Therefore, the zero–one law is applicable, and one infers that the probability of a random walk with real iid increments visiting the origin infinitely often is either one or zero. Visiting the origin infinitely often is a tail event with respect to the sequence (S_N), but the S_N are not independent, and therefore Kolmogorov's zero–one law is not directly applicable here.[2]
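The series-divergence example lends itself to a quick numerical illustration (an illustration, not a proof; the distribution, path counts and threshold below are arbitrary choices). The sketch draws iid non-negative variables with positive mean and observes that every sampled partial-sum path eventually exceeds any fixed threshold, consistent with P(∑ X_n = ∞) = 1.

import random

def fraction_crossing(n_paths=200, n_terms=5000, threshold=1e3):
    """Empirical check of the Hewitt-Savage example: for iid X_n >= 0
    with E[X_n] > 0, the series sum X_n diverges almost surely, so
    every sampled partial-sum path should cross any fixed threshold."""
    crossed = 0
    for _ in range(n_paths):
        s = 0.0
        for _ in range(n_terms):
            s += random.expovariate(1.0)  # Exp(1): non-negative, mean 1
            if s > threshold:
                crossed += 1
                break
    return crossed / n_paths

print(fraction_crossing())  # -> 1.0 in practice (partial sums grow like n)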
https://en.wikipedia.org/wiki/Hewitt%E2%80%93Savage_zero%E2%80%93one_law
Wi-Fi 6, or IEEE 802.11ax, is an IEEE standard for wireless networks (WLANs); "Wi-Fi 6" is the Wi-Fi Alliance's designation for it. It operates in the 2.4 GHz and 5 GHz bands,[4] with an extended version, Wi-Fi 6E, that adds the 6 GHz band.[5] It is an upgrade from Wi-Fi 5 (802.11ac), with improvements for better performance in crowded places. Wi-Fi 6 covers frequencies in license-exempt bands between 1 and 7.125 GHz, including the commonly used 2.4 GHz and 5 GHz bands, as well as the broader 6 GHz band.[6]

This standard aims to boost data speed (throughput-per-area[d]) in crowded places like offices and malls. Though the nominal data rate is only 37%[7] better than 802.11ac, the total network throughput increases by 300%,[8] making the standard more efficient and reducing latency by 75%.[9] The quadrupling of overall throughput is made possible by a higher spectral efficiency.

The main new feature of 802.11ax is OFDMA, which works similarly to cellular technology, applied to Wi-Fi.[7] This brings better spectrum use, improved power control to avoid interference, and enhancements like 1024-QAM, MIMO and MU-MIMO for faster speeds. There are also reliability improvements, such as lower power consumption through Target Wake Time, and newer security protocols such as WPA3.

The 802.11ax standard was approved on September 1, 2020, with Draft 8 getting 95% approval. Subsequently, on February 1, 2021, the standard received official endorsement from the IEEE Standards Board.[10]

In 802.11ac (802.11's previous amendment), multi-user MIMO (MU-MIMO) was introduced, which is a spatial multiplexing technique. MU-MIMO allows the access point to form beams towards each client while transmitting information simultaneously. By doing so, the interference between clients is reduced, and the overall throughput is increased, since multiple clients can receive data simultaneously. With 802.11ax, a similar multiplexing is introduced in the frequency domain: OFDMA. With OFDMA, multiple clients are assigned to different Resource Units in the available spectrum. By doing so, an 80 MHz channel can be split into multiple Resource Units, so that multiple clients receive different types of data over the same spectrum, simultaneously.

To support OFDMA, 802.11ax needs four times as many subcarriers as 802.11ac. Specifically, for 20, 40, 80, and 160 MHz channels, the 802.11ac standard has, respectively, 64, 128, 256 and 512 subcarriers, while the 802.11ax standard has 256, 512, 1024, and 2048 subcarriers. Since the available bandwidths have not changed and the number of subcarriers increases by a factor of four, the subcarrier spacing is reduced by the same factor. This introduces OFDM symbols that are four times longer: in 802.11ac, an OFDM symbol takes 3.2 microseconds to transmit; in 802.11ax, it takes 12.8 microseconds (both without guard intervals). A short calculation, shown after this section, verifies these figures.

The 802.11ax amendment brings several key improvements over 802.11ac. 802.11ax addresses frequency bands between 1 GHz and 6 GHz.[11] Therefore, unlike 802.11ac, 802.11ax also operates in the unlicensed 2.4 GHz band. Wi-Fi 6E introduces operation at frequencies at or near 6 GHz, with superwide channels that are 160 MHz wide;[12] the frequency ranges these channels can occupy, and the number of such channels, depend on the country in which the Wi-Fi 6 network operates.[13] To meet the goal of supporting dense 802.11 deployments, the following features have been approved.
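The subcarrier figures quoted above can be recomputed directly; this small Python sketch derives the subcarrier spacing and the (guard-interval-free) OFDM symbol duration from the channel widths and subcarrier counts given in the text.

channels_mhz = [20, 40, 80, 160]
subcarriers = {"802.11ac": [64, 128, 256, 512],
               "802.11ax": [256, 512, 1024, 2048]}

for std, counts in subcarriers.items():
    for bw, n in zip(channels_mhz, counts):
        spacing_khz = bw * 1000 / n      # subcarrier spacing in kHz
        symbol_us = 1000 / spacing_khz   # symbol time in us, no guard interval
        print(f"{std} {bw} MHz: {spacing_khz:.3f} kHz spacing, "
              f"{symbol_us:.1f} us symbol")

# Output: every 802.11ac row gives 312.5 kHz -> 3.2 us, and every
# 802.11ax row gives 78.125 kHz -> 12.8 us, matching the text.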
https://en.wikipedia.org/wiki/IEEE_802.11ax#Rate_set
Nahuatl has been in intense contact with Spanish since the Spanish conquest of 1521. Since that time, there have been a large number of Spanish loanwords introduced to the language, loans which span from nouns and verbs to adjectives and particles. Syntactical constructions have also been borrowed into Nahuatl from Spanish, through which the latter language has exerted typological pressure on its form, such that Nahuatl and Spanish are exhibiting syntactic and typological convergence. Today, hardly any Nahuatl monolinguals remain, and the language has undergone extreme shift to Spanish, such that some consider it to be on the way to extinction.

The Nahuatl and Spanish languages have coexisted in stable contact for over 500 years in central Mexico. This long, well-documented period of contact provides some of the best linguistic evidence for contact-induced grammatical change. That is to say, Spanish seems to have exerted a profound influence on the Nahuatl language, but despite the extreme duration of their contact, Nahuatl has only recently begun to show signs of language shift. This shift is progressing at a startling rate. Though Nahuatl still has over a million speakers, it is considered by some linguists to be endangered and on the way to extinction.[1] As with regional languages the world over, Nahuatl finds itself being replaced by a 'world' language, Spanish, as other small linguistic communities have shifted to languages like English and Chinese. The world's loss in linguistic diversity can be tied to its changing economic and political conditions, as the model of industrial capitalism under a culturally homogenizing nation state spreads throughout the world and culture becomes more and more global rather than regional. However, we do not always know exactly why some local or 'traditional' languages are clung to and preserved while others vanish much more quickly.[1]

Contact in earnest between the two languages began along with the beginning of the Spanish conquest of Mexico in 1519. Prior to that, Nahuatl existed as the dominant language of much of central, southern, and western Mexico, the language of the dominant Aztec culture and Mexica ethnic group. Though the Spanish tried to eradicate much of Mexica culture after their defeat of the Mexica Aztec in their capital of Tenochtitlan in 1521, Aztec culture and the Nahuatl language were spread among a variety of ethnic groups in Mexico, some of whom, like the Tlaxcala, were instrumental allies of the Spanish in their defeat of the Aztec empire. Since a large part of the surviving indigenous population, whom the Spanish hoped to Christianize and assimilate, were part of the now fragmented Aztec culture and thus speakers of Nahuatl, the Spanish missionaries recognized that they would continue to need the help of their Nahuatl-speaking indigenous allies, and allowed them some relative autonomy in exchange for their help in conquering and Christianizing the remainder of the territory, in parts of which indigenous populations remained hostile throughout the 16th century. In this way, friendly Nahuatl-speaking communities were valuable in their role as intermediaries between the Spanish and other indigenous groups.
Though the Spanish issued many decrees throughout the centuries discouraging the use of native tongues, such decrees were difficult to enforce, and often counter-productive to the goals of the missionary and military forces actually interacting with the indigenous populations.[2] That is, early Spanish Franciscan missionaries believed mutual comprehension between converter and convertee to be essential to a successful Christianization. Many such missionaries learned Nahuatl and developed a system of writing for the language with the Latin alphabet, enabling them to transcribe many works of classical Nahuatl poetry and mythology, preserving the older, pre-contact varieties of the language. Thanks to the efforts of these early missionaries, there are documented sources of the Classical Nahuatl language dating back to the 1540s, which have enabled a systematic investigation of the changes it has undergone over the centuries under the influence of Spanish. Learning Nahuatl also enabled missionaries to teach the Christian gospel to American Indians using evangelical materials prepared in the indigenous language and using indigenous concepts, a technique which certain sects, particularly the Jesuits, believed often met with better results.[3]

Thus, though Nahuatl usage was discouraged officially, its use was actually preserved and encouraged by the Spanish in religious, scholarly, and civil spheres into the late 18th century, until the Spanish monarchy began to take a more hard-line approach towards assimilating indigenous populations into the state. By the time of the 1895 census, there were still 659,865 Mexican citizens who reported themselves to be monolingual Nahuatl speakers, a group representing 32.1% of the total indigenous-speaking population, but over the next century the number of monolingual Nahuatl speakers would decline. By 1930 there were reportedly 355,295 monolingual speakers, and as of the 2000 census there remained only approximately 220,000 monolinguals among the 1.5 million Nahuatl speakers, the vast majority of them middle-aged or elderly.[2] At this point, Spanish was well integrated into most Mexicano communities, and language shift was rapidly occurring among the younger generations.

As there came to be greater and greater degrees of interaction between Indian and Hispanic communities, and with it greater bilingualism and language contact, the Mexicano language changed typologically to converge with Spanish and to ease the incorporation of Spanish material. Through the first few years of language contact, most Spanish influence on Nahuatl consisted in simple lexical borrowings of nouns related to the emergent material exchange between the two cultures. However, where the influence of Nahuatl on Mexican Spanish largely stopped at the level of basic lexical borrowings, Nahuatl continued on to borrow more grammatical words: verbs, adjectives, adverbs, prepositions, connective particles, and discourse markers. Incorporation of such borrowed materials into the native language forced some grammatical alterations to accommodate structures that would not be possible within Classical Nahuatl.[4] The net effect of such alterations was that "Nahuatl by 1700 or 1720 had become capable in principle of absorbing any Spanish word or construction.
The rest has been done by continued, ever growing cultural pressures, bringing in more words and phrases as the two bodies of speakers became more intertwined and bilingualism increased."[3]

To be more specific, there has been convergence in word order, level of agglutination, and the incorporation of Spanish grammatical particles and discourse markers in Nahuatl speech. For instance, whereas Nahuatl had an adjective-noun word order, Mexicano follows Spanish in its noun-adjective word order.[4] This may be due to the borrowing of phrases from Spanish incorporating the Spanish particle de, in phrases of the form NOUN + de + ADJ, e.g. aretes de oro, meaning 'gold earrings'.[5] Furthermore, the incorporation of de in Nahuatl may have also influenced a parallel shift from modifier-head to head-modifier in possessive constructions. That is, Classical Nahuatl had a possessor-possessum/noun-genitive order, but infiltration of the Spanish particle de may have also driven the shift to possessum-possessor/genitive-noun order. Whereas previously possessive noun clauses in Nahuatl were markedly introduced by an inflected possessum plus an adjunctor in and a possessor, or unmarkedly simply by a possessor and inflected possessum (that is, the particle in was inserted in front of the displaced element of a marked word order), Nahuatl speakers have analyzed the loan de as functioning in a parallel fashion to native in, and used it to create unmarked possessum/possessor constructions in the manner of Spanish, without genitive inflections. So far this change has only spread to constructions with inanimate possessors, but it may extend further, to compete with and displace Classical varieties and complete the syntactic convergence.[6] Parallel incorporation of Spanish prepositions into the previously postpositional language, along with Mexican Spanish discourse markers like pero, este, bueno, and pues, has resulted in a modern Mexicano language that can sound strikingly Spanish in terms of its sentence frames, rhythm, and vocabulary.[7] Finally, some agglutinative tendencies of Nahuatl have faded in contemporary dialects. For instance, in Nahuatl there was a tendency to incorporate nouns into verbs as sorts of adverbial modifiers, a pattern which is losing productivity.[8] One could "tortilla-make", for instance. Verbs generally were accompanied by a wide variety of objective, instrumental, tense, and aspect markers. One commonly would agglutinatively indicate directional purposivity, for instance, but such constructions are now more commonly made with a periphrastic Spanish calque of the form GO + (bare) INF (a la ir + INF). Such disincorporation of verbal modifiers into periphrastic expressions on analogy with Spanish forms indicates a shift towards a more analytic style characteristic of Hispanic speech.
https://en.wikipedia.org/wiki/Nahuatl-Spanish_Contact
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order.

The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and √n. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of October 2024, the largest known prime number is a Mersenne prime with 41,024,320 decimal digits.[1][2]

There are infinitely many primes, as demonstrated by Euclid around 300 BC. No known simple formula separates prime numbers from composite numbers. However, the distribution of primes within the natural numbers in the large can be statistically modelled. The first result in that direction is the prime number theorem, proven at the end of the 19th century, which says roughly that the probability of a randomly chosen large number being prime is inversely proportional to its number of digits, that is, to its logarithm.

Several historical questions regarding prime numbers are still unsolved. These include Goldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two primes, and the twin prime conjecture, that there are infinitely many pairs of primes that differ by two. Such questions spurred the development of various branches of number theory, focusing on analytic or algebraic aspects of numbers. Primes are used in several routines in information technology, such as public-key cryptography, which relies on the difficulty of factoring large numbers into their prime factors. In abstract algebra, objects that behave in a generalized way like prime numbers include prime elements and prime ideals.

A natural number (1, 2, 3, 4, 5, 6, etc.) is called a prime number (or a prime) if it is greater than 1 and cannot be written as the product of two smaller natural numbers. The numbers greater than 1 that are not prime are called composite numbers.[3] In other words, n is prime if n items cannot be divided up into smaller equal-size groups of more than one item,[4] or if it is not possible to arrange n dots into a rectangular grid that is more than one dot wide and more than one dot high.[5] For example, among the numbers 1 through 6, the numbers 2, 3, and 5 are the prime numbers,[6] as there are no other numbers that divide them evenly (without a remainder). 1 is not prime, as it is specifically excluded in the definition. 4 = 2 × 2 and 6 = 2 × 3 are both composite.

The divisors of a natural number n are the natural numbers that divide n evenly. Every natural number has both 1 and itself as a divisor.
If it has any other divisor, it cannot be prime. This leads to an equivalent definition of prime numbers: they are the numbers with exactly two positive divisors. Those two are 1 and the number itself. As 1 has only one divisor, itself, it is not prime by this definition.[7] Yet another way to express the same thing is that a number n is prime if it is greater than one and if none of the numbers 2, 3, …, n − 1 divides n evenly.[8]

The first 25 prime numbers (all the prime numbers less than 100) are:[9]

No even number n greater than 2 is prime because any such number can be expressed as the product 2 × n/2. Therefore, every prime number other than 2 is an odd number, and is called an odd prime.[10] Similarly, when written in the usual decimal system, all prime numbers larger than 5 end in 1, 3, 7, or 9. The numbers that end with other digits are all composite: decimal numbers that end in 0, 2, 4, 6, or 8 are even, and decimal numbers that end in 0 or 5 are divisible by 5.[11]

The set of all primes is sometimes denoted by P (a boldface capital P)[12] or by ℙ (a blackboard bold capital P).[13]

The Rhind Mathematical Papyrus, from around 1550 BC, has Egyptian fraction expansions of different forms for prime and composite numbers.[14] However, the earliest surviving records of the study of prime numbers come from the ancient Greek mathematicians, who called them prōtos arithmòs (πρῶτος ἀριθμὸς). Euclid's Elements (c. 300 BC) proves the infinitude of primes and the fundamental theorem of arithmetic, and shows how to construct a perfect number from a Mersenne prime.[15] Another Greek invention, the Sieve of Eratosthenes, is still used to construct lists of primes.[16][17]

Around 1000 AD, the Islamic mathematician Ibn al-Haytham (Alhazen) found Wilson's theorem, characterizing the prime numbers as the numbers n that evenly divide (n − 1)! + 1. He also conjectured that all even perfect numbers come from Euclid's construction using Mersenne primes, but was unable to prove it.[18] Another Islamic mathematician, Ibn al-Banna' al-Marrakushi, observed that the sieve of Eratosthenes can be sped up by considering only the prime divisors up to the square root of the upper limit.[17] Fibonacci took the innovations from Islamic mathematics to Europe.
Fibonacci took the innovations from Islamic mathematics to Europe. His book Liber Abaci (1202) was the first to describe trial division for testing primality, again using divisors only up to the square root.[17]

In 1640 Pierre de Fermat stated (without proof) Fermat's little theorem (later proved by Leibniz and Euler).[19] Fermat also investigated the primality of the Fermat numbers 2^(2^n) + 1,[20] and Marin Mersenne studied the Mersenne primes, prime numbers of the form 2^p − 1 with p itself a prime.[21] Christian Goldbach formulated Goldbach's conjecture, that every even number is the sum of two primes, in a 1742 letter to Euler.[22] Euler proved Alhazen's conjecture (now the Euclid–Euler theorem) that all even perfect numbers can be constructed from Mersenne primes.[15] He introduced methods from mathematical analysis to this area in his proofs of the infinitude of the primes and the divergence of the sum of the reciprocals of the primes, 1/2 + 1/3 + 1/5 + 1/7 + 1/11 + ⋯.[23] At the start of the 19th century, Legendre and Gauss conjectured that as x tends to infinity, the number of primes up to x is asymptotic to x/log x, where log x is the natural logarithm of x. A weaker consequence of this high density of primes was Bertrand's postulate, that for every n > 1 there is a prime between n and 2n, proved in 1852 by Pafnuty Chebyshev.[24] Ideas of Bernhard Riemann in his 1859 paper on the zeta-function sketched an outline for proving the conjecture of Legendre and Gauss. Although the closely related Riemann hypothesis remains unproven, Riemann's outline was completed in 1896 by Hadamard and de la Vallée Poussin, and the result is now known as the prime number theorem.[25] Another important 19th-century result was Dirichlet's theorem on arithmetic progressions, that certain arithmetic progressions contain infinitely many primes.[26]

Many mathematicians have worked on primality tests for numbers larger than those where trial division is practicably applicable. Methods that are restricted to specific number forms include Pépin's test for Fermat numbers (1877),[27] Proth's theorem (c. 1878),[28] the Lucas–Lehmer primality test (originated 1856), and the generalized Lucas primality test.[17]

Since 1951 all the largest known primes have been found using these tests on computers.[a] The search for ever larger primes has generated interest outside mathematical circles, through the Great Internet Mersenne Prime Search and other distributed computing projects.[9][30] The idea that prime numbers had few applications outside of pure mathematics[b] was shattered in the 1970s when public-key cryptography and the RSA cryptosystem were invented, using prime numbers as their basis.[33]

The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form.[16][34][35] The mathematical theory of prime numbers also moved forward with the Green–Tao theorem (2004), that there are arbitrarily long arithmetic progressions of prime numbers, and Yitang Zhang's 2013 proof that there exist infinitely many prime gaps of bounded size.[36]

Most early Greeks did not even consider 1 to be a number,[37][38] so they could not consider its primality.
A few scholars in the Greek and later Roman tradition, including Nicomachus, Iamblichus, Boethius, and Cassiodorus, also considered the prime numbers to be a subdivision of the odd numbers, so they did not consider 2 to be prime either. However, Euclid and a majority of the other Greek mathematicians considered 2 as prime. The medieval Islamic mathematicians largely followed the Greeks in viewing 1 as not being a number.[37] By the Middle Ages and Renaissance, mathematicians began treating 1 as a number, and by the 17th century some of them included it as the first prime number.[39] In the mid-18th century, Christian Goldbach listed 1 as prime in his correspondence with Leonhard Euler;[40] however, Euler himself did not consider 1 to be prime.[41] Many 19th-century mathematicians still considered 1 to be prime,[42] and Derrick Norman Lehmer included 1 in his list of primes less than ten million published in 1914.[43] Lists of primes that included 1 continued to be published as recently as 1956.[44][45] However, by the early 20th century mathematicians began to agree that 1 should not be listed as prime, but rather in its own special category as a "unit".[42]

If 1 were to be considered a prime, many statements involving primes would need to be awkwardly reworded. For example, the fundamental theorem of arithmetic would need to be rephrased in terms of factorizations into primes greater than 1, because every number would have multiple factorizations with any number of copies of 1.[42] Similarly, the sieve of Eratosthenes would not work correctly if it handled 1 as a prime, because it would eliminate all multiples of 1 (that is, all other numbers) and output only the single number 1.[45] Some other more technical properties of prime numbers also do not hold for the number 1: for instance, the formulas for Euler's totient function or for the sum of divisors function are different for prime numbers than they are for 1.[46]

Writing a number as a product of prime numbers is called a prime factorization of the number. For example, 50 = 2 × 5 × 5 = 2 × 5². The terms in the product are called prime factors. The same prime factor may occur more than once; this example has two copies of the prime factor 5. When a prime occurs multiple times, exponentiation can be used to group together multiple copies of the same prime number: for example, in the second way of writing the product above, 5² denotes the square or second power of 5.

The central importance of prime numbers to number theory and mathematics in general stems from the fundamental theorem of arithmetic.[47] This theorem states that every integer larger than 1 can be written as a product of one or more primes. More strongly, this product is unique in the sense that any two prime factorizations of the same number will have the same numbers of copies of the same primes, although their ordering may differ.[48] So, although there are many different ways of finding a factorization using an integer factorization algorithm, they all must produce the same result. Primes can thus be considered the "basic building blocks" of the natural numbers.[49]
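A prime factorization with grouped exponents can be computed by repeated division. The following is a minimal Python sketch (the helper name is mine):

```python
from collections import Counter

def prime_factorization(n: int) -> Counter:
    """Return the multiset of prime factors of n, e.g. 50 -> {2: 1, 5: 2}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each prime factor completely
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors[n] += 1
    return factors

print(prime_factorization(50))   # Counter({5: 2, 2: 1}), i.e. 50 = 2 × 5²
```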
Some proofs of the uniqueness of prime factorizations are based on Euclid's lemma: if p is a prime number and p divides a product ab of integers a and b, then p divides a or p divides b (or both).[50] Conversely, if a number p has the property that when it divides a product it always divides at least one factor of the product, then p must be prime.[51]

There are infinitely many prime numbers. Another way of saying this is that the sequence of prime numbers never ends. This statement is referred to as Euclid's theorem in honor of the ancient Greek mathematician Euclid, since the first known proof for this statement is attributed to him. Many more proofs of the infinitude of primes are known, including an analytical proof by Euler, Goldbach's proof based on Fermat numbers,[52] Furstenberg's proof using general topology,[53] and Kummer's elegant proof.[54]

Euclid's proof[55] shows that every finite list of primes is incomplete. The key idea is to multiply together the primes in any given list and add 1. If the list consists of the primes p₁, p₂, …, pₙ, this gives the number N = p₁ · p₂ ⋯ pₙ + 1. By the fundamental theorem, N has a prime factorization with one or more prime factors. N is evenly divisible by each of these factors, but N has a remainder of one when divided by any of the prime numbers in the given list, so none of the prime factors of N can be in the given list. Because there is no finite list of all the primes, there must be infinitely many primes. The numbers formed by adding one to the products of the smallest primes are called Euclid numbers.[56] The first five of them are prime, but the sixth, 2 · 3 · 5 · 7 · 11 · 13 + 1 = 30031 = 59 × 509, is a composite number.

There is no known efficient formula for primes. For example, there is no non-constant polynomial, even in several variables, that takes only prime values.[57] However, there are numerous expressions that do encode all primes, or only primes. One possible formula is based on Wilson's theorem and generates the number 2 many times and all other primes exactly once.[58] There is also a set of Diophantine equations in nine variables and one parameter with the following property: the parameter is prime if and only if the resulting system of equations has a solution over the natural numbers. This can be used to obtain a single formula with the property that all its positive values are prime.[57]

Other examples of prime-generating formulas come from Mills' theorem and a theorem of Wright. These assert that there are real constants A > 1 and μ such that the values ⌊A^(3^n)⌋ and ⌊2^(2^(⋯^(2^μ)))⌋ are prime for any natural number n in the first formula, and any number of exponents in the second formula.[59] Here ⌊·⌋ represents the floor function, the largest integer less than or equal to the number in question. However, these are not useful for generating primes, as the primes must be generated first in order to compute the values of A or μ.[57]
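Returning to Euclid's construction described above, the Euclid numbers are easy to generate and test. A minimal Python sketch, assuming only the definitions in this section:

```python
from math import prod

def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

first_primes = [2, 3, 5, 7, 11, 13]
for k in range(1, len(first_primes) + 1):
    euclid = prod(first_primes[:k]) + 1   # product of the first k primes, plus one
    print(euclid, is_prime(euclid))
# 3, 7, 31, 211 and 2311 are prime; 30031 = 59 × 509 is composite
```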
Many conjectures revolving around primes have been posed. Often having an elementary formulation, many of these conjectures have withstood proof for decades: all four of Landau's problems from 1912 are still unsolved.[60] One of them is Goldbach's conjecture, which asserts that every even integer n greater than 2 can be written as a sum of two primes.[61] As of 2014, this conjecture has been verified for all numbers up to n = 4 · 10^18.[62] Weaker statements than this have been proven; for example, Vinogradov's theorem says that every sufficiently large odd integer can be written as a sum of three primes.[63] Chen's theorem says that every sufficiently large even number can be expressed as the sum of a prime and a semiprime (the product of two primes).[64] Also, any even integer greater than 10 can be written as the sum of six primes.[65] The branch of number theory studying such questions is called additive number theory.[66]

Another type of problem concerns prime gaps, the differences between consecutive primes. The existence of arbitrarily large prime gaps can be seen by noting that the sequence n! + 2, n! + 3, …, n! + n consists of n − 1 composite numbers, for any natural number n.[67] However, large prime gaps occur much earlier than this argument shows.[68] For example, the first prime gap of length 8 is between the primes 89 and 97,[69] much smaller than 8! = 40320. It is conjectured that there are infinitely many twin primes, pairs of primes with difference 2; this is the twin prime conjecture. Polignac's conjecture states more generally that for every positive integer k, there are infinitely many pairs of consecutive primes that differ by 2k.[70] Andrica's conjecture,[70] Brocard's conjecture,[71] Legendre's conjecture,[72] and Oppermann's conjecture[71] all suggest that the largest gaps between primes from 1 to n should be at most approximately √n, a result that is known to follow from the Riemann hypothesis, while the much stronger Cramér conjecture sets the largest gap size at O((log n)²).[70] Prime gaps can be generalized to prime k-tuples, patterns in the differences among more than two prime numbers. Their infinitude and density are the subject of the first Hardy–Littlewood conjecture, which can be motivated by the heuristic that the prime numbers behave similarly to a random sequence of numbers with density given by the prime number theorem.[73]
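Goldbach's conjecture is easy to test empirically for small numbers. A minimal Python sketch (function names are mine):

```python
def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n: int):
    """Return one pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 25, 2):
    print(n, goldbach_pair(n))   # a pair is found for every even n tried here
```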
Analytic number theory studies number theory through the lens of continuous functions, limits, infinite series, and the related mathematics of the infinite and infinitesimal. This area of study began with Leonhard Euler and his first major result, the solution to the Basel problem. The problem asked for the value of the infinite sum 1 + 1/4 + 1/9 + 1/16 + ⋯, which today can be recognized as the value ζ(2) of the Riemann zeta function. This function is closely connected to the prime numbers and to one of the most significant unsolved problems in mathematics, the Riemann hypothesis. Euler showed that ζ(2) = π²/6.[74] The reciprocal of this number, 6/π², is the limiting probability that two random numbers selected uniformly from a large range are relatively prime (have no factors in common).[75]

The distribution of primes in the large, such as the question of how many primes are smaller than a given, large threshold, is described by the prime number theorem, but no efficient formula for the n-th prime is known. Dirichlet's theorem on arithmetic progressions, in its basic form, asserts that linear polynomials a + bn with relatively prime integers a and b take infinitely many prime values. Stronger forms of the theorem state that the sum of the reciprocals of these prime values diverges, and that different linear polynomials with the same b have approximately the same proportions of primes. Although conjectures have been formulated about the proportions of primes in higher-degree polynomials, they remain unproven, and it is unknown whether there exists a quadratic polynomial that (for integer arguments) is prime infinitely often.

Euler's proof that there are infinitely many primes considers the sums of reciprocals of primes, 1/2 + 1/3 + 1/5 + ⋯ + 1/p. Euler showed that, for any arbitrary real number x, there exists a prime p for which this sum is greater than x.[76] This shows that there are infinitely many primes, because if there were finitely many primes the sum would reach its maximum value at the biggest prime rather than growing past every x. The growth rate of this sum is described more precisely by Mertens' second theorem.[77] For comparison, the sum 1/1² + 1/2² + 1/3² + ⋯ + 1/n² does not grow to infinity as n goes to infinity (see the Basel problem). In this sense, prime numbers occur more often than squares of natural numbers, although both sets are infinite.[78] Brun's theorem states that the sum of the reciprocals of twin primes, (1/3 + 1/5) + (1/5 + 1/7) + (1/11 + 1/13) + ⋯, is finite. Because of Brun's theorem, it is not possible to use Euler's method to solve the twin prime conjecture, that there exist infinitely many twin primes.[78]
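The contrast between the two reciprocal sums can be observed numerically. A minimal Python sketch (the cutoff 10,000 is an arbitrary illustration):

```python
def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

limit = 10_000
# Grows without bound, though only about as fast as log log of the limit.
print(sum(1 / p for p in range(2, limit) if is_prime(p)))
# Converges: approaches pi²/6 ≈ 1.6449 as the limit grows.
print(sum(1 / (k * k) for k in range(1, limit)))
```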
The prime-counting function π(n) is defined as the number of primes not greater than n.[79] For example, π(11) = 5, since there are five primes less than or equal to 11. Methods such as the Meissel–Lehmer algorithm can compute exact values of π(n) faster than it would be possible to list each prime up to n.[80] The prime number theorem states that π(n) is asymptotic to n/log n, which is denoted as π(n) ~ n/log n and means that the ratio of π(n) to the right-hand fraction approaches 1 as n grows to infinity.[81] This implies that the likelihood that a randomly chosen number less than n is prime is (approximately) inversely proportional to the number of digits in n.[82] It also implies that the n-th prime number is proportional to n log n[83] and therefore that the average size of a prime gap is proportional to log n.[68] A more accurate estimate for π(n) is given by the offset logarithmic integral, Li(n) = ∫₂ⁿ dt/log t.[81]

An arithmetic progression is a finite or infinite sequence of numbers such that consecutive numbers in the sequence all have the same difference.[84] This difference is called the modulus of the progression.[85] For example, 3, 12, 21, 30, 39, … is an infinite arithmetic progression with modulus 9. In an arithmetic progression, all the numbers have the same remainder when divided by the modulus; in this example, the remainder is 3. Because both the modulus 9 and the remainder 3 are multiples of 3, so is every element in the sequence. Therefore, this progression contains only one prime number, 3 itself. In general, the infinite progression a, a + q, a + 2q, a + 3q, … can have more than one prime only when its remainder a and modulus q are relatively prime. If they are relatively prime, Dirichlet's theorem on arithmetic progressions asserts that the progression contains infinitely many primes.[86]

The Green–Tao theorem shows that there are arbitrarily long finite arithmetic progressions consisting only of primes.[36][87]

Euler noted that the function n² − n + 41 yields prime numbers for 1 ≤ n ≤ 40, although composite numbers appear among its later values.[88][89] The search for an explanation for this phenomenon led to the deep algebraic number theory of Heegner numbers and the class number problem.[90] The Hardy–Littlewood conjecture F predicts the density of primes among the values of quadratic polynomials with integer coefficients in terms of the logarithmic integral and the polynomial coefficients. No quadratic polynomial has been proven to take infinitely many prime values.[91]

The Ulam spiral[92] arranges the natural numbers in a two-dimensional grid, spiraling in concentric squares surrounding the origin, with the prime numbers highlighted. Visually, the primes appear to cluster on certain diagonals and not others, suggesting that some quadratic polynomials take prime values more often than others.[91]
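Both the prime number theorem and Euler's polynomial (reconstructed here as n² − n + 41, matching the range 1 ≤ n ≤ 40 quoted above) can be checked numerically. A minimal Python sketch:

```python
from math import log

def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# The ratio pi(n) / (n / log n) slowly approaches 1, as the theorem asserts.
for n in (10**2, 10**3, 10**4, 10**5):
    pi_n = sum(1 for k in range(2, n + 1) if is_prime(k))
    print(n, pi_n, round(pi_n / (n / log(n)), 3))

# Euler's polynomial is prime for n = 1, ..., 40 but fails at n = 41:
print(all(is_prime(n * n - n + 41) for n in range(1, 41)))   # True
print(is_prime(41 * 41 - 41 + 41))                           # False: 1681 = 41²
```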
One of the most famous unsolved questions in mathematics, dating from 1859, and one of the Millennium Prize Problems, is the Riemann hypothesis, which asks where the zeros of the Riemann zeta function ζ(s) are located. This function is an analytic function on the complex numbers. For complex numbers s with real part greater than one, it equals both an infinite sum over all integers and an infinite product over the prime numbers: ζ(s) = 1/1^s + 1/2^s + 1/3^s + ⋯ = Π_p 1/(1 − p^(−s)), where the product ranges over all primes p. This equality between a sum and a product, discovered by Euler, is called an Euler product.[93] The Euler product can be derived from the fundamental theorem of arithmetic, and shows the close connection between the zeta function and the prime numbers.[94] It leads to another proof that there are infinitely many primes: if there were only finitely many, then the sum-product equality would also be valid at s = 1, but the sum would diverge (it is the harmonic series 1 + 1/2 + 1/3 + ⋯) while the product would be finite, a contradiction.[95]

The Riemann hypothesis states that the zeros of the zeta-function are all either negative even numbers, or complex numbers with real part equal to 1/2.[96] The original proof of the prime number theorem was based on a weak form of this hypothesis, that there are no zeros with real part equal to 1,[97][98] although other more elementary proofs have been found.[99] The prime-counting function can be expressed by Riemann's explicit formula as a sum in which each term comes from one of the zeros of the zeta function; the main term of this sum is the logarithmic integral, and the remaining terms cause the sum to fluctuate above and below the main term.[100] In this sense, the zeros control how regularly the prime numbers are distributed. If the Riemann hypothesis is true, these fluctuations will be small, and the asymptotic distribution of primes given by the prime number theorem will also hold over much shorter intervals (of length about the square root of x for intervals near a number x).[98]

Modular arithmetic modifies usual arithmetic by only using the numbers {0, 1, 2, …, n − 1}, for a natural number n called the modulus. Any other natural number can be mapped into this system by replacing it by its remainder after division by n.[101] Modular sums, differences and products are calculated by performing the same replacement by the remainder on the result of the usual sum, difference, or product of integers.[102] Equality of integers corresponds to congruence in modular arithmetic: x and y are congruent (written x ≡ y mod n) when they have the same remainder after division by n.[103] However, in this system of numbers, division by all nonzero numbers is possible if and only if the modulus is prime. For instance, with the prime number 7 as modulus, division by 3 is possible: 2/3 ≡ 3 mod 7, because clearing denominators by multiplying both sides by 3 gives the valid formula 2 ≡ 9 mod 7. However, with the composite modulus 6, division by 3 is impossible: there is no valid solution to 2/3 ≡ x mod 6, because clearing denominators by multiplying by 3 causes the left-hand side to become 2 while the right-hand side becomes either 0 or 3. In the terminology of abstract algebra, the ability to perform division means that modular arithmetic modulo a prime number forms a field or, more specifically, a finite field, while other moduli only give a ring but not a field.[104]
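The 2/3 ≡ 3 mod 7 example can be reproduced with Python's built-in modular inverse, pow(a, -1, n), available in Python 3.8 and later (a minimal sketch):

```python
# Division modulo a prime is always possible because every nonzero element
# has a modular inverse; pow raises ValueError when no inverse exists.
print(pow(3, -1, 7))              # 5, since 3 × 5 = 15 ≡ 1 (mod 7)
print((2 * pow(3, -1, 7)) % 7)    # 2/3 ≡ 3 (mod 7), as in the text

try:
    pow(3, -1, 6)                 # 3 and 6 share the factor 3: no inverse
except ValueError as err:
    print(err)
```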
Several theorems about primes can be formulated using modular arithmetic. For instance, Fermat's little theorem states that if a ≢ 0 (mod p), then a^(p−1) ≡ 1 (mod p).[105] Summing this over all choices of a gives the equation 1^(p−1) + 2^(p−1) + ⋯ + (p−1)^(p−1) ≡ −1 (mod p), valid whenever p is prime. Giuga's conjecture says that this equation is also a sufficient condition for p to be prime.[106] Wilson's theorem says that an integer p > 1 is prime if and only if the factorial (p − 1)! is congruent to −1 mod p. For a composite number n = r · s this cannot hold, since one of its factors divides both n and (n − 1)!, and so (n − 1)! ≡ −1 (mod n) is impossible.[107]

The p-adic order ν_p(n) of an integer n is the number of copies of p in the prime factorization of n. The same concept can be extended from integers to rational numbers by defining the p-adic order of a fraction m/n to be ν_p(m) − ν_p(n). The p-adic absolute value |q|_p of any rational number q is then defined as |q|_p = p^(−ν_p(q)). Multiplying an integer by its p-adic absolute value cancels out the factors of p in its factorization, leaving only the other primes. Just as the distance between two real numbers can be measured by the absolute value of their distance, the distance between two rational numbers can be measured by their p-adic distance, the p-adic absolute value of their difference. For this definition of distance, two numbers are close together (they have a small distance) when their difference is divisible by a high power of p. In the same way that the real numbers can be formed from the rational numbers and their distances, by adding extra limiting values to form a complete field, the rational numbers with the p-adic distance can be extended to a different complete field, the p-adic numbers.[108][109]

This picture of an order, absolute value, and complete field derived from them can be generalized to algebraic number fields and their valuations (certain mappings from the multiplicative group of the field to a totally ordered additive group, also called orders), absolute values (certain multiplicative mappings from the field to the real numbers, also called norms),[108] and places (extensions to complete fields in which the given field is a dense set, also called completions).[110] The extension from the rational numbers to the real numbers, for instance, is a place in which the distance between numbers is the usual absolute value of their difference. The corresponding mapping to an additive group would be the logarithm of the absolute value, although this does not meet all the requirements of a valuation.
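Both Wilson's theorem and the p-adic order defined above are straightforward to compute for small inputs. A minimal Python sketch (names are mine; Wilson's criterion is far too slow for practical primality testing):

```python
from math import factorial

def wilson_is_prime(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime exactly when (n-1)! ≡ -1 (mod n)."""
    return n > 1 and factorial(n - 1) % n == n - 1

print([n for n in range(2, 20) if wilson_is_prime(n)])   # 2, 3, 5, 7, 11, 13, 17, 19

def padic_order(n: int, p: int) -> int:
    """nu_p(n): the number of copies of the prime p in the factorization of n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

print(padic_order(48, 2))           # 4, since 48 = 2^4 × 3
print(2 ** -padic_order(48, 2))     # the 2-adic absolute value |48|_2 = 1/16
```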
According to Ostrowski's theorem, up to a natural notion of equivalence, the real numbers and p-adic numbers, with their orders and absolute values, are the only valuations, absolute values, and places on the rational numbers.[108] The local–global principle allows certain problems over the rational numbers to be solved by piecing together solutions from each of their places, again underlining the importance of primes to number theory.[111]

A commutative ring is an algebraic structure where addition, subtraction and multiplication are defined. The integers are a ring, and the prime numbers in the integers have been generalized to rings in two different ways, prime elements and irreducible elements. An element p of a ring R is called prime if it is nonzero, has no multiplicative inverse (that is, it is not a unit), and satisfies the following requirement: whenever p divides the product xy of two elements of R, it also divides at least one of x or y. An element is irreducible if it is neither a unit nor the product of two other non-unit elements. In the ring of integers, the prime and irreducible elements form the same set, {…, −11, −7, −5, −3, −2, 2, 3, 5, 7, 11, …}. In an arbitrary ring, all prime elements are irreducible. The converse does not hold in general, but does hold for unique factorization domains.[112]

The fundamental theorem of arithmetic continues to hold (by definition) in unique factorization domains. An example of such a domain is the Gaussian integers Z[i], the ring of complex numbers of the form a + bi where i denotes the imaginary unit and a and b are arbitrary integers. Its prime elements are known as Gaussian primes. Not every number that is prime among the integers remains prime in the Gaussian integers; for instance, the number 2 can be written as a product of the two Gaussian primes 1 + i and 1 − i. Rational primes (the prime elements in the integers) congruent to 3 mod 4 are Gaussian primes, but rational primes congruent to 1 mod 4 are not.[113] This is a consequence of Fermat's theorem on sums of two squares, which states that an odd prime p is expressible as the sum of two squares, p = x² + y², and therefore factorable as p = (x + iy)(x − iy), exactly when p is 1 mod 4.[114]

Not every ring is a unique factorization domain. For instance, in the ring of numbers a + b√−5 (for integers a and b) the number 21 has two factorizations, 21 = 3 · 7 = (1 + 2√−5)(1 − 2√−5), where neither of the four factors can be reduced any further, so it does not have a unique factorization. In order to extend unique factorization to a larger class of rings, the notion of a number can be replaced with that of an ideal, a subset of the elements of a ring that contains all sums of pairs of its elements, and all products of its elements with ring elements. Prime ideals, which generalize prime elements in the sense that the principal ideal generated by a prime element is a prime ideal, are an important tool and object of study in commutative algebra, algebraic number theory and algebraic geometry. The prime ideals of the ring of integers are the ideals (0), (2), (3), (5), (7), (11), …
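Fermat's two-squares criterion for which rational primes split in the Gaussian integers, mentioned above, can be checked by brute force. A minimal Python sketch (the helper name is mine):

```python
def two_squares(p: int):
    """Try to write p as x² + y²; succeeds for p = 2 and primes ≡ 1 (mod 4)."""
    for x in range(1, int(p ** 0.5) + 1):
        y_squared = p - x * x
        y = int(y_squared ** 0.5)
        if y > 0 and y * y == y_squared:
            return x, y
    return None

for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]:
    print(p, p % 4, two_squares(p))
# Primes ≡ 1 (mod 4) split in the Gaussian integers: p = (x + iy)(x − iy).
```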
The fundamental theorem of arithmetic generalizes to the Lasker–Noether theorem, which expresses every ideal in a Noetherian commutative ring as an intersection of primary ideals, which are the appropriate generalizations of prime powers.[115]

The spectrum of a ring is a geometric space whose points are the prime ideals of the ring.[116] Arithmetic geometry also benefits from this notion, and many concepts exist in both geometry and number theory. For example, factorization or ramification of prime ideals when lifted to an extension field, a basic problem of algebraic number theory, bears some resemblance with ramification in geometry. These concepts can even assist with number-theoretic questions solely concerned with integers. For example, prime ideals in the ring of integers of quadratic number fields can be used in proving quadratic reciprocity, a statement that concerns the existence of square roots modulo integer prime numbers.[117] Early attempts to prove Fermat's Last Theorem led to Kummer's introduction of regular primes, integer prime numbers connected with the failure of unique factorization in the cyclotomic integers.[118] The question of how many integer prime numbers factor into a product of multiple prime ideals in an algebraic number field is addressed by Chebotarev's density theorem, which (when applied to the cyclotomic integers) has Dirichlet's theorem on primes in arithmetic progressions as a special case.[119]

In the theory of finite groups, the Sylow theorems imply that, if a power of a prime number p^n divides the order of a group, then the group has a subgroup of order p^n. By Lagrange's theorem, any group of prime order is a cyclic group, and by Burnside's theorem any group whose order is divisible by only two primes is solvable.[120]

For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics[b] other than the use of prime-numbered gear teeth to distribute wear evenly.[121] In particular, number theorists such as the British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance.[122]

This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms.[33] These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. The most basic primality testing routine, trial division, is too slow to be useful for large numbers. One group of modern primality tests is applicable to arbitrary numbers, while more efficient tests are available for numbers of special types. Most primality tests only tell whether their argument is prime or not. Routines that also provide a prime factor of composite arguments (or all of its prime factors) are called factorization algorithms. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators.

The most basic method of checking the primality of a given integer n is called trial division.
This method divides n by each integer from 2 up to the square root of n. Any such integer dividing n evenly establishes n as composite; otherwise it is prime. Integers larger than the square root do not need to be checked because, whenever n = a · b, one of the two factors a and b is less than or equal to the square root of n. Another optimization is to check only primes as factors in this range.[123] For instance, to check whether 37 is prime, this method divides it by the primes in the range from 2 to √37, which are 2, 3, and 5. Each division produces a nonzero remainder, so 37 is indeed prime.

Although this method is simple to describe, it is impractical for testing the primality of large integers, because the number of tests that it performs grows exponentially as a function of the number of digits of these integers.[124] However, trial division is still used, with a smaller limit than the square root on the divisor size, to quickly discover composite numbers with small factors, before using more complicated methods on the numbers that pass this filter.[125]

Before computers, mathematical tables listing all of the primes or prime factorizations up to a given limit were commonly printed.[126] The oldest known method for generating a list of primes is called the sieve of Eratosthenes;[127] optimized variants of the method exist.[128] Another, more asymptotically efficient, sieving method for the same problem is the sieve of Atkin.[129] In advanced mathematics, sieve theory applies similar methods to other problems.[130]

Some of the fastest modern tests for whether an arbitrary given number n is prime are probabilistic (or Monte Carlo) algorithms, meaning that they have a small random chance of producing an incorrect answer.[131] For instance, the Solovay–Strassen primality test on a given number p chooses a number a randomly from 2 through p − 2 and uses modular exponentiation to check whether a^((p−1)/2) ± 1 is divisible by p.[c] If so, it answers yes and otherwise it answers no. If p really is prime, it will always answer yes, but if p is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2.[132] If this test is repeated n times on the same number, the probability that a composite number could pass the test every time is at most 1/2^n. Because this decreases exponentially with the number of tests, it provides high confidence (although not certainty) that a number that passes the repeated test is prime. On the other hand, if the test ever fails, then the number is certainly composite.[133] A composite number that passes such a test is called a pseudoprime.[132]
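The flavor of such tests, and of pseudoprimes, can be seen in the even simpler Fermat test, shown here in place of Solovay–Strassen for brevity (a minimal Python sketch):

```python
def fermat_test(n: int, a: int) -> bool:
    """A prime n must satisfy a^(n-1) ≡ 1 (mod n) for any a not divisible by n."""
    return pow(a, n - 1, n) == 1

print(fermat_test(341, 2))   # True, yet 341 = 11 × 31: a base-2 pseudoprime
print(fermat_test(341, 3))   # False: the base 3 reveals that 341 is composite
```

Repeating the test with several random bases shrinks the chance that a composite number slips through, which is exactly the amplification argument described above.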
In contrast, some other algorithms guarantee that their answer will always be correct: primes will always be determined to be prime and composites will always be determined to be composite. For instance, this is true of trial division. The algorithms with guaranteed-correct output include both deterministic (non-random) algorithms, such as the AKS primality test,[134] and randomized Las Vegas algorithms, where the random choices made by the algorithm do not affect its final answer, such as some variations of elliptic curve primality proving.[131] When the elliptic curve method concludes that a number is prime, it provides a primality certificate that can be verified quickly.[135] The elliptic curve primality test is the fastest in practice of the guaranteed-correct primality tests, but its runtime analysis is based on heuristic arguments rather than rigorous proofs. The AKS primality test has mathematically proven time complexity, but is slower than elliptic curve primality proving in practice.[136] These methods can be used to generate large random prime numbers, by generating and testing random numbers until finding one that is prime; when doing this, a faster probabilistic test can quickly eliminate most composite numbers before a guaranteed-correct algorithm is used to verify that the remaining numbers are prime.[d]

The following table lists some of these tests. Their running time is given in terms of n, the number to be tested and, for probabilistic algorithms, the number k of tests performed. Moreover, ε is an arbitrarily small positive number, and log is the logarithm to an unspecified base. The big O notation means that each time bound should be multiplied by a constant factor to convert it from dimensionless units to units of time; this factor depends on implementation details such as the type of computer used to run the algorithm, but not on the input parameters n and k.

In addition to the aforementioned tests that apply to any natural number, some numbers of a special form can be tested for primality more quickly. For example, the Lucas–Lehmer primality test can determine whether a Mersenne number (one less than a power of two) is prime, deterministically, in the same time as a single iteration of the Miller–Rabin test.[141] This is why since 1992 (as of October 2024) the largest known prime has always been a Mersenne prime.[142] It is conjectured that there are infinitely many Mersenne primes.[143]

The following table gives the largest known primes of various types. Some of these primes have been found using distributed computing. In 2009, the Great Internet Mersenne Prime Search project was awarded a US$100,000 prize for first discovering a prime with at least 10 million digits.[144] The Electronic Frontier Foundation also offers $150,000 and $250,000 for primes with at least 100 million digits and 1 billion digits, respectively.[145]
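The Lucas–Lehmer test mentioned above is short enough to state in full. A minimal Python sketch for odd prime exponents p (integer arithmetic only; in practice the squarings are done with fast multiplication):

```python
def lucas_lehmer(p: int) -> bool:
    """Deterministically test whether the Mersenne number 2^p - 1 is prime,
    for an odd prime exponent p, via the recurrence s -> s² - 2 (mod 2^p - 1)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

for p in [3, 5, 7, 11, 13, 17, 19]:
    print(p, (1 << p) - 1, lucas_lehmer(p))
# 2^11 - 1 = 2047 = 23 × 89 is composite; the other exponents here give Mersenne primes
```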
Given a composite integer n, the task of providing one (or all) prime factors is referred to as factorization of n. It is significantly more difficult than primality testing,[152] and although many factorization algorithms are known, they are slower than the fastest primality testing methods. Trial division and Pollard's rho algorithm can be used to find very small factors of n,[125] and elliptic curve factorization can be effective when n has factors of moderate size.[153] Methods suitable for arbitrarily large numbers that do not depend on the size of their factors include the quadratic sieve and the general number field sieve. As with primality testing, there are also factorization algorithms that require their input to have a special form, including the special number field sieve.[154] As of December 2019, the largest number known to have been factored by a general-purpose algorithm is RSA-240, which has 240 decimal digits (795 bits) and is the product of two large primes.[155]

Shor's algorithm can factor any integer in a polynomial number of steps on a quantum computer.[156] However, current technology can only run this algorithm for very small numbers. As of October 2012, the largest number that has been factored by a quantum computer running Shor's algorithm is 21.[157]

Several public-key cryptography algorithms, such as RSA and the Diffie–Hellman key exchange, are based on large prime numbers (2048-bit primes are common).[158] RSA relies on the assumption that it is much easier (that is, more efficient) to perform the multiplication of two (large) numbers x and y than to calculate x and y (assumed coprime) if only the product xy is known.[33] The Diffie–Hellman key exchange relies on the fact that there are efficient algorithms for modular exponentiation (computing a^b mod c), while the reverse operation (the discrete logarithm) is thought to be a hard problem.[159]

Prime numbers are frequently used for hash tables. For instance, the original method of Carter and Wegman for universal hashing was based on computing hash functions by choosing random linear functions modulo large prime numbers. Carter and Wegman generalized this method to k-independent hashing by using higher-degree polynomials, again modulo large primes.[160] As well as in the hash function, prime numbers are used for the hash table size in quadratic probing based hash tables to ensure that the probe sequence covers the whole table.[161]

Some checksum methods are based on the mathematics of prime numbers. For instance, the checksums used in International Standard Book Numbers are defined by taking the rest of the number modulo 11, a prime number. Because 11 is prime this method can detect both single-digit errors and transpositions of adjacent digits.[162] Another checksum method, Adler-32, uses arithmetic modulo 65521, the largest prime number less than 2^16.[163] Prime numbers are also used in pseudorandom number generators including linear congruential generators[164] and the Mersenne Twister.[165]
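The ISBN-10 checksum just described is simple to implement. A minimal Python sketch (the function name is mine; the valid example ISBN is the widely used 0-306-40615-2):

```python
def isbn10_checksum_ok(digits: str) -> bool:
    """ISBN-10 validity: the weighted digit sum must be 0 modulo the prime 11.
    The check character 'X' stands for the value 10."""
    values = [10 if c == 'X' else int(c) for c in digits]
    total = sum(weight * v for weight, v in zip(range(10, 0, -1), values))
    return total % 11 == 0

print(isbn10_checksum_ok("0306406152"))   # True: a valid ISBN-10
print(isbn10_checksum_ok("0306406151"))   # False: a single-digit error is caught
print(isbn10_checksum_ok("0306046152"))   # False: an adjacent transposition is caught
```

Because 11 is prime, every weight 1 through 10 is invertible modulo 11, which is what makes both error types detectable.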
Prime numbers are of central importance to number theory but also have many applications to other areas within mathematics, including abstract algebra and elementary geometry. For example, it is possible to place prime numbers of points in a two-dimensional grid so that no three are in a line, or so that every triangle formed by three of the points has large area.[166] Another example is Eisenstein's criterion, a test for whether a polynomial is irreducible based on divisibility of its coefficients by a prime number and its square.[167]

The concept of a prime number is so important that it has been generalized in different ways in various branches of mathematics. Generally, "prime" indicates minimality or indecomposability, in an appropriate sense. For example, the prime field of a given field is its smallest subfield that contains both 0 and 1. It is either the field of rational numbers or a finite field with a prime number of elements, whence the name.[168] Often a second, additional meaning is intended by using the word prime, namely that any object can be, essentially uniquely, decomposed into its prime components. For example, in knot theory, a prime knot is a knot that is indecomposable in the sense that it cannot be written as the connected sum of two nontrivial knots. Any knot can be uniquely expressed as a connected sum of prime knots.[169] The prime decomposition of 3-manifolds is another example of this type.[170]

Beyond mathematics and computing, prime numbers have potential connections to quantum mechanics, and have been used metaphorically in the arts and literature. They have also been used in evolutionary biology to explain the life cycles of cicadas.

Fermat primes are primes of the form F_k = 2^(2^k) + 1, with k a nonnegative integer.[171] They are named after Pierre de Fermat, who conjectured that all such numbers are prime. The first five of these numbers (3, 5, 17, 257, and 65,537) are prime,[172] but F_5 is composite and so are all other Fermat numbers that have been verified as of 2017.[173] A regular n-gon is constructible using straightedge and compass if and only if the odd prime factors of n (if any) are distinct Fermat primes.[172] Likewise, a regular n-gon may be constructed using straightedge, compass, and an angle trisector if and only if the prime factors of n are any number of copies of 2 or 3 together with a (possibly empty) set of distinct Pierpont primes, primes of the form 2^a · 3^b + 1.[174]

It is possible to partition any convex polygon into n smaller convex polygons of equal area and equal perimeter when n is a power of a prime number, but this is not known for other values of n.[175]

Beginning with the work of Hugh Montgomery and Freeman Dyson in the 1970s, mathematicians and physicists have speculated that the zeros of the Riemann zeta function are connected to the energy levels of quantum systems.[176][177] Prime numbers are also significant in quantum information science, thanks to mathematical structures such as mutually unbiased bases and symmetric informationally complete positive-operator-valued measures.[178][179]

The evolutionary strategy used by cicadas of the genus Magicicada makes use of prime numbers.[180] These insects spend most of their lives as grubs underground. They only pupate and then emerge from their burrows after 7, 13 or 17 years, at which point they fly about, breed, and then die after a few weeks at most. Biologists theorize that these prime-numbered breeding cycle lengths have evolved in order to prevent predators from synchronizing with these cycles.[181][182] In contrast, the multi-year periods between flowering in bamboo plants are hypothesized to be smooth numbers, having only small prime numbers in their factorizations.[183]
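Returning to the Fermat primes defined earlier in this section, the first few Fermat numbers are small enough to test naively. A minimal Python sketch:

```python
def fermat_number(k: int) -> int:
    return 2 ** (2 ** k) + 1

for k in range(6):
    f = fermat_number(k)
    # Naive trial division suffices at this size.
    prime = all(f % d for d in range(2, int(f ** 0.5) + 1))
    print(k, f, prime)
# F_0 .. F_4 (3, 5, 17, 257, 65537) are prime; F_5 = 4294967297 = 641 × 6700417
```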
Prime numbers have influenced many artists and writers. The French composer Olivier Messiaen used prime numbers to create ametrical music through "natural phenomena". In works such as La Nativité du Seigneur (1935) and Quatre études de rythme (1949–1950), he simultaneously employs motifs with lengths given by different prime numbers to create unpredictable rhythms: the primes 41, 43, 47 and 53 appear in the third étude, "Neumes rythmiques". According to Messiaen this way of composing was "inspired by the movements of nature, movements of free and unequal durations".[184]

In his science fiction novel Contact, scientist Carl Sagan suggested that prime factorization could be used as a means of establishing two-dimensional image planes in communications with aliens, an idea that he had first developed informally with the American astronomer Frank Drake in 1975.[185] In the novel The Curious Incident of the Dog in the Night-Time by Mark Haddon, the narrator arranges the sections of the story by consecutive prime numbers as a way to convey the mental state of its main character, a mathematically gifted teen with Asperger syndrome.[186] Prime numbers are used as a metaphor for loneliness and isolation in the Paolo Giordano novel The Solitude of Prime Numbers, in which they are portrayed as "outsiders" among integers.[187]
https://en.wikipedia.org/wiki/Prime_number
The Human Connectome Project (HCP) was a five-year project (later extended to 10 years) sponsored by sixteen components of the National Institutes of Health, split between two consortia of research institutions. The project was launched in July 2009[1] as the first of three Grand Challenges of the NIH's Blueprint for Neuroscience Research.[2] On September 15, 2010, the NIH announced that it would award two grants: $30 million over five years to a consortium led by Washington University in St. Louis and the University of Minnesota, with strong contributions from the University of Oxford (FMRIB), and $8.5 million over three years to a consortium led by Harvard University, Massachusetts General Hospital and the University of California Los Angeles.[3]

The goal of the Human Connectome Project was to build a "network map" (connectome) that sheds light on the anatomical and functional connectivity within the healthy human brain, as well as to produce a body of data that will facilitate research into brain disorders such as dyslexia, autism, Alzheimer's disease, and schizophrenia.[4][5]

A number of successor projects are currently in progress, based on the Human Connectome Project results.[6]

The WU-Minn-Oxford consortium developed improved MRI instrumentation, image acquisition and image analysis methods for mapping the connectivity in the human brain at spatial resolutions significantly better than previously available. Using these methods, the consortium collected a large amount of MRI and behavioral data on 1,200 healthy adults (twin pairs and their siblings from 300 families) using a special 3 Tesla MRI instrument. In addition, it scanned 184 subjects from this pool at 7 Tesla, with higher spatial resolution. The data were analyzed to show the anatomical and functional connections between parts of the brain for each individual, and were related to behavioral test data. Comparing the connectomes and genetic data of genetically identical twins with fraternal twins revealed the relative contributions of genes and environment in shaping brain circuitry and pinpointed relevant genetic variation. The maps also shed light on how brain networks are organized.

Using a combination of non-invasive imaging technologies, including resting-state fMRI and task-based functional MRI, MEG and EEG, and diffusion MRI, the WU-Minn consortium mapped connectomes at the macro scale, mapping large brain systems that were divided into anatomically and functionally distinct areas, rather than mapping individual neurons.

Dozens of investigators and researchers from nine institutions contributed to this project. Research institutions include: Washington University in St. Louis, the Center for Magnetic Resonance Research at the University of Minnesota, University of Oxford, Saint Louis University, Indiana University, D'Annunzio University of Chieti–Pescara, Ernst Strungmann Institute, Warwick University, Advanced MRI Technologies, and the University of California at Berkeley.[7]

The data that resulted from this research is publicly available in an open-source, web-accessible neuroinformatics platform.[8][9]

The MGH/Harvard-UCLA consortium focussed on optimizing MRI technology for imaging the brain's structural connections using diffusion MRI, with a goal of increasing spatial resolution, quality, and speed. Diffusion MRI, employed in both projects, maps the brain's fibrous long-distance connections by tracking the motion of water. Water diffusion patterns in different types of cells allow the detection of different types of tissues.
Using this imaging method, the long extensions of neurons, called white matter, can be seen in sharp relief.[10][11]

The new scanner built at the MGH Martinos Center for this project was "4 to 8 times as powerful as conventional systems, enabling imaging of human neuroanatomy with greater sensitivity than was previously possible."[3] The scanner has a maximum gradient strength of 300 mT/m and a slew rate of 200 T/m/s, with b-values tested up to 20,000 s/mm². For comparison, a standard gradient coil is 45 mT/m.[12][13][14]

To understand the relationship between brain connectivity and behavior better, the Human Connectome Project used a reliable and well-validated battery of measures that assess a wide range of human functions. The core of its battery is the tools and methods developed by the NIH Toolbox for Assessment of Neurological and Behavioral function.[15]

The Human Connectome Project has grown into a large group of research teams. These teams make use of the style of brain scanning developed by the project.[16] The studies usually include using large groups of participants, scanning many angles of participants' brains, and carefully documenting the location of the structures in each participant's brain.[17] Studies affiliated with the Human Connectome Project are currently cataloged by the Connectome Coordination Facility. The studies fall into three categories: Healthy Adult Connectomes, Lifespan Connectome Data, and Connectomes Related to Human Disease. Under each of these categories are research groups working on specific questions.

The Human Connectome Project Young Adult study[18] made data on the brain connections of 1,100 healthy young adults available to the scientific community.[19] Scientists have used data from the study to support theories about which areas of the brain communicate with one another.[20] For example, one study used data from the project to show that the amygdala, a part of the brain essential for emotional processing, is connected to the parts of the brain that receive information from the senses and plan movement.[21] Another study showed that healthy individuals who had a high tendency to experience anxious or depressed mood had fewer connections between the amygdala and a number of brain areas related to attention.

There are currently four research groups collecting data on connections in the brains of populations other than young adults. The purpose of these groups is to determine ordinary brain connectivity during infancy, childhood, adolescence, and aging. Scientists will use the data from these research groups in the same manner in which they have used data from the Human Connectome Project Young Adult study.[22]

Fourteen research groups investigate how connections in the brain change during the course of a particular disease. Four of the groups focus on Alzheimer's disease or dementia. Alzheimer's disease and dementia are diseases that begin during aging, and memory loss and cognitive impairment mark their progression. While scientists consider Alzheimer's disease to be a disease with a specific cause, dementia describes symptoms which could be attributed to a number of causes. Two other research groups investigate how diseases that disrupt vision change connectivity in the brain. Another four of the research groups focus on anxiety disorders and major depressive disorder, psychological disorders that result in abnormal emotional regulation.
Two more of the research groups focus on the effects of psychosis, a symptom of some psychological disorders in which an individual perceives reality differently than others do. One of the teams researches epilepsy, a disease characterized by seizures. Finally, one research team is documenting the brain connections of the Amish people, a religious and ethnic group that has high rates of some psychological disorders.[23]

Although theories have been put forth about the way brain connections change in the diseases under investigation, many of these theories have been supported only by data from healthy populations.[21] For example, an analysis of the brains of healthy individuals supported the theory that individuals with anxiety disorders and depression have less connectivity between their emotional centers and the areas that govern attention. By collecting data specifically from individuals with these diseases, researchers hope to have a more certain idea of how brain connections in these individuals change over time.

The project was completed in 2021,[24] and a retrospective analysis is available.[25] A number of new projects have started based on the results.[6]
https://en.wikipedia.org/wiki/Human_Connectome_Project
A smartbook was a class of mobile device that combined certain features of both a smartphone and a netbook computer, produced between 2009 and 2010.[1] Smartbooks were advertised with features such as always-on operation, all-day battery life, 3G or Wi-Fi connectivity, and GPS (all typically found in smartphones) in a laptop or tablet-style body with a screen size of 5 to 10 inches and a physical or soft touchscreen keyboard.[2]

A German company sold laptops under the brand Smartbook and held a trademark for the word in many countries (not including some big markets like the United States, China, Japan, or India). It acted to preempt others from using the term smartbook to describe their products.[3][4]

Smartbooks tended to be designed more for entertainment purposes than for productivity, and were typically targeted to work with online applications.[5] They were projected to be sold subsidized through mobile network operators, like mobile phones, along with a wireless data plan.[6]

The advent of much more popular tablets like Android tablets and the iPad, coupled with the prevailing popularity of conventional desktop computers and laptops, displaced the smartbook.[7]

The smartbook concept was mentioned by Qualcomm in May 2009 during marketing for its Snapdragon technology, with products expected later that year.[8] Difficulties in adapting key software (in particular, Adobe's proprietary Flash Player) to the ARM architecture[9] delayed releases until the first quarter of 2010.[10]

Smartbooks would have been powered by processors which were more energy-efficient than traditional ones typically found in desktop and laptop computers.[1] The first smartbooks were expected to use variants of the Linux operating system, such as Google's Android or ChromeOS. The ARM processor would have allowed them to achieve longer battery life than many larger devices using x86 processors.[8][9] In February 2010, ABI Research projected that 163 million smartbooks would ship in 2015.[11]

In many countries the word Smartbook was a trademark registered by Smartbook AG.[12][13] In August 2009 a German court ruled that Qualcomm must block access from Germany to all its webpages containing the word Smartbook unless Smartbook AG is mentioned.[14] Smartbook AG defended its trademark.[4][15] A February 2010 ruling prevented Lenovo from using the term.[16]

By the end of 2010, Qualcomm CEO Paul Jacobs admitted that tablet computers such as the iPad already occupied the niche of the smartbook, so the name was dropped.[7] In February 2011 Qualcomm won its legal battle when the German patent office ruled the words "smart" and "book" could be used.[17] However, several trademarks have been registered.[18][19][20][21]

In March 2009 the Always Innovating company announced the Touch Book.[22] It was based on the Texas Instruments OMAP 3530, which implemented the ARM Cortex-A8 architecture, and was originally developed from the Texas Instruments Beagle Board. It had a touchscreen and a detachable keyboard which contained a second battery. The device came with a Linux operating system, and the company offered to license their hardware designs.[22][23][24]

Sharp Electronics introduced their PC-Z1 "Netwalker" device in August 2009 with a promised ship date of October 2009. It featured a 5.5-inch touchscreen, ran Ubuntu on an ARM Cortex-A8-based Freescale i.MX515, and was packaged in a small clamshell design. Sharp reported that the device weighed less than 500 grams and would run 10 hours on one battery charge. The device was said to run 720p video and to have both 2D and 3D graphics acceleration.
It came with Adobe Flash Lite 3.1 installed.[25]

Pegatron, an Asus company, showed a working prototype of a smartbook in August 2009. It was built around an ARM Cortex-A8-based Freescale i.MX515 supporting 2D/3D graphics as well as 720p HD video, and had 512 MB of DDR2 RAM, a 1024x600 8.9" LCD screen, Bluetooth 2.0 and 802.11g, and ran off an SD card. It also featured one USB and one micro-USB port, a VGA port, and a card reader. The smartbook ran Ubuntu Netbook 9.04 and contained an out-of-date version of Adobe Flash Player. The bill of materials for the Pegatron smartbook prototype was $120.[26] In November 2009 Pegatron said it had received a large number of orders for smartbooks that would launch in early 2010. The devices were rumored to sell for about $200 when subsidized. Asus announced plans to release its own smartbook in the first quarter of 2010.[27]

Qualcomm was expected to announce a smartbook on November 12, 2009, at an analyst meeting.[28] A Lenovo device concept was shown, and announced in January 2010. In May 2010 the Skylight was cancelled.[29]

In late January 2010 a U.S. Federal Communications Commission (FCC) listing featured a device from HP that was referred to as a smartbook; a prototype of the same device had already been shown earlier. In early February, at the Mobile World Congress in Barcelona, HP announced it would bring this device to market. The specifications were expected to be the following:[30][31][32][33] At the end of March 2010 the smartbook appeared at the FCC again, this time listing its 3G capabilities. According to the FCC, the device would support GSM 850 and 1900, as well as WCDMA bands II and V. These WCDMA bands may indicate use on the AT&T network in the United States.[34][35] Details of the product were later made available on the HP website.[36][37]

In June 2010, a smartbook device from Toshiba was announced. It featured an Nvidia Tegra processor and was able to remain in stand-by mode for up to 7 days.[38][39] The device was officially available at the Toshiba United Kingdom site.[40] Originally delivered with Android v2.1 (upgradable to v2.2 since 2011[41]), it could also be modified to run a customized Linux distribution. In Japan, it was sold as the "Dynabook AZ".

The Genesi company announced an MX Smartbook as part of their Efika line in August 2010.[42] It was originally priced at US$349, and some reviewers questioned whether it was small enough to fit this definition.[43][44] It is ostensibly a derivative of the above-mentioned Pegatron design.

In September 2009, Foxconn announced it was working on smartbook development.[45] In November 2009, a Quanta Computer pre-production Snapdragon-powered sample smartbook device that ran Android was unveiled.[46][47] Companies like Acer Inc. planned to release a smartbook, but due to the popularity of tablets, the MacBook Air and Ultrabooks, the plans were scrapped.[48]
https://en.wikipedia.org/wiki/Smartbook
Since the 1930s, English has created numerous portmanteau words using the word English as the second element. These refer to varieties of English that are heavily influenced by other languages or that are typical of speakers from a certain country or region. Such a term can denote a type of English heavily influenced by another language (typically the speaker's L1) in accent, lexis, syntax, etc., or the practice of code-switching between languages. In some cases, the word refers to the use of the Latin alphabet to write languages that use a different script, especially common on computer platforms that only allow Latin input, such as online chat, social networks, email and SMS. The practice of forming new words in this way has become increasingly popular since the 1990s. One scholarly article lists 510 such terms, known as "lishes", some of which are sourced from user-generated wikis.[1] The following is a list of lishes that have Wikipedia pages.
https://en.wikipedia.org/wiki/List_of_lishes
In functional analysis and related areas of mathematics, locally convex topological vector spaces (LCTVS) or locally convex spaces are examples of topological vector spaces (TVS) that generalize normed spaces. They can be defined as topological vector spaces whose topology is generated by translations of balanced, absorbent, convex sets. Alternatively they can be defined as a vector space with a family of seminorms, and a topology can be defined in terms of that family. Although in general such spaces are not necessarily normable, the existence of a convex local base for the zero vector is strong enough for the Hahn–Banach theorem to hold, yielding a sufficiently rich theory of continuous linear functionals.

Fréchet spaces are locally convex topological vector spaces that are completely metrizable (with a choice of complete metric). They are generalizations of Banach spaces, which are complete vector spaces with respect to a metric generated by a norm.

Metrizable topologies on vector spaces have been studied since their introduction in Maurice Fréchet's 1902 PhD thesis Sur quelques points du calcul fonctionnel (wherein the notion of a metric was first introduced). After the notion of a general topological space was defined by Felix Hausdorff in 1914,[1] locally convex topologies were implicitly used by some mathematicians, but up to 1934 only John von Neumann seems to have explicitly defined the weak topology on Hilbert spaces and the strong operator topology on operators on Hilbert spaces.[2][3] Finally, in 1935 von Neumann introduced the general definition of a locally convex space (called a convex space by him).[4][5]

A notable example of a result which had to wait for the development and dissemination of general locally convex spaces (amongst other notions and results, like nets, the product topology and Tychonoff's theorem) to be proven in its full generality is the Banach–Alaoglu theorem, which Stefan Banach first established in 1932 by an elementary diagonal argument for the case of separable normed spaces[6] (in which case the unit ball of the dual is metrizable).

Suppose X is a vector space over 𝕂, a subfield of the complex numbers (normally ℂ itself or ℝ). A locally convex space is defined either in terms of convex sets, or equivalently in terms of seminorms.

A topological vector space (TVS) is called locally convex if it has a neighborhood basis (that is, a local base) at the origin consisting of balanced, convex sets.[7] The term locally convex topological vector space is sometimes shortened to locally convex space or LCTVS.

A subset C in X is called:

In fact, every locally convex TVS has a neighborhood basis of the origin consisting of absolutely convex sets (that is, disks), and this neighborhood basis can further be chosen to consist entirely of open sets or entirely of closed sets.[8] Every TVS has a neighborhood basis at the origin consisting of balanced sets, but only a locally convex TVS has a neighborhood basis at the origin consisting of sets that are both balanced and convex. It is possible for a TVS to have some neighborhoods of the origin that are convex and yet not be locally convex, because it has no neighborhood basis at the origin consisting entirely of convex sets (that is, every neighborhood basis at the origin contains some non-convex set); for example, every non-locally convex TVS X has itself (that is, X) as a convex neighborhood of the origin.
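For contrast with such pathologies, the simplest positive case can be checked directly (a routine verification, included here only as an illustration; normed spaces are discussed further below): in a normed space the open balls centered at the origin already form the required neighborhood basis of convex, balanced sets.

```latex
% Open balls in a normed space (X, \|\cdot\|) are convex and balanced:
% for \|x\| < r, \|y\| < r, t \in [0,1] and any scalar |s| \le 1,
\[
  \|tx + (1-t)y\| \le t\|x\| + (1-t)\|y\| < r ,
  \qquad
  \|sx\| = |s|\,\|x\| < r ,
\]
% so the family \{ x \in X : \|x\| < r \}, r > 0, is a neighborhood basis
% at the origin consisting of convex, balanced sets, and every normed
% space is locally convex.
```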
Because translation is continuous (by definition of topological vector space), all translations are homeomorphisms, so every base for the neighborhoods of the origin can be translated to a base for the neighborhoods of any given vector.

A seminorm on X is a map {\displaystyle p:X\to \mathbb {R} } such that:

1. p is nonnegative: p(x) ≥ 0 for all x ∈ X;
2. p is absolutely homogeneous: p(sx) = |s| p(x) for every scalar s;
3. p is subadditive: it satisfies the triangle inequality p(x + y) ≤ p(x) + p(y).

If p satisfies positive definiteness, which states that if p(x) = 0 then x = 0, then p is a norm. While in general seminorms need not be norms, there is an analogue of this criterion for families of seminorms, separatedness, defined below.

If X is a vector space and 𝒫 is a family of seminorms on X, then a subset 𝒬 of 𝒫 is called a base of seminorms for 𝒫 if for all p ∈ 𝒫 there exist a q ∈ 𝒬 and a real r > 0 such that p ≤ rq.[9]

Definition (second version): A locally convex space is defined to be a vector space X along with a family 𝒫 of seminorms on X.

Suppose that X is a vector space over 𝕂, where 𝕂 is either the real or complex numbers. A family of seminorms 𝒫 on the vector space X induces a canonical vector space topology on X, called the initial topology induced by the seminorms, making it into a topological vector space (TVS). By definition, it is the coarsest topology on X for which all maps in 𝒫 are continuous. It is possible for a locally convex topology on a space X to be induced by a family of norms but for X to not be normable (that is, to have its topology be induced by a single norm).

A basic open neighborhood of 0 in {\displaystyle \mathbb {R} _{\geq 0}} has the form [0, r), where r is a positive real number. The family of preimages {\displaystyle p^{-1}\left([0,r)\right)=\{x\in X:p(x)<r\}}, as p ranges over a family of seminorms 𝒫 and r ranges over the positive real numbers, is a subbasis at the origin for the topology induced by 𝒫. These sets are convex, as follows from properties 2 and 3 of seminorms. Intersections of finitely many such sets are then also convex, and since the collection of all such finite intersections is a basis at the origin, it follows that the topology is locally convex in the sense of the first definition given above.

Recall that the topology of a TVS is translation invariant, meaning that if S is any subset of X containing the origin, then for any x ∈ X, S is a neighborhood of the origin if and only if x + S is a neighborhood of x; thus it suffices to define the topology at the origin.
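Since the argument above appeals to "properties 2 and 3" of seminorms, here is that convexity computation spelled out for a single subbasic set (a routine verification, supplied for concreteness rather than drawn from this article's cited sources):

```latex
% Convexity of B = \{ x \in X : p(x) < r \} for a seminorm p:
% take p(x) < r, p(y) < r and t \in [0,1]; subadditivity and
% absolute homogeneity give
\[
  p\bigl(tx + (1-t)y\bigr)
  \;\le\; p(tx) + p\bigl((1-t)y\bigr)
  \;=\; t\,p(x) + (1-t)\,p(y)
  \;<\; tr + (1-t)r \;=\; r ,
\]
% hence tx + (1-t)y \in B, so each subbasic set is convex.
```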
A base of neighborhoods of y for this topology is obtained in the following way: for every finite subset F of 𝒫 and every r > 0, let {\displaystyle U_{F,r}(y):=\{x\in X:p(x-y)<r\ {\text{ for all }}p\in F\}.}

If X is a locally convex space and if 𝒫 is a collection of continuous seminorms on X, then 𝒫 is called a base of continuous seminorms if it is a base of seminorms for the collection of all continuous seminorms on X.[9] Explicitly, this means that for every continuous seminorm p on X, there exist a q ∈ 𝒫 and a real r > 0 such that p ≤ rq.[9] If 𝒫 is a base of continuous seminorms for a locally convex TVS X, then the family of all sets of the form {x ∈ X : q(x) < r}, as q varies over 𝒫 and r varies over the positive real numbers, is a base of neighborhoods of the origin in X (not just a subbasis, so there is no need to take finite intersections of such sets).[9][proof 1]

A family 𝒫 of seminorms on a vector space X is called saturated if for any p and q in 𝒫, the seminorm defined by {\displaystyle x\mapsto \max\{p(x),q(x)\}} belongs to 𝒫.

If 𝒫 is a saturated family of continuous seminorms that induces the topology on X, then the collection of all sets of the form {x ∈ X : p(x) < r}, as p ranges over 𝒫 and r ranges over all positive real numbers, forms a neighborhood basis at the origin consisting of convex open sets.[9] This forms a basis at the origin rather than merely a subbasis, so that in particular there is no need to take finite intersections of such sets.[9]

The following theorem implies that if X is a locally convex space, then the topology of X can be defined by a family of continuous norms on X (a norm is a seminorm s where s(x) = 0 implies x = 0) if and only if there exists at least one continuous norm on X.[10] This is because the sum of a norm and a seminorm is a norm, so if a locally convex space is defined by some family 𝒫 of seminorms (each of which is necessarily continuous), then the family 𝒫 + n := {p + n : p ∈ 𝒫} of (also continuous) norms, obtained by adding some given continuous norm n to each element, will necessarily be a family of norms that defines this same locally convex topology. If there exists a continuous norm on a topological vector space X, then X is necessarily Hausdorff, but the converse is not in general true (not even for locally convex spaces or Fréchet spaces).
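Any family of seminorms can be enlarged to a saturated one by adjoining all finite maxima; the only point to check is that a maximum of seminorms is again a seminorm, and that nothing in the topology changes. A sketch of both points (standard, supplied here only as an illustration):

```latex
% For seminorms p, q let m(x) = \max\{p(x), q(x)\}. Subadditivity:
\[
  m(x+y) = \max\{p(x+y),\, q(x+y)\}
         \le \max\{p(x)+p(y),\, q(x)+q(y)\}
         \le m(x) + m(y) ,
\]
% and homogeneity m(sx) = |s|\,m(x) is immediate, so m is a seminorm.
% Adjoining m does not change the induced topology, since
\[
  \{x : m(x) < r\} = \{x : p(x) < r\} \cap \{x : q(x) < r\} ,
\]
% which is already open in the topology generated by p and q.
```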
Theorem[11]—Let X be a Fréchet space over the field 𝕂. Then the following are equivalent:

Suppose that the topology of a locally convex space X is induced by a family 𝒫 of continuous seminorms on X. If x ∈ X and if {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}} is a net in X, then {\displaystyle x_{\bullet }\to x} in X if and only if for all p ∈ 𝒫, {\displaystyle p\left(x_{\bullet }-x\right)=\left(p\left(x_{i}-x\right)\right)_{i\in I}\to 0.}[12] Moreover, if {\displaystyle x_{\bullet }} is Cauchy in X, then so is {\displaystyle p\left(x_{\bullet }\right)=\left(p\left(x_{i}\right)\right)_{i\in I}} for every p ∈ 𝒫.[12]

Although the definition in terms of a neighborhood base gives a better geometric picture, the definition in terms of seminorms is easier to work with in practice. The equivalence of the two definitions follows from a construction known as the Minkowski functional or Minkowski gauge. The key feature of seminorms which ensures the convexity of their ε-balls is the triangle inequality.

For an absorbing set C such that if x ∈ C then tx ∈ C whenever 0 ≤ t ≤ 1, define the Minkowski functional of C to be {\displaystyle \mu _{C}(x)=\inf\{r>0:x\in rC\}.}

From this definition it follows that {\displaystyle \mu _{C}} is a seminorm if C is balanced and convex (it is also absorbent by assumption). Conversely, given a family of seminorms, the sets {\displaystyle \left\{x:p_{\alpha _{1}}(x)<\varepsilon _{1},\ldots ,p_{\alpha _{n}}(x)<\varepsilon _{n}\right\}} form a base of convex absorbent balanced sets.

Theorem[7]—Suppose that X is a (real or complex) vector space and let ℬ be a filter base of subsets of X such that:

Then ℬ is a neighborhood base at 0 for a locally convex TVS topology on X.

Theorem[7]—Suppose that X is a (real or complex) vector space and let ℒ be a non-empty collection of convex, balanced, and absorbing subsets of X. Then the set of all positive scalar multiples of finite intersections of sets in ℒ forms a neighborhood base at the origin for a locally convex TVS topology on X.

Example: auxiliary normed spaces

If W is convex and absorbing in X, then the symmetric set {\displaystyle D:=\bigcap _{|u|=1}uW} will be convex and balanced (also known as an absolutely convex set or a disk) in addition to being absorbing in X. This guarantees that the Minkowski functional {\displaystyle p_{D}:X\to \mathbb {R} } of D will be a seminorm on X, thereby making {\displaystyle \left(X,p_{D}\right)} into a seminormed space that carries its canonical pseudometrizable topology.
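Before continuing with this construction, a quick sanity check on the Minkowski functional just used (a standard fact, included here only as an illustration): in a normed space, the gauge of the open unit ball recovers the norm.

```latex
% Let C = \{ x : \|x\| < 1 \} in a normed space. Then rC = \{ x : \|x\| < r \}, so
\[
  \mu_C(x) = \inf\{\, r > 0 : x \in rC \,\}
           = \inf\{\, r > 0 : \|x\| < r \,\}
           = \|x\| .
\]
% The ball C is convex, balanced and absorbing, matching exactly the
% hypotheses under which \mu_C is guaranteed to be a seminorm.
```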
The set of scalar multiples rD, as r ranges over {\displaystyle \left\{{\tfrac {1}{2}},{\tfrac {1}{3}},{\tfrac {1}{4}},\ldots \right\}} (or over any other set of non-zero scalars having 0 as a limit point), forms a neighborhood basis of absorbing disks at the origin for this locally convex topology. If X is a topological vector space and if this convex absorbing subset W is also a bounded subset of X, then the absorbing disk {\displaystyle D:=\bigcap _{|u|=1}uW} will also be bounded, in which case {\displaystyle p_{D}} will be a norm and {\displaystyle \left(X,p_{D}\right)} will form what is known as an auxiliary normed space. If this normed space is a Banach space, then D is called a Banach disk.

Let X be a TVS. Say that a vector subspace M of X has the extension property if any continuous linear functional on M can be extended to a continuous linear functional on X.[13] Say that X has the Hahn–Banach extension property (HBEP) if every vector subspace of X has the extension property.[13]

The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable TVSs there is a converse:

Theorem[13] (Kalton)—Every complete metrizable TVS with the Hahn–Banach extension property is locally convex.

If a vector space X has uncountable dimension and if we endow it with the finest vector topology, then this is a TVS with the HBEP that is neither locally convex nor metrizable.[13]

Throughout, 𝒫 is a family of continuous seminorms that generate the topology of X.

Topological closure

If S ⊆ X and x ∈ X, then x ∈ cl S if and only if for every r > 0 and every finite collection p_1, …, p_n ∈ 𝒫 there exists some s ∈ S such that {\displaystyle \sum _{i=1}^{n}p_{i}(x-s)<r.}[14] The closure of {0} in X is equal to {\displaystyle \bigcap _{p\in {\mathcal {P}}}p^{-1}(0).}[15]

Topology of Hausdorff locally convex spaces

Every Hausdorff locally convex space is homeomorphic to a vector subspace of a product of Banach spaces.[16] The Anderson–Kadec theorem states that every infinite-dimensional separable Fréchet space is homeomorphic to the product space {\textstyle \prod _{i\in \mathbb {N} }\mathbb {R} } of countably many copies of ℝ (this homeomorphism need not be a linear map).[17]

Algebraic properties of convex subsets

A subset C is convex if and only if tC + (1 − t)C ⊆ C for all 0 ≤ t ≤ 1,[18] or equivalently, if and only if (s + t)C = sC + tC for all positive reals s > 0 and t > 0,[19] where, because (s + t)C ⊆ sC + tC always holds, the equals sign = can be replaced with ⊇. If C is a convex set that contains the origin, then C is star shaped at the origin, and for all non-negative reals s ≥ 0 and t ≥ 0, {\displaystyle (sC)\cap (tC)=(\min\{s,t\})C.}

The Minkowski sum of two convex sets is convex; furthermore,
the scalar multiple of a convex set is again convex.[20]

Topological properties of convex subsets

For any subset S of a TVS X, the convex hull (respectively, closed convex hull, balanced hull, convex balanced hull) of S, denoted by co S (respectively, {\displaystyle {\overline {\operatorname {co} }}S}, bal S, cobal S), is the smallest convex (respectively, closed convex, balanced, convex balanced) subset of X containing S.

Any vector space X endowed with the trivial topology (also called the indiscrete topology) is a locally convex TVS (and of course, it is the coarsest such topology). This topology is Hausdorff if and only if X = {0}. The indiscrete topology makes any vector space into a complete pseudometrizable locally convex TVS. In contrast, the discrete topology forms a vector topology on X if and only if X = {0}. This follows from the fact that every topological vector space is a connected space.

If X is a real or complex vector space and if 𝒫 is the set of all seminorms on X, then the locally convex TVS topology, denoted by τ_lc, that 𝒫 induces on X is called the finest locally convex topology on X.[37] This topology may also be described as the TVS-topology on X having as a neighborhood base at the origin the set of all absorbing disks in X.[37] Any locally convex TVS-topology on X is necessarily a subset of τ_lc. (X, τ_lc) is Hausdorff.[15] Every linear map from (X, τ_lc) into another locally convex TVS is necessarily continuous.[15] In particular, every linear functional on (X, τ_lc) is continuous and every vector subspace of X is closed in (X, τ_lc);[15] therefore, if X is infinite dimensional, then (X, τ_lc) is not pseudometrizable (and thus not metrizable).[37] Moreover, τ_lc is the only Hausdorff locally convex topology on X with the property that any linear map from it into any Hausdorff locally convex space is continuous.[38] The space (X, τ_lc) is a bornological space.[39]

Every normed space is a Hausdorff locally convex space, and much of the theory of locally convex spaces generalizes parts of the theory of normed spaces. The family of seminorms can be taken to be the single norm. Every Banach space is a complete Hausdorff locally convex space; in particular, the L^p spaces with p ≥ 1 are locally convex.

More generally, every Fréchet space is locally convex. A Fréchet space can be defined as a complete locally convex space with a separated countable family of seminorms.
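For a separated countable family of seminorms (p_n), one standard explicit choice of a translation-invariant metric inducing the same topology (and complete exactly when the space is) is the following textbook formula, included here as an illustration rather than drawn from this article's citations:

```latex
\[
  d(x, y) \;=\; \sum_{n=1}^{\infty} 2^{-n}\,
  \frac{p_n(x - y)}{1 + p_n(x - y)} .
\]
% Each summand is at most 2^{-n}, so the series converges; d(x, y) = 0
% forces p_n(x - y) = 0 for all n, which by separatedness gives x = y.
```

A short numerical sketch of this metric in Python, using the coordinate seminorms p_n(x) = |x_n| of the sequence-space example discussed next (the helper name frechet_dist is hypothetical and not part of any library):

```python
def frechet_dist(x, y, terms=50):
    """Approximate d(x, y) = sum_n 2**-(n+1) * s_n / (1 + s_n), s_n = |x_n - y_n|.

    Truncating at `terms` coordinates is enough for an approximation,
    since the tail of the series is bounded by 2**-terms.
    """
    d = 0.0
    for n in range(terms):
        xn = x[n] if n < len(x) else 0.0   # pad finite sequences with zeros
        yn = y[n] if n < len(y) else 0.0
        s = abs(xn - yn)                   # the seminorm p_n applied to x - y
        d += 2.0 ** -(n + 1) * s / (1.0 + s)
    return d

# Coordinatewise convergence x_k -> 0 drives the metric to 0:
for k in (1, 10, 100, 1000):
    xk = [1.0 / k] * 50
    print(k, frechet_dist(xk, []))  # printed distances shrink toward 0
```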
The space ℝ^ω of real-valued sequences, with the family of seminorms given by {\displaystyle p_{i}\left(\left\{x_{n}\right\}_{n}\right)=\left|x_{i}\right|,\qquad i\in \mathbb {N} }, is locally convex. This countable family of seminorms is separated and the space is complete, so this is a Fréchet space, which is not normable. This is also the limit topology of the spaces ℝ^n, embedded in ℝ^ω in the natural way, by completing finite sequences with infinitely many 0s.

Given any vector space X and a collection F of linear functionals on it, X can be made into a locally convex topological vector space by giving it the weakest topology making all linear functionals in F continuous. This is known as the weak topology or the initial topology determined by F. The collection F may be the algebraic dual of X or any other collection. The family of seminorms in this case is given by {\displaystyle p_{f}(x)=|f(x)|} for all f in F.

Spaces of differentiable functions give other non-normable examples. Consider the space of smooth functions {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {C} } such that {\displaystyle \sup _{x}\left|x^{a}D_{b}f\right|<\infty ,} where a and b are multi-indices. The family of seminorms defined by {\displaystyle p_{a,b}(f)=\sup _{x}\left|x^{a}D_{b}f(x)\right|} is separated and countable, and the space is complete, so this metrizable space is a Fréchet space. It is known as the Schwartz space, or the space of functions of rapid decrease, and its dual space is the space of tempered distributions.

An important function space in functional analysis is the space D(U) of smooth functions with compact support in U ⊆ ℝ^n. A more detailed construction is needed for the topology of this space because the space {\displaystyle C_{0}^{\infty }(U)} is not complete in the uniform norm. The topology on D(U) is defined as follows: for any fixed compact set K ⊆ U, the space {\displaystyle C_{0}^{\infty }(K)} of functions {\displaystyle f\in C_{0}^{\infty }} with supp(f) ⊆ K is a Fréchet space with countable family of seminorms {\displaystyle \|f\|_{m}=\sup _{k\leq m}\sup _{x}\left|D^{k}f(x)\right|} (these are actually norms, and the completion of the space {\displaystyle C_{0}^{\infty }(K)} with the {\displaystyle \|\cdot \|_{m}} norm is a Banach space {\displaystyle D^{m}(K)}). Given any collection {\displaystyle \left(K_{a}\right)_{a\in A}} of compact sets, directed by inclusion and such that their union equals U, the {\displaystyle C_{0}^{\infty }\left(K_{a}\right)} form a direct system, and D(U) is defined to be the limit of this system. Such a limit of Fréchet spaces is known as an LF space. More concretely, D(U) is the union of all the {\displaystyle C_{0}^{\infty }\left(K_{a}\right)} with the strongest locally convex topology which makes each inclusion map {\displaystyle C_{0}^{\infty }\left(K_{a}\right)\hookrightarrow D(U)} continuous. This space is locally convex and complete. However, it is not metrizable, and so it is not a Fréchet space. The dual space of {\displaystyle D\left(\mathbb {R} ^{n}\right)} is the space of distributions on ℝ^n.
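Returning to the Schwartz space defined above, the classic example of a function of rapid decrease is the Gaussian; the membership check is routine and is included here only as an illustration:

```latex
% For f(x) = e^{-x^2} on R, each derivative has the form
% f^{(b)}(x) = P_b(x)\, e^{-x^2} for some polynomial P_b, so
\[
  p_{a,b}(f) = \sup_{x} \left| x^a P_b(x)\, e^{-x^2} \right| < \infty
\]
% for all a, b, since a polynomial times a Gaussian is bounded.
% Hence e^{-x^2} belongs to the Schwartz space.
```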
More abstractly, given a topological space X, the space C(X) of continuous (not necessarily bounded) functions on X can be given the topology of uniform convergence on compact sets. This topology is defined by the seminorms {\displaystyle \varphi _{K}(f)=\max\{|f(x)|:x\in K\}} (as K varies over the directed set of all compact subsets of X). When X is locally compact (for example, an open set in ℝ^n) the Stone–Weierstrass theorem applies—in the case of real-valued functions, any subalgebra of C(X) that separates points and contains the constant functions (for example, the subalgebra of polynomials) is dense.

Many topological vector spaces are locally convex. Examples of spaces that lack local convexity include the following:

Both examples have the property that any continuous linear map to the real numbers is 0. In particular, their dual space is trivial, that is, it contains only the zero functional.

Theorem[40]—Let T : X → Y be a linear operator between TVSs, where Y is locally convex (note that X need not be locally convex). Then T is continuous if and only if for every continuous seminorm q on Y, there exists a continuous seminorm p on X such that q ∘ T ≤ p.

Because locally convex spaces are topological spaces as well as vector spaces, the natural functions to consider between two locally convex spaces are continuous linear maps. Using the seminorms, a necessary and sufficient criterion for the continuity of a linear map can be given that closely resembles the more familiar boundedness condition found for Banach spaces. Given locally convex spaces X and Y with families of seminorms {\displaystyle \left(p_{\alpha }\right)_{\alpha }} and {\displaystyle \left(q_{\beta }\right)_{\beta }} respectively, a linear map T : X → Y is continuous if and only if for every β, there exist α_1, …, α_n and M > 0 such that for all v ∈ X, {\displaystyle q_{\beta }(Tv)\leq M\left(p_{\alpha _{1}}(v)+\dotsb +p_{\alpha _{n}}(v)\right).}

In other words, each seminorm of the range of T is bounded above by some finite sum of seminorms in the domain. If the family {\displaystyle \left(p_{\alpha }\right)_{\alpha }} is a directed family, and it can always be chosen to be directed as explained above, then the formula becomes even simpler and more familiar: {\displaystyle q_{\beta }(Tv)\leq Mp_{\alpha }(v).}

The class of all locally convex topological vector spaces forms a category with continuous linear maps as morphisms.
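Two standard instances of this criterion, written out for concreteness under the seminorm families defined earlier in this article (illustrations, not claims sourced to its references); the second anticipates the functional criterion stated in the theorem that follows:

```latex
% 1. Differentiation T f = f' on the Schwartz space S(R), with seminorms
%    p_{a,b}(f) = \sup_x |x^a f^{(b)}(x)|:
\[
  p_{a,b}\!\left(f'\right) = \sup_x \left| x^a f^{(b+1)}(x) \right|
  = p_{a,b+1}(f) ,
\]
% so each range seminorm of T is dominated by a single domain seminorm
% with M = 1, and T is continuous.
%
% 2. On C(X) with the compact-convergence seminorms \varphi_K, the
%    evaluation functional \delta_{x_0}(f) = f(x_0) satisfies
\[
  |\delta_{x_0}(f)| = |f(x_0)| \le \varphi_K(f)
  \quad \text{for any compact set } K \text{ containing } x_0 ,
\]
% hence \delta_{x_0} is continuous.
```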
Theorem[40]—If X is a TVS (not necessarily locally convex) and if f is a linear functional on X, then f is continuous if and only if there exists a continuous seminorm p on X such that |f| ≤ p.

If X is a real or complex vector space, f is a linear functional on X, and p is a seminorm on X, then |f| ≤ p if and only if f ≤ p.[41] If f is a non-zero linear functional on a real vector space X and if p is a seminorm on X, then f ≤ p if and only if {\displaystyle f^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .}[15]

Let n ≥ 1 be an integer, let X_1, …, X_n be TVSs (not necessarily locally convex), let Y be a locally convex TVS whose topology is determined by a family 𝒬 of continuous seminorms, and let {\displaystyle M:\prod _{i=1}^{n}X_{i}\to Y} be a multilinear operator that is linear in each of its n coordinates. The following are equivalent:
https://en.wikipedia.org/wiki/Locally_convex_topological_vector_space
Mesirah (or mesira, lit. 'to hand over') is the action in which one Jew reports the conduct of another Jew to a non-rabbinic authority in a manner and under the circumstances forbidden by rabbinic law.[1] This does not necessarily apply to reporting legitimate crimes to a responsible authority, but it does apply to turning over a Jew to an abusive authority, or to a legitimate one that would punish the criminal in ways seen as excessive by the Jewish community. In any case, "excessive" punishment by non-Jews may be permissible if a precept of the Torah has been violated.[2]

The term for an individual who commits mesirah is moser (Hebrew: מוסר) or mossur.[2] A person who repeatedly violates this law by informing on his fellow Jews is considered subject to din moser (lit. 'law of the informer'), which is analogous to din rodef in that both prescribe death for the offender,[1] at least in theory.[3] According to some, in some circumstances the offender may be killed without warning.[1]

The source of the ban comes from the Bava Kamma (Hebrew: בבא קמא) section of the Babylonian Talmud.[4][5][6] The law was most likely instigated to ease Jewish life under Roman or Persian rule. This law is discussed in the Babylonian Talmud, in Maimonides, and in the Shulchan Aruch.[7] The Shulchan Aruch, however, states that Jews should testify against each other in gentile court in cases where it is obvious that they would otherwise be covering up for each other.[8][9]

Maimonides states:

Whoever adjudicates in a non-Jewish court ... is wicked and it is as though he has reviled, blasphemed and rebelled against the law of Moses.[10]

Maimonides further explains: "It is forbidden to hand over a Jew to the heathen, neither his person nor his goods, even if he is wicked and a sinner, even if he causes distress and pain to fellow-Jews. Whoever hands over a Jew to the heathen has no part in the next world. It is permitted to kill a moser wherever he is. It is even permitted to kill him before he has handed over (a fellow Jew)."[11]

According to Michael Broyde, there are many different opinions among 20th-century rabbis as to the extent and circumstances in which mesirah is still valid in modern times.[1] Chaim Kanievsky, a leading Israeli rabbi and posek in Haredi society, ruled that reporting instances of sexual child abuse to the police is consistent with Jewish law.[12][13] Hershel Schachter concurred, stating that abuse cases should be reported in full to the civil authorities.[14]

According to The Times of Israel and a Channel 4 investigation, the concept of mesirah was used by a Haredi Jewish leader to protect community members investigated for child molestation from police investigation.[15][16]

The mesirah doctrine came under intense public scrutiny in Australia in early 2015 as a result of evidence given to the Royal Commission into Institutional Responses to Child Sexual Abuse relating to an alleged long-running and systematic cover-up of child sexual abuse and the institutional protection of perpetrators at the exclusive Melbourne boys' school Yeshiva College. On 28 January 2015 Fairfax Media reported that secret tape recordings and emails had been disclosed, revealing that members of Australia's Orthodox Jewish community who assisted police investigations into alleged child sexual abuse were pressured to remain silent on the matter.
Criminal barrister Alex Lewenberg was alleged to have been "disappointed" with, and to have berated, a Jew who had been a victim of a Jewish sex offender and whom he subsequently regarded as a mossur for breaking with the mesirah tradition.[17] Lewenberg was subsequently found guilty of professional misconduct.[18]

In February 2015 Zephaniah Waks, an adherent of the ultra-Orthodox Hasidic Chabad sect in Melbourne, Australia, testified in front of the Royal Commission. He stated that following his discovery that one of his sons had been sexually abused by David Kramer, a teacher at their school, Yeshiva College, he confronted the school's principal, Abraham Glick, and demanded that Kramer be sacked. Waks told of his shock when he learned a few days later that Kramer was still working at the school. He again confronted Glick, who then claimed that Kramer had admitted his guilt "because he wanted to be caught", but that the school could not dismiss him because, as Glick claimed, Kramer was at risk of self-harm. Waks also told the Commission that despite his anger, he felt constrained from going to the authorities:

I thought this was absolutely outrageous, however if I reported this to the police I would be in breach of the Jewish principle of mesirah.

He added that the concept of mesirah prevented Chabad members from going to secular authorities:

At the very least, the breach of mesirah almost certainly always leads to shunning and intimidation within the Jewish community and would almost certainly damage marriage prospects of your children.[19]

Giving evidence to the Commission on the day before his father, Menachem (Manny) Waks, one of three children from the Waks family who were sexually abused by staff at Yeshiva College, testified that after breaking mesirah by going public about his abuse, he and his family had been ostracised by rabbinical leaders, shunned by his community and subjected to a sustained campaign of abuse, intimidation and threats, which eventually forced Waks to leave Australia with his wife and children. He also testified about how members of the Chabad community had pressured him to abandon his advocacy:

I was in fact contacted by several considered community members, and they said to me that the anti-Semites are having a field day with my testimony and my publicity around this issue, and that if I cared about the community, I'd cease doing that straight away.

Counsel Assisting the commission then asked Waks how he felt having been accused of being an informer:

I am appalled by it obviously, because the concept of 'Mesirah' really, you can become a death target. Taken at its literal meaning, you become potentially a target who is legitimate to be murdered, because you've gone and cooperated with the authorities.
Now, I've never felt threatened for my life, but it does highlight the severity in which this concept is held.[20]

In December 2017, the Commission's final report included a recommendation to Jewish institutions:

All Jewish institutions in Australia should ensure that their complaint handling policies explicitly state that the halachic concepts of mesirah, moser and lashon hara do not apply to the communication and reporting of allegations of child sexual abuse to police and other civil authorities.[21]

Rabbinic courts in Israel have issued writs calling for social exclusion of Jews bringing legal issues to Israel's civil courts.[22]

Mesirah has been cited as one of the main reasons for the gross underreporting of sexual abuse cases in Brooklyn's Haredi community.[23][24] It has been used to dissuade Jewish auditors from reporting other Jews to the Internal Revenue Service for tax fraud.[11]
https://en.wikipedia.org/wiki/Mesirah
During the 2010s, international media reports revealed new operational details about the Anglophone cryptographic agencies' global surveillance[1] of both foreign and domestic nationals. The reports mostly relate to top secret documents leaked by ex-NSA contractor Edward Snowden. The documents consist of intelligence files relating to the U.S. and other Five Eyes countries.[2] In June 2013, the first of Snowden's documents were published, with further selected documents released to various news outlets through the year.

These media reports disclosed several secret treaties signed by members of the UKUSA community in their efforts to implement global surveillance. For example, Der Spiegel revealed how the German Federal Intelligence Service (German: Bundesnachrichtendienst; BND) transfers "massive amounts of intercepted data to the NSA",[3] while Swedish Television revealed that the National Defence Radio Establishment (FRA) provided the NSA with data from its cable collection, under a secret agreement signed in 1954 for bilateral cooperation on surveillance.[4] Other security and intelligence agencies involved in the practice of global surveillance include those in Australia (ASD), Britain (GCHQ), Canada (CSE), Denmark (PET), France (DGSE), Germany (BND), Italy (AISE), the Netherlands (AIVD), Norway (NIS), Spain (CNI), Switzerland (NDB) and Singapore (SID), as well as Israel (ISNU), which receives raw, unfiltered data of U.S. citizens from the NSA.[5][6][7][8][9][10][11][12]

On June 14, 2013, United States prosecutors charged Edward Snowden with espionage and theft of government property. In late July 2013, he was granted one year of temporary asylum by the Russian government,[13] contributing to a deterioration of Russia–United States relations.[14][15] Toward the end of October 2013, the British Prime Minister David Cameron warned The Guardian not to publish any more leaks, or it would receive a DA-Notice.[16] In November 2013, a criminal investigation of the disclosure was undertaken by Britain's Metropolitan Police Service.[17] In December 2013, The Guardian editor Alan Rusbridger said: "We have published I think 26 documents so far out of the 58,000 we've seen."[18]

The extent to which the media reports responsibly informed the public is disputed. In January 2014, Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light",[19] and critics such as Sean Wilentz have noted that many of the Snowden documents do not concern domestic surveillance.[20] The U.S. and British defense establishments weigh the strategic harm in the period following the disclosures more heavily than their civic public benefit. In its first assessment of these disclosures, the Pentagon concluded that Snowden committed the biggest "theft" of U.S. secrets in the history of the United States.[21] Sir David Omand, a former director of GCHQ, described Snowden's disclosure as the "most catastrophic loss to British intelligence ever".[22]

Snowden obtained the documents while working for Booz Allen Hamilton, one of the largest contractors for defense and intelligence in the United States.[2] The initial simultaneous publication in June 2013 by The Washington Post and The Guardian[23] continued throughout 2013.
A small portion of the estimated full cache of documents was later published by other media outlets worldwide, most notably The New York Times (United States), the Canadian Broadcasting Corporation, the Australian Broadcasting Corporation, Der Spiegel (Germany), O Globo (Brazil), Le Monde (France), L'espresso (Italy), NRC Handelsblad (the Netherlands), Dagbladet (Norway), El País (Spain), and Sveriges Television (Sweden).[24]

Barton Gellman, a Pulitzer Prize–winning journalist who led The Washington Post's coverage of Snowden's disclosures, summarized the leaks as follows:

Taken together, the revelations have brought to light a global surveillance system that cast off many of its historical restraints after the attacks of Sept. 11, 2001. Secret legal authorities empowered the NSA to sweep in the telephone, Internet and location records of whole populations.

The disclosure revealed specific details of the NSA's close cooperation with U.S. federal agencies such as the Federal Bureau of Investigation (FBI)[26][27] and the Central Intelligence Agency (CIA),[28][29] in addition to the agency's previously undisclosed financial payments to numerous commercial partners and telecommunications companies,[30][31][32] as well as its previously undisclosed relationships with international partners such as Britain,[33][34] France,[10][35] and Germany,[3][36] and its secret treaties with foreign governments, recently established for sharing intercepted data of each other's citizens.[5][37][38][39] The disclosures were made public over the course of several months starting in June 2013, by the press in several nations, from the trove leaked by the former NSA contractor Edward J. Snowden,[40] who obtained it while working for Booz Allen Hamilton.[2]

George Brandis, the Attorney-General of Australia, asserted that Snowden's disclosure was the "most serious setback for Western intelligence since the Second World War."[41]

As of December 2013, global surveillance programs include:

The NSA was also getting data directly from telecommunications companies code-named Artifice (Verizon), Lithium (AT&T), Serenade, SteelKnight, and X. The real identities of the companies behind these code names were not included in the Snowden document dump because they were protected as Exceptionally Controlled Information, which prevents wide circulation even to those (like Snowden) who otherwise have the necessary security clearance.[64][65]

Although the exact size of Snowden's disclosure remains unknown, the following estimates have been put forward by various government officials:

As a contractor of the NSA, Snowden was granted access to U.S. government documents along with top secret documents of several allied governments, via the exclusive Five Eyes network.[68] Snowden claims that he does not currently physically possess any of these documents, having surrendered all copies to journalists he met in Hong Kong.[69]

According to his lawyer, Snowden has pledged not to release any documents while in Russia, leaving the responsibility for further disclosures solely to journalists.[70] As of 2014, the following news outlets have accessed some of the documents provided by Snowden: Australian Broadcasting Corporation, Canadian Broadcasting Corporation, Channel 4, Der Spiegel, El País, El Mundo, L'espresso, Le Monde, NBC, NRC Handelsblad, Dagbladet, O Globo, South China Morning Post, Süddeutsche Zeitung, Sveriges Television, The Guardian, The New York Times, and The Washington Post.
In the 1970s, NSA analyst Perry Fellwock (under the pseudonym "Winslow Peck") revealed the existence of the UKUSA Agreement, which forms the basis of the ECHELON network, whose existence was revealed in 1988 by Lockheed employee Margaret Newsham.[71][72] Months before the September 11 attacks and during their aftermath, further details of the global surveillance apparatus were provided by various individuals, such as the former MI5 official David Shayler and the journalist James Bamford.[73][74]

In the aftermath of Snowden's revelations, the Pentagon concluded that Snowden committed the biggest theft of U.S. secrets in the history of the United States.[21] In Australia, the coalition government described the leaks as the most damaging blow dealt to Australian intelligence in history.[41] Sir David Omand, a former director of GCHQ, described Snowden's disclosure as the "most catastrophic loss to British intelligence ever".[22]

In April 2012, NSA contractor Edward Snowden began downloading documents.[87] That year, Snowden made his first contact with journalist Glenn Greenwald, then employed by The Guardian, and he contacted documentary filmmaker Laura Poitras in January 2013.[88][89]

In May 2013, Snowden went on temporary leave from his position at the NSA, citing the pretext of receiving treatment for his epilepsy. He traveled from Hawaii to Hong Kong at the end of May.[90][91] After the U.S.-based editor of The Guardian, Janine Gibson, held several meetings in New York City, she decided that Greenwald, Poitras and The Guardian's defence and intelligence correspondent Ewen MacAskill would fly to Hong Kong to meet Snowden.

On June 5, in the first media report based on the leaked material,[92] The Guardian exposed a top secret court order showing that the NSA had collected phone records from over 120 million Verizon subscribers.[93] Under the order, the numbers of both parties on a call, as well as the location data, unique identifiers, time of call, and duration of call, were handed over to the FBI, which turned over the records to the NSA.[93] According to The Wall Street Journal, the Verizon order is part of a controversial data program which seeks to stockpile records on all calls made in the U.S., but which does not collect information directly from T-Mobile US and Verizon Wireless, in part because of their foreign ownership ties.[94]

On June 6, 2013, the second media disclosure, the revelation of the PRISM surveillance program (which collects the e-mail, voice, text and video chats of foreigners and an unknown number of Americans from Microsoft, Google, Facebook, Yahoo, Apple and other tech giants),[95][96][97][98] was published simultaneously by The Guardian and The Washington Post.[86][99]

Der Spiegel revealed NSA spying on multiple diplomatic missions of the European Union and the United Nations Headquarters in New York.[100][101] During specific episodes within a four-year period, the NSA hacked several Chinese mobile-phone companies,[102] the Chinese University of Hong Kong and Tsinghua University in Beijing,[103] and the Asian fiber-optic network operator Pacnet.[104] Only Australia, Canada, New Zealand and the UK are explicitly exempted from NSA attacks, whose main target in the European Union is Germany.[105] A method of bugging encrypted fax machines used at an EU embassy is codenamed Dropmire.[106]

During the 2009 G-20 London summit, the British intelligence agency Government Communications Headquarters (GCHQ)
intercepted the communications of foreign diplomats.[107] In addition, GCHQ has been intercepting and storing mass quantities of fiber-optic traffic via Tempora.[108] Two principal components of Tempora are called "Mastering the Internet" (MTI) and "Global Telecoms Exploitation".[109] The data is preserved for three days, while metadata is kept for thirty days.[110] Data collected by GCHQ under Tempora is shared with the National Security Agency in the United States.[109]

From 2001 to 2011, the NSA collected vast amounts of metadata records detailing the email and internet usage of Americans via Stellar Wind,[111] which was later terminated due to operational and resource constraints. It was subsequently replaced by newer surveillance programs such as ShellTrumpet, which "processed its one trillionth metadata record" by the end of December 2012.[112]

The NSA follows specific procedures to target non-U.S. persons[113] and to minimize data collection from U.S. persons.[114] These court-approved policies allow the NSA to:[115][116]

According to Boundless Informant, over 97 billion pieces of intelligence were collected over a 30-day period ending in March 2013. Of those 97 billion sets of information, about 3 billion data sets originated from U.S. computer networks[117] and around 500 million metadata records were collected from German networks.[118]

In August 2013, it was revealed that the Bundesnachrichtendienst (BND) of Germany transfers massive amounts of metadata records to the NSA.[119] Der Spiegel disclosed that of all 27 member states of the European Union, Germany is the most targeted, due to the NSA's systematic monitoring and storage of Germany's telephone and Internet connection data. According to the magazine, the NSA stores data from around half a billion communications connections in Germany each month. This data includes telephone calls, emails, mobile-phone text messages and chat transcripts.[120]

The NSA gained massive amounts of information captured from the monitored data traffic in Europe. For example, in December 2012, the NSA gathered on an average day metadata from some 15 million telephone connections and 10 million Internet datasets. The NSA also monitored the European Commission in Brussels, and monitored EU diplomatic facilities in Washington and at the United Nations by placing bugs in offices as well as infiltrating computer networks.[121]

The U.S.
government, as part of its UPSTREAM data collection program, made deals with companies to ensure that it had access to, and hence the capability to surveil, undersea fiber-optic cables that deliver e-mails, Web pages, other electronic communications and phone calls from one continent to another at the speed of light.[122][123]

According to the Brazilian newspaper O Globo, the NSA spied on millions of emails and calls of Brazilian citizens,[124][125] while Australia and New Zealand have been involved in the joint operation of the NSA's global analytical system XKeyscore.[126][127] Among the numerous allied facilities contributing to XKeyscore are four installations in Australia and one in New Zealand. O Globo released an NSA document titled "Primary FORNSAT Collection Operations", which revealed the specific locations and codenames of the FORNSAT intercept stations in 2002.[128]

According to Edward Snowden, the NSA has established secret intelligence partnerships with many Western governments.[127] The Foreign Affairs Directorate (FAD) of the NSA is responsible for these partnerships, which, according to Snowden, are organized such that foreign governments can "insulate their political leaders" from public outrage in the event that these global surveillance partnerships are leaked.[129]

In an interview published by Der Spiegel, Snowden accused the NSA of being "in bed together with the Germans".[130] The NSA granted the German intelligence agencies BND (foreign intelligence) and BfV (domestic intelligence) access to its controversial XKeyscore system.[131] In return, the BND turned over copies of two systems named Mira4 and Veras, reported to exceed the NSA's SIGINT capabilities in certain areas.[3] Every day, massive amounts of metadata records are collected by the BND and transferred to the NSA via the Bad Aibling Station near Munich, Germany.[3] In December 2012 alone, the BND handed over 500 million metadata records to the NSA.[132][133]

In a document dated January 2013, the NSA acknowledged the efforts of the BND to undermine privacy laws:

The BND has been working to influence the German government to relax interpretation of the privacy laws to provide greater opportunities of intelligence sharing.[133]

According to an NSA document dated April 2013, Germany has now become the NSA's "most prolific partner".[133] Under a section of a separate document leaked by Snowden titled "Success Stories", the NSA acknowledged the efforts of the German government to expand the BND's international data sharing with partners:

The German government modifies its interpretation of the G-10 privacy law ... to afford the BND more flexibility in sharing protected information with foreign partners.[49]

In addition, the German government was well aware of the PRISM surveillance program long before Edward Snowden made details public. According to Angela Merkel's spokesman Steffen Seibert, there are two separate PRISM programs: one is used by the NSA, and the other is used by NATO forces in Afghanistan.[134] The two programs are "not identical".[134]

The Guardian revealed further details of the NSA's XKeyscore tool, which allows government analysts to search through vast databases containing emails, online chats and the browsing histories of millions of individuals without prior authorization.[135][136][137] Microsoft "developed a surveillance capability to deal" with the interception of encrypted chats on Outlook.com, within five months after the service went into testing. The
NSA had access to Outlook.com emails because "Prism collects this data prior to encryption."[45] In addition, Microsoft worked with the FBI to enable the NSA to gain access to its cloud storage service SkyDrive. An internal NSA document dating from August 3, 2012, described the PRISM surveillance program as a "team sport".[45]

The CIA's National Counterterrorism Center is allowed to examine federal government files for possible criminal behavior, even if there is no reason to suspect U.S. citizens of wrongdoing. Previously the center was barred from doing so unless a person was a terror suspect or related to an investigation.[138]

Snowden also confirmed that Stuxnet was cooperatively developed by the United States and Israel.[139] In a report unrelated to Edward Snowden, the French newspaper Le Monde revealed that France's DGSE was also undertaking mass surveillance, which it described as "illegal and outside any serious control".[140][141]

Documents leaked by Edward Snowden that were seen by Süddeutsche Zeitung (SZ) and Norddeutscher Rundfunk revealed that several telecom operators have played a key role in helping the British intelligence agency Government Communications Headquarters (GCHQ) tap into worldwide fiber-optic communications. The telecom operators are:

Each of them was assigned a particular area of the international fiber-optic network for which it was individually responsible. The following networks have been infiltrated by GCHQ: TAT-14 (EU-UK-US), Atlantic Crossing 1 (EU-UK-US), Circe South (France-UK), Circe North (Netherlands-UK), Flag Atlantic-1, Flag Europa-Asia, SEA-ME-WE 3 (Southeast Asia-Middle East-Western Europe), SEA-ME-WE 4 (Southeast Asia-Middle East-Western Europe), Solas (Ireland-UK), UK-France 3, UK-Netherlands 14, ULYSSES (EU-UK), Yellow (UK-US) and Pan European Crossing (EU-UK).[143]

Telecommunication companies that participated were "forced" to do so and had "no choice in the matter".[143] Some of the companies were subsequently paid by GCHQ for their participation in the infiltration of the cables.[143] According to the SZ, GCHQ has access to the majority of internet and telephone communications flowing throughout Europe, can listen to phone calls, read emails and text messages, and see which websites internet users from all around the world are visiting. It can also retain and analyse nearly the entire European internet traffic.[143]

GCHQ is collecting all data transmitted to and from the United Kingdom and Northern Europe via the undersea fibre-optic telecommunications cable SEA-ME-WE 3. The Security and Intelligence Division (SID) of Singapore co-operates with Australia in accessing and sharing communications carried by the SEA-ME-WE 3 cable. The Australian Signals Directorate (ASD) is also in a partnership with British, American and Singaporean intelligence agencies to tap undersea fibre-optic telecommunications cables that link Asia, the Middle East and Europe and carry much of Australia's international phone and internet traffic.[144]

The U.S. runs a top-secret surveillance program known as the Special Collection Service (SCS), which is based in over 80 U.S.
The NSA hacked the United Nations' video conferencing system in the summer of 2012, in violation of a UN agreement.[145][146]

The NSA is not just intercepting the communications of Americans who are in direct contact with foreigners targeted overseas, but also searching the contents of vast amounts of e-mail and text communications into and out of the country by Americans who mention information about foreigners under surveillance.[147] It also spied on Al Jazeera and gained access to its internal communications systems.[148]

The NSA has built a surveillance network that has the capacity to reach roughly 75% of all U.S. Internet traffic.[149][150][151] U.S. law-enforcement agencies use tools used by computer hackers to gather information on suspects.[152][153] An internal NSA audit from May 2012 identified 2,776 incidents, i.e. violations of the rules or court orders for surveillance of Americans and foreign targets in the U.S., in the period from April 2011 through March 2012, while U.S. officials stressed that any mistakes were not intentional.[154][155][156][157]

The FISA Court, which is supposed to provide critical oversight of the U.S. government's vast spying programs, has limited ability to do so and must trust the government to report when it improperly spies on Americans.[158] A legal opinion declassified on August 21, 2013, revealed that the NSA intercepted, for three years, as many as 56,000 electronic communications a year of Americans not suspected of having links to terrorism, before the FISA court that oversees surveillance found the operation unconstitutional in 2011.[159][160][161][162] Under the Corporate Partner Access project, major U.S. telecommunications providers receive hundreds of millions of dollars each year from the NSA.[163] Voluntary cooperation between the NSA and the providers of global communications took off during the 1970s under the cover name BLARNEY.[163]

A letter drafted by the Obama administration, specifically to inform Congress of the government's mass collection of Americans' telephone communications data, was withheld from lawmakers by leaders of the House Intelligence Committee in the months before a key vote affecting the future of the program.[164][165]

The NSA paid GCHQ over £100 million between 2009 and 2012; in exchange for these funds, GCHQ "must pull its weight and be seen to pull its weight." Documents referenced in the article explain that the weaker British laws regarding spying are "a selling point" for the NSA. GCHQ is also developing the technology to "exploit any mobile phone at any time."[166] Under a legal authority, the NSA has a secret backdoor into its databases gathered from large Internet companies, enabling it to search for U.S. citizens' emails and phone calls without a warrant.[167][168]

The Privacy and Civil Liberties Oversight Board urged the U.S. intelligence chiefs to draft stronger U.S. surveillance guidelines on domestic spying, after finding that several of those guidelines had not been updated in up to 30 years.[169][170]
U.S. intelligence analysts have deliberately broken rules designed to prevent them from spying on Americans, by choosing to ignore so-called "minimisation procedures" aimed at protecting privacy,[171][172] and have used the agency's enormous eavesdropping power to spy on love interests.[173]

After the U.S. Foreign Intelligence Surveillance Court ruled in October 2011 that some of the NSA's activities were unconstitutional, the agency paid millions of dollars to major internet companies to cover the extra costs incurred in their involvement with the PRISM surveillance program.[174]

"Mastering the Internet" (MTI) is part of the Interception Modernisation Programme (IMP) of the British government, which involves the insertion of thousands of DPI (deep packet inspection) "black boxes" at various internet service providers, as revealed by the British media in 2009.[175]

In 2013, it was further revealed that the NSA had made a £17.2 million financial contribution to the project, which is capable of vacuuming up signals from up to 200 fibre-optic cables at all physical points of entry into Great Britain.[176]

The Guardian and The New York Times reported on secret documents leaked by Snowden showing that the NSA has been in "collaboration with technology companies" as part of "an aggressive, multipronged effort" to weaken the encryption used in commercial software, and that GCHQ has a team dedicated to cracking "Hotmail, Google, Yahoo and Facebook" traffic.[183]

Germany's domestic security agency Bundesverfassungsschutz (BfV) systematically transfers the personal data of German residents to the NSA, CIA and seven other members of the United States Intelligence Community, in exchange for information and espionage software.[184][185][186] Israel, Sweden and Italy are also cooperating with American and British intelligence agencies. Under a secret treaty codenamed "Lustre", French intelligence agencies transferred millions of metadata records to the NSA.[62][63][187][188]

The Obama administration secretly won permission from the Foreign Intelligence Surveillance Court in 2011 to reverse restrictions on the National Security Agency's use of intercepted phone calls and e-mails, permitting the agency to search deliberately for Americans' communications in its massive databases. The searches take place under a surveillance program Congress authorized in 2008 under Section 702 of the Foreign Intelligence Surveillance Act. Under that law, the target must be a foreigner "reasonably believed" to be outside the United States, and the court must approve the targeting procedures in an order good for one year. But a warrant for each target would thus no longer be required. That means that communications with Americans could be picked up without a court first determining that there is probable cause that the people they were talking to were terrorists, spies or "foreign powers." The FISC also extended the length of time that the NSA is allowed to retain intercepted U.S. communications from five years to six years, with an extension possible for foreign intelligence or counterintelligence purposes. Both measures were done without public debate or any specific authority from Congress.[189]
A special branch of the NSA called "Follow the Money" (FTM) monitors international payments, banking and credit card transactions, and later stores the collected data in the NSA's own financial databank, "Tracfin".[190] The NSA monitored the communications of Brazil's president Dilma Rousseff and her top aides.[191] The agency also spied on Brazil's oil firm Petrobras as well as French diplomats, and gained access to the private network of the Ministry of Foreign Affairs of France and the SWIFT network.[192]

In the United States, the NSA uses the analysis of the phone call and e-mail logs of American citizens to create sophisticated graphs of their social connections that can identify their associates, their locations at certain times, their traveling companions and other personal information.[193] The NSA routinely shares raw intelligence data with Israel without first sifting it to remove information about U.S. citizens.[5][194]

In an effort codenamed GENIE, computer specialists can control foreign computer networks using "covert implants", a form of remotely transmitted malware on tens of thousands of devices annually.[195][196][197][198] As worldwide sales of smartphones began exceeding those of feature phones, the NSA decided to take advantage of the smartphone boom. This is particularly advantageous because the smartphone combines a myriad of data that would interest an intelligence agency, such as social contacts, user behavior, interests, location, photos, credit card numbers and passwords.[199]

An internal NSA report from 2010 stated that the spread of the smartphone has been occurring "extremely rapidly"—developments that "certainly complicate traditional target analysis."[199] According to the document, the NSA has set up task forces assigned to several smartphone manufacturers and operating systems, including Apple Inc.'s iPhone and iOS operating system, as well as Google's Android mobile operating system.[199] Similarly, Britain's GCHQ assigned a team to study and crack the BlackBerry.[199]

Under the heading "iPhone capability", the document notes that there are smaller NSA programs, known as "scripts", that can perform surveillance on 38 different features of the iOS 3 and iOS 4 operating systems. These include the mapping feature, voicemail and photos, as well as Google Earth, Facebook and Yahoo! Messenger.[199]

On September 9, 2013, an internal NSA presentation on iPhone Location Services was published by Der Spiegel. One slide shows scenes from Apple's 1984-themed television commercial alongside the words "Who knew in 1984..."; another shows Steve Jobs holding an iPhone, with the text "...that this would be big brother..."; and a third shows happy consumers with their iPhones, completing the question with "...and the zombies would be paying customers?"[200]

On October 4, 2013, The Washington Post and The Guardian jointly reported that the NSA and GCHQ had made repeated attempts to spy on anonymous Internet users who have been communicating in secret via the anonymity network Tor. Several of these surveillance operations involved the implantation of malicious code into the computers of Tor users who visited particular websites. The NSA and GCHQ had partly succeeded in blocking access to the anonymous network, diverting Tor users to insecure channels. The government agencies were also able to uncover the identity of some anonymous Internet users.[201][202][203][204]
The Communications Security Establishment (CSE) has been using a program called Olympia to map the communications of Brazil's Mines and Energy Ministry, by targeting the metadata of phone calls and emails to and from the ministry.[205][206]

The Australian federal government knew about the PRISM surveillance program months before Edward Snowden made details public.[207][208]

The NSA gathered hundreds of millions of contact lists from personal e-mail and instant messaging accounts around the world. The agency did not target individuals. Instead, it collected contact lists in large numbers that amount to a sizable fraction of the world's e-mail and instant messaging accounts. Analysis of that data enables the agency to search for hidden connections and to map relationships within a much smaller universe of foreign intelligence targets.[209][210][211][212]

The NSA monitored the public email account of former Mexican president Felipe Calderón (thus gaining access to the communications of high-ranking cabinet members), the emails of several high-ranking members of Mexico's security forces, and the text and mobile phone communications of Mexican president Enrique Peña Nieto.[213][214] The NSA tries to gather cellular and landline phone numbers—often obtained from American diplomats—for as many foreign officials as possible. The contents of the phone calls are stored in computer databases that can regularly be searched using keywords.[215][216]

The NSA has been monitoring the telephone conversations of 35 world leaders.[217] The U.S. government's first public acknowledgment that it tapped the phones of world leaders was reported on October 28, 2013, by The Wall Street Journal, after an internal U.S. government review turned up NSA monitoring of some 35 world leaders.[218] GCHQ has tried to keep its mass surveillance program a secret because it feared a "damaging public debate" on the scale of its activities, which could lead to legal challenges against them.[219]

The Guardian revealed that the NSA had been monitoring the telephone conversations of 35 world leaders after being given the numbers by an official in another U.S. government department. A confidential memo revealed that the NSA encouraged senior officials in such departments as the White House, State and the Pentagon to share their "Rolodexes" so the agency could add the telephone numbers of leading foreign politicians to its surveillance systems. Reacting to the news, German leader Angela Merkel, arriving in Brussels for an EU summit, accused the U.S. of a breach of trust, saying: "We need to have trust in our allies and partners, and this must now be established once again. I repeat that spying among friends is not at all acceptable against anyone, and that goes for every citizen in Germany."[217] In 2010 the NSA collected data on ordinary Americans' cellphone locations, but later discontinued the program because it had no "operational value."[220]

Under Britain's MUSCULAR programme, the NSA and GCHQ have secretly broken into the main communications links that connect Yahoo and Google data centers around the world, and have thereby gained the ability to collect metadata and content at will from hundreds of millions of user accounts.[221][222][223][224]

The mobile phone of German Chancellor Angela Merkel might have been tapped by U.S. intelligence.[225][226][227][228]
According to Der Spiegel, this monitoring goes back to 2002[229][230] and ended in the summer of 2013,[218] while The New York Times reported that Germany has evidence that the NSA's surveillance of Merkel began during George W. Bush's tenure.[231] After learning from Der Spiegel magazine that the NSA had been listening in to her personal mobile phone, Merkel compared the snooping practices of the NSA with those of the Stasi.[232] In March 2014, Der Spiegel reported that Merkel had also been placed on an NSA surveillance list alongside 122 other world leaders.[233]

On October 31, 2013, Hans-Christian Ströbele, a member of the German Bundestag who visited Snowden in Russia, reported on Snowden's willingness to provide details of the NSA's espionage program.[234]

A highly sensitive signals intelligence collection program known as Stateroom involves the interception of radio, telecommunications and internet traffic. It is operated out of the diplomatic missions of the Five Eyes (Australia, Britain, Canada, New Zealand, the United States) in numerous locations around the world. The program conducted at U.S. diplomatic missions is run in concert by the U.S. intelligence agencies NSA and CIA in a joint venture group called the "Special Collection Service" (SCS), whose members work undercover in shielded areas of American embassies and consulates, where they are officially accredited as diplomats and as such enjoy special privileges. Under diplomatic protection, they are able to look and listen unhindered. The SCS, for example, used the American embassy near the Brandenburg Gate in Berlin to monitor communications in Germany's government district, with its parliament and the seat of the government.[228][235][236][237]

Under the Stateroom surveillance programme, Australia operates clandestine surveillance facilities to intercept phone calls and data across much of Asia.[236][238]

In France, the NSA targeted people belonging to the worlds of business, politics and the French state administration. The NSA monitored and recorded the content of telephone communications and the history of the connections of each target, i.e. the metadata.[239][240] The actual surveillance operation was performed by French intelligence agencies on behalf of the NSA.[62][241] The cooperation between France and the NSA was confirmed by the Director of the NSA, Keith B. Alexander, who asserted that foreign intelligence services collected phone records in "war zones" and "other areas outside their borders" and provided them to the NSA.[242]

The French newspaper Le Monde also disclosed new PRISM and Upstream slides (see pages 4, 7 and 8) coming from the "PRISM/US-984XN Overview" presentation.[243]

In Spain, the NSA intercepted the telephone conversations, text messages and emails of millions of Spaniards, and spied on members of the Spanish government.[244] Between December 10, 2012, and January 8, 2013, the NSA collected metadata on 60 million telephone calls in Spain.[245]

According to documents leaked by Snowden, the surveillance of Spanish citizens was jointly conducted by the NSA and the intelligence agencies of Spain.[246][247]

The New York Times reported that the NSA carries out an eavesdropping effort, dubbed Operation Dreadnought, against the Iranian leader Ayatollah Ali Khamenei.
During Khamenei's 2009 visit to Iranian Kurdistan, the agency collaborated with GCHQ and the U.S. National Geospatial-Intelligence Agency, collecting radio transmissions between aircraft and airports, examining Khamenei's convoy with satellite imagery, and enumerating military radar stations. According to the story, an objective of the operation is "communications fingerprinting": the ability to distinguish Khamenei's communications from those of other people in Iran.[248]

The same story revealed an operation code-named Ironavenger, in which the NSA intercepted e-mails sent between a country allied with the United States and the government of "an adversary". The ally was conducting a spear-phishing attack: its e-mails contained malware. The NSA gathered documents and login credentials belonging to the enemy country, along with knowledge of the ally's capabilities for attacking computers.[248]

According to the British newspaper The Independent, the British intelligence agency GCHQ maintains a listening post on the roof of the British Embassy in Berlin that is capable of intercepting mobile phone calls, wi-fi data and long-distance communications all over the German capital, including adjacent government buildings such as the Reichstag (seat of the German parliament) and the Chancellery (seat of Germany's head of government) clustered around the Brandenburg Gate.[249]

Operating under the code-name "Quantum Insert", GCHQ set up a fake website masquerading as LinkedIn, a social website used for professional networking, as part of its efforts to install surveillance software on the computers of the telecommunications operator Belgacom.[250][251][252] In addition, the headquarters of the oil cartel OPEC were infiltrated by GCHQ as well as the NSA, which bugged the computers of nine OPEC employees and monitored the General Secretary of OPEC.[250]

For more than three years, GCHQ has been using an automated monitoring system code-named "Royal Concierge" to infiltrate the reservation systems of at least 350 prestigious hotels in many different parts of the world, in order to target, search and analyze reservations to detect diplomats and government officials.[253] First tested in 2010, the aim of "Royal Concierge" is to track down the travel plans of diplomats, and it is often supplemented with surveillance methods related to human intelligence (HUMINT). Other covert operations include the wiretapping of room telephones and fax machines used in targeted hotels, as well as the monitoring of computers hooked up to the hotel network.[253]

In November 2013, the Australian Broadcasting Corporation and The Guardian revealed that the Australian Signals Directorate (DSD) had attempted to listen to the private phone calls of the president of Indonesia and his wife. The Indonesian foreign minister, Marty Natalegawa, confirmed that he and the president had contacted the ambassador in Canberra. Natalegawa said any tapping of Indonesian politicians' personal phones "violates every single decent and legal instrument I can think of—national in Indonesia, national in Australia, international as well".[254] Other high-ranking Indonesian politicians were also targeted by the DSD.

Carrying the title "3G impact and update", a classified presentation leaked by Snowden revealed the attempts of the ASD/DSD to keep pace with the rollout of 3G technology in Indonesia and across Southeast Asia.
The ASD/DSD motto placed at the bottom of each page reads: "Reveal their secrets—protect our own."[255]

Under a secret deal approved by British intelligence officials, the NSA has been storing and analyzing the internet and email records of British citizens since 2007. The NSA also proposed in 2005 a procedure for spying on the citizens of the UK and the other Five Eyes nations, even where the partner government has explicitly denied the U.S. permission to do so. Under the proposal, partner countries were to be informed neither about this particular type of surveillance nor about the procedure for doing so.[37]

Toward the end of November, The New York Times released an internal NSA report outlining the agency's efforts to expand its surveillance abilities.[256] The five-page document asserts that the law of the United States has not kept up with the needs of the NSA to conduct mass surveillance in the "golden age" of signals intelligence, but there are grounds for optimism because, in the NSA's own words:

The culture of compliance, which has allowed the American people to entrust NSA with extraordinary authorities, will not be compromised in the face of so many demands, even as we aggressively pursue legal authorities...[257]

The report, titled "SIGINT Strategy 2012–2016", also said that the U.S. will try to influence the "global commercial encryption market" through "commercial relationships", and emphasized the need to "revolutionize" the analysis of its vast data collection to "radically increase operational impact".[256]

On November 23, 2013, the Dutch newspaper NRC Handelsblad reported that the Netherlands was targeted by U.S. intelligence agencies in the immediate aftermath of World War II. This period of surveillance lasted from 1946 to 1968, and also included the interception of the communications of other European countries, including Belgium, France, West Germany and Norway.[258] The Dutch newspaper also reported that the NSA has infected more than 50,000 computer networks worldwide, often covertly, with malicious spy software designed to steal sensitive information, sometimes in cooperation with local authorities.[40][259]

According to classified documents leaked by Snowden, the Australian Signals Directorate (ASD), formerly known as the Defence Signals Directorate, had offered to share intelligence information it had collected with the other intelligence agencies of the UKUSA Agreement. Data shared with foreign countries included the "bulk, unselected, unminimized metadata" it had collected. The ASD provided such information on the condition that no Australian citizens were targeted. At the time, the ASD assessed that "unintentional collection [of metadata of Australian nationals] is not viewed as a significant issue". If a target was later identified as an Australian national, the ASD was required to be contacted to ensure that a warrant could be sought.
Consideration was given as to whether "medical, legal or religious information" would be automatically treated differently from other types of data; however, a decision was made that each agency would make such determinations on a case-by-case basis.[260] The leaked material does not specify where the ASD had collected the intelligence information from; however, Section 7(a) of the Intelligence Services Act 2001 (Commonwealth) states that the ASD's role is "...to obtain intelligence about the capabilities, intentions or activities of people or organizations outside Australia...".[261] As such, it is possible that the ASD's metadata intelligence holdings were focused on foreign intelligence collection and were within the bounds of Australian law.

The Washington Post revealed that the NSA has been tracking the locations of mobile phones from all over the world by tapping into the cables that connect mobile networks globally and that serve U.S. cellphones as well as foreign ones. In the process of doing so, the NSA collects more than five billion records of phone locations on a daily basis. This enables NSA analysts to map cellphone owners' relationships by correlating their patterns of movement over time with those of the thousands or millions of other phone users who cross their paths.[262][263][264][265]

The Washington Post also reported that both GCHQ and the NSA make use of location data and advertising tracking files generated through normal internet browsing (with cookies operated by Google, known as "Pref") to pinpoint targets for government hacking and to bolster surveillance.[266][267][268]

The Norwegian Intelligence Service (NIS), which cooperates with the NSA, has gained access to Russian targets in the Kola Peninsula and other civilian targets. In general, the NIS provides information to the NSA about "Politicians", "Energy" and "Armament".[269] A top secret memo of the NSA lists several years as milestones of the Norway–United States SIGINT agreement, the NORUS Agreement.

The NSA considers the NIS to be one of its most reliable partners. Both agencies also cooperate to crack the encryption systems of mutual targets. According to the NSA, Norway has made no objections to its requests to the NIS.[270]

On December 5, Sveriges Television reported that the National Defence Radio Establishment (FRA) has been conducting a clandestine surveillance operation in Sweden, targeting the internal politics of Russia. The operation was conducted on behalf of the NSA, which receives data handed over to it by the FRA.[271][272] The Swedish-American surveillance operation also targeted Russian energy interests as well as the Baltic states.[273] As part of the UKUSA Agreement, a secret treaty was signed in 1954 by Sweden with the United States, the United Kingdom, Canada, Australia and New Zealand regarding collaboration and intelligence sharing.[274]

As a result of Snowden's disclosures, the notion of Swedish neutrality in international politics was called into question.[275] In an internal document dating from the year 2006, the NSA acknowledged that its "relationship" with Sweden is "protected at the TOP SECRET level because of that nation's political neutrality."[276] Specific details of Sweden's cooperation with members of the UKUSA Agreement were also revealed.

According to documents leaked by Snowden, the Special Source Operations division of the NSA has been sharing information containing "logins, cookies, and GooglePREFID" with the Tailored Access Operations division of the NSA, as well as with Britain's GCHQ agency.[284]
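Such long-lived identifiers make convenient tracking handles precisely because they are stable across sessions. The following is a minimal sketch, assuming a simplified and hypothetical cookie layout rather than Google's actual format, of how an ID harvested from intercepted HTTP headers could link separate captures back to a single browser:

```python
# Toy selector extraction: the same cookie ID seen in different intercepted
# requests ties them to one browser. Layout and data are invented.
import re

PREF_ID = re.compile(r"PREF=ID=([0-9a-f]{16})")

def extract_selector(headers: str):
    """Pull the stable PREF ID, if any, out of a captured Cookie header."""
    m = PREF_ID.search(headers)
    return m.group(1) if m else None

sessions = {}  # selector -> observed requests
for capture in [
    "Host: www.google.com\r\nCookie: PREF=ID=1a2b3c4d5e6f7a8b:TM=1386000000",
    "Host: news.example.org\r\nCookie: PREF=ID=1a2b3c4d5e6f7a8b:TM=1386100000",
]:
    sel = extract_selector(capture)
    if sel:
        sessions.setdefault(sel, []).append(capture.split("\r\n")[0])

print(sessions)  # both requests link back to one browser via the cookie ID
```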
During the 2010 G-20 Toronto summit, the U.S. embassy in Ottawa was transformed into a security command post during a six-day spying operation that was conducted by the NSA and closely coordinated with the Communications Security Establishment Canada (CSEC). The goals of the spying operation were, among others, to obtain information on international development and banking reform, and to counter trade protectionism, in support of "U.S. policy goals".[285] On behalf of the NSA, the CSEC has set up covert spying posts in 20 countries around the world.[8]

In Italy, the Special Collection Service of the NSA maintains two separate surveillance posts, in Rome and Milan.[286] According to a secret NSA memo dated September 2010, the Italian embassy in Washington, D.C. has been targeted by two spy operations of the NSA.

Due to concerns that terrorist or criminal networks may be secretly communicating via computer games, the NSA, GCHQ, CIA and FBI have been conducting surveillance and scooping up data from the networks of many online games, including massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft, as well as virtual worlds such as Second Life and the Xbox gaming console.[287][288][289][290]

The NSA has cracked the most commonly used cellphone encryption technology, A5/1. According to a classified document leaked by Snowden, the agency can "process encrypted A5/1" even when it has not acquired an encryption key.[291] In addition, the NSA uses various types of cellphone infrastructure, such as the links between carrier networks, to determine the location of a cellphone user tracked by Visitor Location Registers.[292]

US district court judge for the District of Columbia Richard Leon declared[293][294][295][296] on December 16, 2013, that the mass collection of the metadata of Americans' telephone records by the National Security Agency probably violates the Fourth Amendment prohibition of unreasonable searches and seizures.[297] Leon granted the request for a preliminary injunction that blocks the collection of phone data for two private plaintiffs (Larry Klayman, a conservative lawyer, and Charles Strange, father of a cryptologist killed in Afghanistan when his helicopter was shot down in 2011)[298] and ordered the government to destroy any of their records that had been gathered. But the judge stayed action on his ruling pending a government appeal, recognizing, in his 68-page opinion, the "significant national security interests at stake in this case and the novelty of the constitutional issues".[297]

However, federal judge William H. Pauley III in New York City ruled[299] that the U.S. government's global telephone data-gathering system is needed to thwart potential terrorist attacks, and that it can only work if everyone's calls are swept in. U.S. District Judge Pauley also ruled that Congress legally set up the program and that it does not violate anyone's constitutional rights. The judge also concluded that the telephone data being swept up by the NSA did not belong to telephone users, but to the telephone companies. He further ruled that when the NSA obtains such data from the telephone companies, and then probes into it to find links between callers and potential terrorists, this further use of the data was not even a search under the Fourth Amendment. He also concluded that the controlling precedent is Smith v. Maryland: "Smith's bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties," Judge Pauley wrote.[300][301][302][303]
Maryland: "Smith's bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties," Judge Pauley wrote.[300][301][302][303]The American Civil Liberties Union declared on January 2, 2012, that it will appeal Judge Pauley's ruling that NSA bulk the phone record collection is legal. "The government has a legitimate interest in tracking the associations of suspected terrorists, but tracking those associations does not require the government to subject every citizen to permanent surveillance," deputy ACLU legal director Jameel Jaffer said in a statement.[304] In recent years, American and British intelligence agencies conducted surveillance on more than 1,100 targets, including the office of an Israeli prime minister, heads of international aid organizations, foreign energy companies and a European Union official involved in antitrust battles with American technology businesses.[305] Acatalog of high-tech gadgets and software developed by the NSA'sTailored Access Operations(TAO) was leaked by the German news magazineDer Spiegel.[306]Dating from 2008, the catalog revealed the existence of special gadgets modified to capture computerscreenshotsandUSB flash drivessecretly fitted with radio transmitters to broadcast stolen data over the airwaves, and fake base stations intended to intercept mobile phone signals, as well as many other secret devices and software implants listed here: The Tailored Access Operations (TAO) division of the NSA intercepted the shipping deliveries of computers and laptops in order to install spyware and physical implants on electronic gadgets. This was done in close cooperation with the FBI and the CIA.[306][307][308][309]NSA officials responded to the Spiegel reports with a statement, which said: "Tailored Access Operations is a unique national asset that is on the front lines of enabling NSA to defend the nation and its allies. [TAO's] work is centred on computer network exploitation in support of foreign intelligence collection."[310] In a separate disclosure unrelated to Snowden, the FrenchTrésor public, which runs acertificate authority, was found to have issued fake certificates impersonatingGooglein order to facilitate spying on French government employees viaman-in-the-middle attacks.[311] The NSA is working to build a powerfulquantum computercapable of breaking all types of encryption.[314][315][316][317][318]The effort is part of a US$79.7 million research program known as "Penetrating Hard Targets". It involves extensive research carried out in large, shielded rooms known asFaraday cages, which are designed to preventelectromagnetic radiationfrom entering or leaving.[315]Currently, the NSA is close to producing basic building blocks that will allow the agency to gain "complete quantum control on twosemiconductorqubits".[315]Once a quantum computer is successfully built, it would enable the NSA to unlock the encryption that protects data held by banks, credit card companies, retailers, brokerages, governments and health care providers.[314] According toThe New York Times, the NSA is monitoring approximately 100,000 computers worldwide with spy software named Quantum. Quantum enables the NSA to conduct surveillance on those computers on the one hand, and can also create a digital highway for launching cyberattacks on the other hand. Among the targets are the Chinese and Russian military, but also trade institutions within the European Union. 
The NYT also reported that the NSA can access and alter computers which are not connected to the internet, using a secret technology in use by the NSA since 2008. The prerequisite is the physical insertion of radio frequency hardware by a spy, a manufacturer or an unwitting user. The technology relies on a covert channel of radio waves that can be transmitted from tiny circuit boards and USB cards inserted surreptitiously into the computers. In some cases, the signals are sent to a briefcase-size relay station that intelligence agencies can set up miles away from the target. The technology can also transmit malware back to the infected computer.[40]

Channel 4 and The Guardian revealed the existence of Dishfire, a massive database of the NSA that collects hundreds of millions of text messages on a daily basis.[319] GCHQ has been given full access to the database, which it uses to obtain personal information of Britons by exploiting a legal loophole.[320] Each day, the database receives and stores vast amounts of data. It is supplemented with an analytical tool known as the Prefer program, which processes SMS messages to extract other types of information, including contacts gleaned from missed call alerts.[321]

The Privacy and Civil Liberties Oversight Board report on mass surveillance was released on January 23, 2014. It recommends ending the bulk telephone metadata collection program (i.e., the collection of bulk phone records: phone numbers dialed, call times and durations, but not call content), creating a "Special Advocate" to be involved in some cases before the FISA court judge, and releasing future and past FISC decisions "that involve novel interpretations of FISA or other significant questions of law, technology or compliance."[322][323][324]

According to a joint disclosure by The New York Times, The Guardian, and ProPublica,[325][326][327][328] the NSA and GCHQ had begun working together to collect and store data from dozens of smartphone applications by 2007 at the latest. A 2008 GCHQ report, leaked by Snowden, asserts that "anyone using Google Maps on a smartphone is working in support of a GCHQ system". The NSA and GCHQ have traded recipes for various purposes, such as grabbing location data and journey plans that are made when a target uses Google Maps, and vacuuming up address books, buddy lists, phone logs and geographic data embedded in photos posted on the mobile versions of numerous social networks such as Facebook, Flickr, LinkedIn, Twitter, and other services. In a separate 20-page report dated 2012, GCHQ cited the popular smartphone game "Angry Birds" as an example of how an application could be used to extract user data. Taken together, such forms of data collection would allow the agencies to collect vital information about a user's life, including his or her home country, current location (through geolocation), age, gender, ZIP code, marital status, income, ethnicity, sexual orientation, education level, number of children, etc.[329][330]

A GCHQ document dated August 2012 provided details of the Squeaky Dolphin surveillance program, which enables GCHQ to conduct broad, real-time monitoring of various social media features and social media traffic, such as YouTube video views, the Like button on Facebook, and Blogspot/Blogger visits, without the knowledge or consent of the companies providing those social media features. The agency's "Squeaky Dolphin" program can collect, analyze and utilize YouTube, Facebook and Blogger data in specific situations in real time for analysis purposes. The program also collects the addresses from the billions of videos watched daily, as well as some user information, for analysis purposes.[331][332][333]
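A toy sketch of what broad, real-time tallying of social media traffic of this kind could look like follows (the URLs and event format below are invented for illustration; GCHQ's actual pipeline is not public):

```python
# Toy tally of video views observed in passively captured web traffic.
from collections import Counter

def video_id(url: str):
    """Extract a YouTube-style video ID from an observed 'watch' URL."""
    marker = "watch?v="
    return url.split(marker, 1)[1][:11] if marker in url else None

observed_urls = [
    "http://youtube.com/watch?v=dQw4w9WgXcQ",
    "http://youtube.com/watch?v=dQw4w9WgXcQ",
    "http://youtube.com/watch?v=abcdefghijk",
]

views = Counter(v for v in map(video_id, observed_urls) if v)
print(views.most_common())  # most-viewed videos within the observed traffic
```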
During the 2009 United Nations Climate Change Conference in Copenhagen, the NSA and its Five Eyes partners monitored the communications of the delegates of numerous countries. This was done to give their own policymakers a negotiating advantage.[334][335]

The Communications Security Establishment Canada (CSEC) has been tracking Canadian air passengers via free Wi-Fi services at a major Canadian airport. Passengers who exited the airport terminal continued to be tracked as they showed up at other Wi-Fi locations across Canada. In a CSEC document dated May 2012, the agency described how it had gained access to two communications systems with over 300,000 users in order to pinpoint a specific imaginary target. The operation was executed on behalf of the NSA as a trial run to test a new technology capable of tracking down "any target that makes occasional forays into other cities/regions." This technology was subsequently shared with Canada's Five Eyes partners – Australia, New Zealand, Britain, and the United States.[336][337][338][339]

According to research by Süddeutsche Zeitung and the TV network NDR, the mobile phone of former German chancellor Gerhard Schröder was monitored from 2002 onward, reportedly because of his government's opposition to military intervention in Iraq. The source of the latest information is a document leaked by Edward Snowden. The document, containing information about the National Sigint Requirement List (NSRL), had previously been interpreted as referring only to Angela Merkel's mobile. However, Süddeutsche Zeitung and NDR claim to have confirmation from NSA insiders that the surveillance authorisation pertains not to the individual, but to the political post – which in 2002 was still held by Schröder. According to research by the two media outlets, Schröder was placed as number 388 on the list, which contains the names of persons and institutions to be put under surveillance by the NSA.[340][341][342][343]

GCHQ launched a cyber-attack on the activist network "Anonymous", using denial-of-service attacks (DoS) to shut down a chatroom frequented by the network's members and to spy on them. The attack, dubbed Rolling Thunder, was conducted by a GCHQ unit known as the Joint Threat Research Intelligence Group (JTRIG). The unit successfully uncovered the true identities of several Anonymous members.[344][345][346][347]

The NSA's Section 215 bulk telephony metadata program, which seeks to stockpile records on all calls made in the U.S., is collecting less than 30 percent of all Americans' call records because of an inability to keep pace with the explosion in cellphone use, according to The Washington Post. The controversial program permits the NSA, after a warrant granted by the secret Foreign Intelligence Surveillance Court, to record the numbers, length and location of every call from the participating carriers.[348][349]

The Intercept reported that the U.S. government is using primarily NSA surveillance to target people for drone strikes overseas. In its report, The Intercept details the flawed methods that are used to locate targets for lethal drone strikes, resulting in the deaths of innocent people.[350]
According to The Washington Post, NSA analysts and collectors, i.e. NSA personnel who control electronic surveillance equipment, use the NSA's sophisticated surveillance capabilities to track individual targets geographically and in real time, while drones and tactical units aim their weaponry against those targets to take them out.[351]

An unnamed US law firm, reported to be Mayer Brown, was targeted by Australia's ASD. According to Snowden's documents, the ASD had offered to hand over these intercepted communications to the NSA. This allowed government authorities to be "able to continue to cover the talks, providing highly useful intelligence for interested US customers".[352][353]

NSA and GCHQ documents revealed that the anti-secrecy organization WikiLeaks and other activist groups were targeted for government surveillance and criminal prosecution. In particular, the IP addresses of visitors to WikiLeaks were collected in real time, and the US government urged its allies to file criminal charges against the founder of WikiLeaks, Julian Assange, due to his organization's publication of the Afghanistan war logs. The WikiLeaks organization was designated as a "malicious foreign actor".[354]

Quoting an unnamed NSA official in Germany, Bild am Sonntag reported that while President Obama's order to stop spying on Merkel was being obeyed, the focus had shifted to bugging other leading government and business figures, including Interior Minister Thomas de Maizière, a close confidant of Merkel. Caitlin Hayden, a security adviser to President Obama, was quoted in the newspaper report as saying: "The US has made clear it gathers intelligence in exactly the same way as any other states."[355][356]

The Intercept revealed that government agencies are infiltrating online communities and engaging in "false flag operations" to discredit targets, among them people who have nothing to do with terrorism or national security threats. The two main tactics currently used are the injection of all sorts of false material onto the internet in order to destroy the reputation of targets, and the use of social sciences and other techniques to manipulate online discourse and activism to generate outcomes considered desirable.[357][358][359][360]

The Guardian reported that Britain's surveillance agency GCHQ, with aid from the National Security Agency, intercepted and stored the webcam images of millions of internet users not suspected of wrongdoing. The surveillance program, codenamed Optic Nerve, collected still images of Yahoo webcam chats (one image every five minutes) in bulk and saved them to agency databases. The agency discovered "that a surprising number of people use webcam conversations to show intimate parts of their body to the other person", estimating that between 3% and 11% of the Yahoo webcam imagery harvested by GCHQ contains "undesirable nudity".[361]

The NSA has built an infrastructure which enables it to covertly hack into computers on a mass scale, by using automated systems that reduce the level of human oversight in the process. The NSA relies on an automated system codenamed TURBINE, which in essence enables the automated management and control of a large network of implants (a form of remotely transmitted malware on selected individual computer devices or in bulk on tens of thousands of devices). As quoted by The Intercept, TURBINE is designed to "allow the current implant network to scale to large size (millions of implants) by creating a system that does automated control implants by groups instead of individually."[362]
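The phrase "by groups instead of individually" describes a straightforward engineering idea: fan one operator decision out to many endpoints at once. A minimal sketch of group-based tasking follows, with entirely hypothetical names and structure and no claim to reflect TURBINE's actual design:

```python
# Toy group-tasking registry: one decision fans out to a whole group.
from collections import defaultdict

class ImplantRegistry:
    """Hypothetical registry that tasks endpoints by group, not one at a time."""

    def __init__(self):
        self.groups = defaultdict(set)  # group name -> endpoint IDs

    def register(self, endpoint_id: str, group: str) -> None:
        self.groups[group].add(endpoint_id)

    def task_group(self, group: str, task: str) -> list:
        """Issue a single task to every endpoint in a group in one call."""
        return [f"{eid}: {task}" for eid in sorted(self.groups[group])]

registry = ImplantRegistry()
for i in range(3):
    registry.register(f"endpoint-{i:04d}", group="routers")

# One operator decision is applied automatically to the whole group.
print(registry.task_group("routers", "report-config"))
```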
The NSA has shared many of its files on the use of implants with its counterparts in the so-called Five Eyes surveillance alliance – the United Kingdom, Canada, New Zealand, and Australia. Among other things, TURBINE and its control over the implants make the NSA capable of a wide range of automated exploitation tasks.

The TURBINE implants are linked to, and rely upon, a large network of clandestine surveillance "sensors" that the NSA has installed at locations across the world, including the agency's headquarters in Maryland and eavesdropping bases used by the agency in Misawa, Japan, and Menwith Hill, England. Codenamed TURMOIL, the sensors operate as a sort of high-tech surveillance dragnet, monitoring packets of data as they are sent across the Internet. When TURBINE implants exfiltrate data from infected computer systems, the TURMOIL sensors automatically identify the data and return it to the NSA for analysis. And when targets are communicating, the TURMOIL system can be used to send alerts or "tips" to TURBINE, enabling the initiation of a malware attack.

To identify surveillance targets, the NSA uses a series of data "selectors" as they flow across Internet cables. These selectors can include email addresses, IP addresses, or the unique "cookies" containing a username or other identifying information that are sent to a user's computer by websites such as Google, Facebook, Hotmail, Yahoo, and Twitter; unique Google advertising cookies that track browsing habits; unique encryption key fingerprints that can be traced to a specific user; and computer IDs that are sent across the Internet when a Windows computer crashes or updates.[363][364][365][366]

The CIA was accused by U.S. Senate Intelligence Committee Chairwoman Dianne Feinstein of spying on a stand-alone computer network established for the committee in its investigation of allegations of CIA abuse in a George W. Bush-era detention and interrogation program.[367]

A voice interception program codenamed MYSTIC began in 2009. Along with RETRO, short for "retrospective retrieval" (RETRO is a voice audio recording buffer that allows the retrieval of captured content up to 30 days into the past), the MYSTIC program is capable of recording "100 percent" of a foreign country's telephone calls, enabling the NSA to rewind and review conversations from up to 30 days earlier, along with the related metadata. With the capability to store up to 30 days of recorded conversations, MYSTIC enables the NSA to pull an instant history of a person's movements, associates and plans.[368][369][370][371][372][373]
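The "retrospective retrieval" concept amounts to a fixed-window rolling buffer: recordings older than the window are discarded, and anything newer can be pulled up after the fact. A minimal sketch of such a 30-day buffer follows, with invented data and no claim to reflect the actual MYSTIC/RETRO design:

```python
# Toy 30-day rolling buffer: old recordings expire, recent ones can be
# "rewound" for any phone number after the fact.
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)

class RollingBuffer:
    """Fixed-window store: anything older than WINDOW is discarded."""

    def __init__(self):
        self.calls = []  # (timestamp, phone number, audio)

    def record(self, when: datetime, number: str, audio: bytes) -> None:
        self.calls.append((when, number, audio))
        cutoff = when - WINDOW
        self.calls = [c for c in self.calls if c[0] >= cutoff]  # expire old audio

    def rewind(self, number: str):
        """Retrieve every stored call for a number, up to 30 days back."""
        return [(t, a) for t, n, a in self.calls if n == number]

buf = RollingBuffer()
now = datetime(2014, 3, 18)
buf.record(now - timedelta(days=40), "+1-555-0100", b"...")  # will expire
buf.record(now - timedelta(days=5), "+1-555-0100", b"...")
buf.record(now, "+1-555-0199", b"...")
print(buf.rewind("+1-555-0100"))  # only the call from 5 days ago remains
```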
On March 21, Le Monde published slides from an internal presentation of the Communications Security Establishment Canada, which attributed a piece of malicious software to French intelligence. The CSEC presentation concluded that the list of malware victims matched French intelligence priorities, and found French cultural references in the malware's code, including the name Babar, a popular French children's character, and the developer name "Titi".[374]

The French telecommunications corporation Orange S.A. shares its call data with the French intelligence agency DGSE, which hands over the intercepted data to GCHQ.[375]

The NSA has spied on the Chinese technology company Huawei.[376][377][378] Huawei is a leading manufacturer of smartphones, tablets, mobile phone infrastructure and WLAN routers, and installs fiber-optic cable. According to Der Spiegel, this "kind of technology ... is decisive in the NSA's battle for data supremacy."[379] The NSA, in an operation named "Shotgiant", was able to access Huawei's email archive and the source code for Huawei's communications products.[379] The US government has had longstanding concerns that Huawei may not be independent of the People's Liberation Army and that the Chinese government might use equipment manufactured by Huawei to conduct cyberespionage or cyberwarfare. The goals of the NSA operation were to assess the relationship between Huawei and the PLA, to learn more about the Chinese government's plans, and to use information from Huawei to spy on Huawei's customers, including Iran, Afghanistan, Pakistan, Kenya, and Cuba. Former Chinese president Hu Jintao, the Chinese Trade Ministry, banks, and telecommunications companies were also targeted by the NSA.[376][379]

The Intercept published a document of an NSA employee discussing how to build a database of IP addresses, webmail, and Facebook accounts associated with system administrators, so that the NSA can gain access to the networks and systems they administer.[380][381]

At the end of March 2014, Der Spiegel and The Intercept published, based on a series of classified files from the archive provided to reporters by NSA whistleblower Edward Snowden, articles related to the espionage efforts of GCHQ and the NSA in Germany.[382][383] The British GCHQ targeted three German internet firms for information about Internet traffic passing through internet exchange points, important customers of the German internet providers, their technology suppliers, and future technical trends in their business sector and company employees.[382][383] On March 7, 2013, the Foreign Intelligence Surveillance Court granted the NSA the authority for blanket surveillance of Germany, its people and institutions, regardless of whether those affected were suspected of having committed an offense, without an individualized court order.[383] In addition, Germany's chancellor Angela Merkel was listed in a surveillance search machine and database named Nymrod, along with 121 other foreign leaders.[382][383] As The Intercept wrote: "The NSA uses the Nymrod system to 'find information relating to targets that would otherwise be tough to track down,' according to internal NSA documents. Nymrod sifts through secret reports based on intercepted communications as well as full transcripts of faxes, phone calls, and communications collected from computer systems. More than 300 'cites' for Merkel are listed as available in intelligence reports and transcripts for NSA operatives to read."[382]
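The "cites" idea lends itself to a simple illustration: scan a corpus of transcripts for a target's name and count the reports that mention it. A toy sketch follows, with invented data and deliberately naive matching; the actual Nymrod system is not public:

```python
# Toy "cites" counter: which intercept reports mention a given target?
transcripts = [
    "Report 0412: call between ministry aides; Merkel briefed on summit.",
    "Report 0413: fax transcript, trade figures, no principals named.",
    "Report 0414: Merkel discussed the EU budget with her chief of staff.",
]

def cites(target: str, reports: list) -> list:
    """Return every report that mentions the target at least once."""
    return [r for r in reports if target.lower() in r.lower()]

hits = cites("Merkel", transcripts)
print(len(hits), "cites")  # the count an analyst could pull up for a target
```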
Toward the end of April, Edward Snowden said that the United States surveillance agencies spy on Americans more than on anyone else in the world, contrary to everything that had been said by the government up until that point.[384]

An article published by Ars Technica shows NSA Tailored Access Operations (TAO) employees intercepting a Cisco router.[385]

The Intercept and WikiLeaks revealed information about which countries were having their communications collected as part of the MYSTIC surveillance program. On May 19, The Intercept reported that the NSA is recording and archiving nearly every cell phone conversation in the Bahamas with a system called SOMALGET, a subprogram of MYSTIC. The mass surveillance has been occurring without the Bahamian government's permission.[386] Aside from the Bahamas, The Intercept reported NSA interception of cell phone metadata in Kenya, the Philippines, Mexico and a fifth country it did not name due to "credible concerns that doing so could lead to increased violence." WikiLeaks released a statement on May 23 claiming that Afghanistan was the unnamed nation.[387]

In a statement responding to the revelations, the NSA said "the implication that NSA's foreign intelligence collection is arbitrary and unconstrained is false."[386]

Through its global surveillance operations, the NSA exploits the flood of images included in emails, text messages, social media, videoconferences and other communications to harvest millions of images. These images are then used by the NSA in sophisticated facial recognition programs to track suspected terrorists and other intelligence targets.[388]

Vodafone revealed that there were secret wires that allowed government agencies direct access to its networks.[389] This access does not require warrants, and the direct-access wire is often equipment in a locked room.[389] In six countries where Vodafone operates, the law requires telecommunication companies to install such access or allows governments to do so.[389] Vodafone did not name these countries, in case some governments retaliated by imprisoning its staff.[389] Shami Chakrabarti of Liberty said: "For governments to access phone calls at the flick of a switch is unprecedented and terrifying. Snowden revealed the internet was already treated as fair game. Bluster that all is well is wearing pretty thin – our analogue laws need a digital overhaul."[389] Vodafone published its first Law Enforcement Disclosure Report on June 6, 2014.[389] Vodafone group privacy officer Stephen Deadman said: "These pipes exist, the direct access model exists. We are making a call to end direct access as a means of government agencies obtaining people's communication data. Without an official warrant, there is no external visibility. If we receive a demand we can push back against the agency. The fact that a government has to issue a piece of paper is an important constraint on how powers are used."[389] Gus Hosein, director of Privacy International, said: "I never thought the telcos would be so complicit. It's a brave step by Vodafone and hopefully the other telcos will become more brave with disclosure, but what we need is for them to be braver about fighting back against the illegal requests and the laws themselves."[389]

Above-top-secret documentation of a covert GCHQ surveillance program named Overseas Processing Centre 1 (OPC-1), codenamed "CIRCUIT", was published by The Register.
Based on documents leaked by Edward Snowden, GCHQ taps into undersea fiber-optic cables via secret spy bases near the Strait of Hormuz and Yemen. BT and Vodafone are implicated.[390]

The Danish newspaper Dagbladet Information and The Intercept revealed on June 19, 2014, the NSA mass surveillance program codenamed RAMPART-A. Under RAMPART-A, "third party" countries tap into fiber-optic cables carrying the majority of the world's electronic communications and secretly allow the NSA to install surveillance equipment on these fiber-optic cables. The foreign partners of the NSA turn massive amounts of data, such as the content of phone calls, faxes, e-mails, internet chats, data from virtual private networks, and calls made using Voice over IP software like Skype, over to the NSA. In return, these partners receive access to the NSA's sophisticated surveillance equipment, so that they too can spy on the mass of data that flows in and out of their territory. Among the partners participating in the NSA mass surveillance program are Denmark and Germany.[391][392][393]

During the week of July 4, a 31-year-old male employee of Germany's intelligence service BND was arrested on suspicion of spying for the United States. The employee is suspected of spying on the German Parliamentary Committee investigating the NSA spying scandal.[394]

Former NSA official and whistleblower William Binney spoke at a Centre for Investigative Journalism conference in London. According to Binney, "at least 80% of all audio calls, not just metadata, are recorded and stored in the US. The NSA lies about what it stores." He also stated that the majority of fiber-optic cables run through the U.S., which "is no accident and allows the US to view all communication coming in."[395]

The Washington Post released a review of a cache provided by Snowden containing roughly 160,000 text messages and e-mails intercepted by the NSA between 2009 and 2012. The newspaper concluded that nine out of ten account holders whose conversations were recorded by the agency "were not the intended surveillance targets but were caught in a net the agency had cast for somebody else." In its analysis, The Post also noted that many of the account holders were Americans.[396]

On July 9, a soldier working within Germany's Federal Ministry of Defence (BMVg) fell under suspicion of spying for the United States.[397] As a result of the July 4 case and this one, the German government expelled the CIA station chief in Germany on July 17.[398]

On July 18, former State Department official John Tye released an editorial in The Washington Post, highlighting concerns over data collection under Executive Order 12333. Tye's concerns are rooted in classified material he had access to through the State Department, though he has not publicly released any classified materials.[399]

The Intercept reported that the NSA is "secretly providing data to nearly two dozen U.S. government agencies with a 'Google-like' search engine" called ICREACH. The database, The Intercept reported, is accessible to domestic law enforcement agencies, including the FBI and the Drug Enforcement Administration, and was built to contain more than 850 billion metadata records about phone calls, emails, cellphone locations and text messages.[400][401]
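A "Google-like" search over metadata is, at its core, an inverted index: every selector points at all the records containing it, so a single query surfaces everything at once. A minimal sketch follows, with invented record fields and toy scale; the actual ICREACH schema is not public:

```python
# Toy inverted index over call metadata, queried by a single selector.
from collections import defaultdict

records = [
    {"caller": "+44-20-5550-011", "callee": "+1-202-5550-173", "when": "2013-06-01T09:12"},
    {"caller": "+1-202-5550-173", "callee": "+92-51-5550-044", "when": "2013-06-02T17:40"},
]

# Every selector (here, a phone number) points at all records containing it.
index = defaultdict(list)
for rec in records:
    for selector in (rec["caller"], rec["callee"]):
        index[selector].append(rec)

def search(selector: str) -> list:
    """One query surfaces every record touching a selector, like a web search."""
    return index.get(selector, [])

for rec in search("+1-202-5550-173"):
    print(rec)  # both calls involving this number come back at once
```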
Based on documents obtained from Snowden, The Intercept reported that the NSA and GCHQ had broken into the internal computer network of Gemalto, no later than 2010, and stolen the encryption keys that are used in SIM cards. As of 2015, the company is the world's largest manufacturer of SIM cards, making about two billion cards a year. With the keys, the intelligence agencies could eavesdrop on cell phones without the knowledge of mobile phone operators or foreign governments.[402]

The New Zealand Herald, in partnership with The Intercept, revealed that the New Zealand government used XKeyscore to spy on candidates for the position of World Trade Organization director general[403] and also on members of the Solomon Islands government.[404]

In January 2015, the DEA revealed that it had been collecting metadata records for all telephone calls made by Americans to 116 countries linked to drug trafficking. The DEA's program was separate from the telephony metadata programs run by the NSA.[405] In April, USA Today reported that the DEA's data collection program began in 1992 and included all telephone calls between the United States and Canada and Mexico. Current and former DEA officials described the program as the precursor of the NSA's similar programs.[406] The DEA said its program was suspended in September 2013, after a review of the NSA's programs, and that it was "ultimately terminated."[405]

Snowden provided journalists at The Intercept with GCHQ documents regarding another secret program, "Karma Police", which calls itself "the world's biggest" data mining operation, formed to create profiles on every visible Internet user's browsing habits. By 2009 it had stored over 1.1 trillion web browsing sessions, and by 2012 was recording 50 billion sessions per day.[407]

In March 2017, WikiLeaks published more than 8,000 documents on the CIA. The confidential documents, codenamed Vault 7, dated from 2013 to 2016, included details on the CIA's hacking capabilities, such as the ability to compromise cars, smart TVs,[411] web browsers (including Google Chrome, Microsoft Edge, Firefox, and Opera),[412][413] and the operating systems of most smartphones (including Apple's iOS and Google's Android), as well as other operating systems such as Microsoft Windows, macOS, and Linux.[414] WikiLeaks did not name the source, but said that the files had "circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive."[411]

The disclosures provided impetus for the creation of social movements against mass surveillance, such as Restore the Fourth, and actions like Stop Watching Us and The Day We Fight Back. On the legal front, the Electronic Frontier Foundation joined a coalition of diverse groups filing suit against the NSA.
Several human rights organizations urged the Obama administration not to prosecute, but protect, "whistleblower Snowden": Amnesty International, Human Rights Watch, Transparency International, and the Index on Censorship, among others.[419][420][421][422] On the economic front, several consumer surveys registered a drop in online shopping and banking activity as a result of the Snowden revelations.[423]

However, it has been argued that the long-term impact among the general population is negligible, as "the general public has still failed to adopt privacy-enhancing tools en masse."[424] A research study that tracked interest in privacy-related webpages following the incident found that the public's interest declined quickly, despite continuous discussion of the events by the media.[425]

Domestically, President Barack Obama claimed that there is "no spying on Americans",[426][427] and White House Press Secretary Jay Carney asserted that the surveillance programs revealed by Snowden had been authorized by Congress.[428] On the international front, U.S. Attorney General Eric Holder stated that "we cannot target even foreign persons overseas without a valid foreign intelligence purpose."[429]

Prime Minister David Cameron warned journalists that "if they don't demonstrate some social responsibility it will be very difficult for government to stand back and not to act."[430] Deputy Prime Minister Nick Clegg emphasized that the media should "absolutely defend the principle of secrecy for the intelligence agencies".[431] Foreign Secretary William Hague claimed that "we take great care to balance individual privacy with our duty to safeguard the public and UK national security."[432] Hague defended the Five Eyes alliance and reiterated that the British–U.S. intelligence relationship must not be endangered because it "saved many lives".[433]

Former Prime Minister Tony Abbott stated that "every Australian governmental agency, every Australian official at home and abroad, operates in accordance with the law".[434] Abbott criticized the Australian Broadcasting Corporation as unpatriotic for its reporting on the documents provided by Snowden, whom Abbott described as a "traitor".[435][436] Foreign Minister Julie Bishop also denounced Snowden as a traitor and accused him of "unprecedented" treachery.[437] Bishop defended the Five Eyes alliance and reiterated that the Australian–U.S.
intelligence relationship must not be endangered because it "saves lives".[438]

Chinese policymakers became increasingly concerned about the risk of cyberattacks following the disclosures, which demonstrated extensive United States intelligence activities in China.[439]: 129 As part of its response, the Communist Party in 2014 formed the Cybersecurity and Information Leading Group.[439]: 129

In July 2013, Chancellor Angela Merkel defended the surveillance practices of the NSA, and described the United States as "our truest ally throughout the decades".[440][441] After the NSA's surveillance of Merkel was revealed, however, the Chancellor compared the NSA with the Stasi.[442] According to The Guardian, Berlin is using the controversy over NSA spying as leverage to enter the exclusive Five Eyes alliance.[443] Interior Minister Hans-Peter Friedrich stated that "the Americans take our data privacy concerns seriously."[444] Testifying before the German Parliament, Friedrich defended the NSA's surveillance, and cited five terrorist plots on German soil that were prevented because of the NSA.[445] However, in April 2014, another German interior minister criticized the United States for failing to provide sufficient assurances to Germany that it had reined in its spying tactics. Thomas de Maizière, a close ally of Merkel, told Der Spiegel: "U.S. intelligence methods may be justified to a large extent by security needs, but the tactics are excessive and over-the-top."[446]

Minister for Foreign Affairs Carl Bildt defended the FRA and described its surveillance practices as a "national necessity".[447] Minister for Defence Karin Enström said that Sweden's intelligence exchange with other countries is "critical for our security" and that "intelligence operations occur within a framework with clear legislation, strict controls and under parliamentary oversight."[448][449]

Interior Minister Ronald Plasterk apologized for incorrectly claiming that the NSA had collected 1.8 million records of metadata in the Netherlands. Plasterk acknowledged that it was in fact Dutch intelligence services that collected the records and transferred them to the NSA.[450][451]

The Danish Prime Minister Helle Thorning-Schmidt praised the American intelligence agencies, claiming they had prevented terrorist attacks in Denmark, and expressed her personal belief that the Danish people "should be grateful" for the Americans' surveillance.[452] She later claimed that the Danish authorities have no basis for assuming that American intelligence agencies have performed illegal spying activities toward Denmark or Danish interests.[453]

In July 2013, the German government announced an extensive review of German intelligence services.[454][455] In August 2013, the U.S. government announced an extensive review of U.S. intelligence services.[456][457] In October 2013, the British government announced an extensive review of British intelligence services.[458] In December 2013, the Canadian government announced an extensive review of Canadian intelligence services.[459]

In January 2014, U.S. President Barack Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light"[19] and critics such as Sean Wilentz claimed that "the NSA has acted far more responsibly than the claims made by the leakers and publicized by the press." In Wilentz's view, "The leakers have gone far beyond justifiably blowing the whistle on abusive programs. In addition to their alarmism about [U.S.]
domestic surveillance, many of the Snowden documents released thus far have had nothing whatsoever to do with domestic surveillance."[20] Edward Lucas, former Moscow bureau chief for The Economist, agreed, asserting that "Snowden's revelations neatly and suspiciously fit the interests of one country: Russia" and citing Masha Gessen's statement that "The Russian propaganda machine has not gotten this much mileage out of a US citizen since Angela Davis's murder trial in 1971."[460]

Bob Cesca objected to The New York Times failing to redact the name of an NSA employee and the specific location where an al Qaeda group was being targeted in a series of slides the paper made publicly available.[461]

Russian journalist Andrei Soldatov argued that Snowden's revelations had had negative consequences for internet freedom in Russia, as Russian authorities increased their own surveillance and regulation on the pretext of protecting the privacy of Russian users. Snowden's name was invoked by Russian legislators who supported measures forcing platforms such as Google, Facebook, Twitter, Gmail, and YouTube to locate their servers on Russian soil or install SORM black boxes on their servers so that Russian authorities could control them.[462] Soldatov also contended that as a result of the disclosures, international support had grown for having national governments take over the powers of the organizations involved in coordinating the Internet's global architectures, which could lead to a Balkanization of the Internet that restricted free access to information.[463] The Montevideo Statement on the Future of Internet Cooperation, issued in October 2013 by ICANN and other organizations, warned against "Internet fragmentation at a national level" and expressed "strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations".[464]

In late 2014, Freedom House said "[s]ome states are using the revelations of widespread surveillance by the U.S. National Security Agency (NSA) as an excuse to augment their own monitoring capabilities, frequently with little or no oversight, and often aimed at the political opposition and human rights activists."[465]
https://en.wikipedia.org/wiki/2013_global_surveillance_disclosures
Analogical modeling (AM) is a formal theory of exemplar-based analogical reasoning, proposed by Royal Skousen, professor of Linguistics and English language at Brigham Young University in Provo, Utah. It is applicable to language modeling and other categorization tasks. Analogical modeling is related to connectionism and nearest neighbor approaches, in that it is data-based rather than abstraction-based; but it is distinguished by its ability to cope with imperfect datasets (such as those caused by simulated short-term memory limits) and to base predictions on all relevant segments of the dataset, whether near or far. In language modeling, AM has successfully predicted empirically valid forms for which no theoretical explanation was known (see the discussion of Finnish morphology in Skousen et al. 2002).

An exemplar-based model consists of a general-purpose modeling engine and a problem-specific dataset. Within the dataset, each exemplar (a case to be reasoned from, or an informative past experience) appears as a feature vector: a row of values for the set of parameters that define the problem. For example, in a spelling-to-sound task, the feature vector might consist of the letters of a word. Each exemplar in the dataset is stored with an outcome, such as a phoneme or phone to be generated. When the model is presented with a novel situation (in the form of an outcome-less feature vector), the engine algorithmically sorts the dataset to find exemplars that helpfully resemble it, and selects one, whose outcome is the model's prediction. The particulars of the algorithm distinguish one exemplar-based modeling system from another.

In AM, we think of the feature values as characterizing a context, and the outcome as a behavior that occurs within that context. Accordingly, the novel situation is known as the given context. Given the known features of the context, the AM engine systematically generates all contexts that include it (all of its supracontexts), and extracts from the dataset the exemplars that belong to each. The engine then discards those supracontexts whose outcomes are inconsistent (this measure of consistency will be discussed further below), leaving an analogical set of supracontexts, and probabilistically selects an exemplar from the analogical set with a bias toward those in large supracontexts. This multilevel search exponentially magnifies the likelihood of a behavior's being predicted as it occurs reliably in settings that specifically resemble the given context. AM performs the same process for each case it is asked to evaluate.

The given context, consisting of $n$ variables, is used as a template to generate $2^n$ supracontexts. Each supracontext is a set of exemplars in which one or more variables have the same values that they do in the given context, and the other variables are ignored. In effect, each is a view of the data, created by filtering for some criteria of similarity to the given context, and the total set of supracontexts exhausts all such views. Alternatively, each supracontext is a theory of the task or a proposed rule whose predictive power needs to be evaluated. It is important to note that the supracontexts are not equal peers of one another; they are arranged by their distance from the given context, forming a hierarchy. If a supracontext specifies all of the variables that another one does and more, it is a subcontext of that other one, and it lies closer to the given context.
(The hierarchy is not strictly branching; each supracontext can itself be a subcontext of several others, and can have several subcontexts.) This hierarchy becomes significant in the next step of the algorithm.

The engine now chooses the analogical set from among the supracontexts. A supracontext may contain exemplars that exhibit only one behavior; it is deterministically homogeneous and is included. It is a view of the data that displays regularity, or a relevant theory that has never yet been disproven. A supracontext may exhibit several behaviors, but contain no exemplars that occur in any more specific supracontext (that is, in any of its subcontexts); in this case it is non-deterministically homogeneous and is included. Here there is no great evidence that a systematic behavior occurs, but also no counterargument. Finally, a supracontext may be heterogeneous, meaning that it exhibits behaviors that are found in a subcontext (closer to the given context), and also behaviors that are not. Where the ambiguous behavior of the non-deterministically homogeneous supracontext was accepted, this is rejected, because the intervening subcontext demonstrates that there is a better theory to be found. The heterogeneous supracontext is therefore excluded. This guarantees that we see an increase in meaningfully consistent behavior in the analogical set as we approach the given context.

With the analogical set chosen, each appearance of an exemplar (for a given exemplar may appear in several of the analogical supracontexts) is given a pointer to every other appearance of an exemplar within its supracontexts. One of these pointers is then selected at random and followed, and the exemplar to which it points provides the outcome. This gives each supracontext an importance proportional to the square of its size, and makes each exemplar likely to be selected in direct proportion to the sum of the sizes of all analogically consistent supracontexts in which it appears. Then, of course, the probability of predicting a particular outcome is proportional to the summed probabilities of all the exemplars that support it. (Skousen 2002, in Skousen et al. 2002, pp. 11–25, and Skousen 2003, both passim)

Given a context with $n$ elements:

This terminology is best understood through an example. In the example used in the second chapter of Skousen (1989), each context consists of three variables with potential values 0–3. The two outcomes for the dataset are e and r, and the exemplars are:

We define a network of pointers like so: the solid lines represent pointers between exemplars with matching outcomes; the dotted lines represent pointers between exemplars with non-matching outcomes. The statistics for this example are as follows:

Behavior can only be predicted for a given context; in this example, let us predict the outcome for the context "3 1 2". To do this, we first find all of the contexts containing the given context; these contexts are called supracontexts. We find the supracontexts by systematically eliminating the variables in the given context; with $m$ variables, there will generally be $2^m$ supracontexts. The following table lists each of the sub- and supracontexts; x̄ means "not x", and - means "anything". These contexts are shown in the Venn diagram below:

The next step is to determine which exemplars belong to which contexts in order to determine which of the contexts are homogeneous.
The table below shows each of the subcontexts, their behavior in terms of the given exemplars, and the number of disagreements within the behavior:

Analyzing the subcontexts in the table above, we see that there is only one subcontext with any disagreements: "3 1 2̄", which in the dataset consists of "3 1 0 e" and "3 1 1 r". There are 2 disagreements in this subcontext, one pointing from each of the exemplars to the other (see the pointer network pictured above). Therefore, only supracontexts containing this subcontext will contain any disagreements. We use a simple rule to identify the homogeneous supracontexts: if the number of disagreements in the supracontext is greater than the number of disagreements in the contained subcontext, we say that it is heterogeneous; otherwise, it is homogeneous. There are three situations that produce a homogeneous supracontext:

The only two heterogeneous supracontexts are "- 1 -" and "- - -". In both of them, it is the combination of the non-deterministic "3 1 2̄" with other subcontexts containing the r outcome which causes the heterogeneity.

There is actually a fourth type of homogeneous supracontext: it contains more than one non-empty subcontext and it is non-deterministic, but the frequency of outcomes in each subcontext is exactly the same. Analogical modeling does not consider this situation, however, for two reasons:

Next we construct the analogical set, which consists of all of the pointers and outcomes from the homogeneous supracontexts. The figure below shows the pointer network with the homogeneous contexts highlighted. The pointers are summarized in the following table:

Four of the pointers in the analogical set are associated with the outcome e, and the other nine are associated with r. In AM, a pointer is randomly selected and the outcome it points to is predicted. With a total of 13 pointers, the probability of the outcome e being predicted is 4/13 or 30.8%, and for outcome r it is 9/13 or 69.2%. We can create a more detailed account by listing the pointers for each of the occurrences in the homogeneous supracontexts; we can then see the analogical effect of each of the instances in the data set.

Analogy has been considered useful in describing language at least since the time of Saussure. Noam Chomsky and others have more recently criticized analogy as too vague to really be useful (Bańko 1991), an appeal to a deus ex machina. Skousen's proposal appears to address that criticism by proposing an explicit mechanism for analogy, which can be tested for psychological validity. Analogical modeling has been employed in experiments ranging from phonology and morphology to orthography and syntax.

Though analogical modeling aims to create a model free from rules seen as contrived by linguists, in its current form it still requires researchers to select which variables to take into consideration. This is necessary because of the so-called "exponential explosion" of processing power requirements of the computer software used to implement analogical modeling. Recent research suggests that quantum computing could provide the solution to such performance bottlenecks (Skousen et al. 2002, see pp. 45–47).
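The supracontext generation, homogeneity test and pointer counting described above translate naturally into code. The following Python sketch is illustrative only, not Skousen's implementation: it uses the simplified disagreement-counting rule from the worked example, and the function and variable names are invented here.

```python
from itertools import product
from collections import Counter, defaultdict

def disagreements(outcomes):
    """Ordered pairs of occurrences with differing outcomes (the
    'disagreement pointers' of the worked example)."""
    n, counts = len(outcomes), Counter(outcomes)
    return n * n - sum(c * c for c in counts.values())

def predict(given, dataset):
    """AM sketch. dataset: list of (context_tuple, outcome).
    Returns a Counter of outcome scores (pointer counts)."""
    n = len(given)
    # Partition exemplars into subcontexts by which positions match `given`.
    subcontexts = defaultdict(list)
    for ctx, out in dataset:
        pattern = tuple(ctx[i] == given[i] for i in range(n))
        subcontexts[pattern].append(out)
    scores = Counter()
    # Each of the 2^n masks picks the positions a supracontext must match.
    for mask in product([False, True], repeat=n):
        # A supracontext is the union of the subcontexts that refine it.
        parts = [outs for pat, outs in subcontexts.items()
                 if all(p or not m for m, p in zip(mask, pat))]
        merged = [out for outs in parts for out in outs]
        # Homogeneity rule from the text: a supracontext may not show more
        # disagreement than its contained subcontexts do on their own.
        if disagreements(merged) > sum(disagreements(outs) for outs in parts):
            continue  # heterogeneous: excluded from the analogical set
        # Each occurrence points to every occurrence in its supracontext, so
        # a supracontext of size k contributes k pointers per occurrence
        # (importance proportional to k squared, as the text notes).
        for out in merged:
            scores[out] += len(merged)
    return scores

# P(outcome) = scores[outcome] / sum(scores.values())
```

Run on the worked example's exemplars with given context (3, 1, 2), this scoring recovers the 13-pointer analogical set above (4 pointers for e, 9 for r), so normalising the scores gives the 4/13 and 9/13 probabilities.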
https://en.wikipedia.org/wiki/Analogical_modeling
This is a list of wiki software programs. They are grouped by use case: standard wiki programs, personal wiki programs, hosted-only wikis, wiki-based content management software, and wiki-based project management software. They are further subdivided by the language of implementation: JavaScript, Java, PHP, Python, Perl, Ruby, and so on.

There are also wiki applications designed for personal use,[3] apps for mobile use,[4] and apps for use from USB flash drives.[5] They often include more features than traditional wikis.
https://en.wikipedia.org/wiki/Personal_wiki
A foreign language writing aid is a computer program or any other instrument that assists a non-native language user (also referred to as a foreign language learner) in writing decently in their target language. Assistive operations can be classified into two categories: on-the-fly prompts and post-writing checks. Assisted aspects of writing include lexical, syntactic (syntactic and semantic roles of a word's frame), lexical-semantic (context/collocation-influenced word choice and user-intention-driven synonym choice) and idiomatic expression transfer, etc. Different types of foreign language writing aids include automated proofreading applications, text corpora, dictionaries, translation aids and orthography aids.

The four major components in the acquisition of a language are listening, speaking, reading and writing.[1] While most people have no difficulties in exercising these skills in their native language, doing so in a second or foreign language is not that easy. In the area of writing, research has found that foreign language learners find it painstaking to compose in the target language, producing less eloquent sentences and encountering difficulties in the revision of their written work. However, these difficulties are not attributed to their linguistic abilities.[2]

Many language learners experience foreign language anxiety, feelings of apprehension and nervousness, when learning a second language.[1] In the case of writing in a foreign language, this anxiety can be alleviated by foreign language writing aids, as they assist non-native language users in independently producing decent written work at their own pace, hence increasing their confidence in themselves and their own learning abilities.[3]

With advancements in technology, aids in foreign language writing are no longer restricted to traditional mediums such as teacher feedback and dictionaries. Known as computer-assisted language learning (CALL), the use of computers in language classrooms has become more common, and one example is the use of word processors to assist learners of a foreign language with the technical aspects of their writing, such as grammar.[4] In comparison with correction feedback from the teacher, the use of word processors has been found to be a better tool for improving the writing skills of students who are learning English as a foreign language (EFL), possibly because students find it more encouraging to learn from their mistakes via a neutral and detached source.[3] Apart from learners' confidence in writing, their motivation and attitudes also improve through the use of computers.[2]

Foreign language learners' awareness of the conventions of writing can be improved through reference to guidelines showing the features and structure of the target genre.[2] At the same time, interaction and feedback help to engage learners and expedite their learning, especially with active participation.[5] In online writing situations, learners are isolated, without face-to-face interaction with others. Therefore, a foreign language writing aid should provide interaction and feedback so as to ease the learning process. This complements communicative language teaching (CLT), a teaching approach that highlights interaction as both the means and the aim of learning a language. In accordance with the simple view of writing, both lower-order and higher-order skills are required.
Lower-order skills involve spelling and transcription, whereas higher-order skills involve ideation, which refers to idea generation and organisation.[6] Proofreading is helpful for non-native language users in minimising errors while writing in a foreign language. Spell checkers and grammar checkers are two applications that aid in the automatic proofreading of written work.[7]

To achieve writing competence in a non-native language, especially in an alphabetic language, spelling proficiency is of utmost importance.[8] Spelling proficiency has been identified as a good indicator of a learner's acquisition and comprehension of alphabetic principles in the target language.[9] Documented data on misspelling patterns indicate that the majority of misspellings fall under the four categories of letter insertion, deletion, transposition and substitution.[10] In languages where the pronunciation of certain sequences of letters may be similar, misspellings may occur when the non-native language learner relies heavily on the sounds of the target language because they are unsure about the accurate spelling of the words.[11] The spell checker application is a type of writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language.[12]

In general, spell checkers operate in one of two modes: interactive spell checking or batch spell checking.[7] In the interactive mode, the spell checker detects and marks misspelled words with a squiggly underline as the words are being typed. In batch mode, spell checking is performed on a batch-by-batch basis when the appropriate command is entered. Spell checkers, such as those used in Microsoft Word, can operate in either mode.

Although spell checkers are commonplace in numerous software products, errors specifically made by learners of a target language may not be sufficiently catered for.[13] This is because generic spell checkers function on the assumption that their users are competent speakers of the target language, whose misspellings are primarily due to accidental typographical errors.[14] The majority of misspellings were found to be attributable to systematic competence errors instead of accidental typographical ones, with up to 48% of these errors failing to be detected or corrected by the generic spell checker used.[15]

In view of the deficiency of generic spell checkers, programs geared towards non-native misspellings have been designed,[14] such as FipsCor and Spengels.
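The four misspelling categories above, insertion, deletion, transposition and substitution, are precisely the edit operations counted by the Damerau-Levenshtein distance, one standard way for a generic spell checker to rank correction candidates. Below is a minimal sketch (the optimal-string-alignment variant); the lexicon and function names are invented for illustration and are not taken from FipsCor or Spengels.

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance counting insertions, deletions, substitutions,
    and transpositions of adjacent characters."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def suggest(word, lexicon, max_distance=2):
    """Rank lexicon words by edit distance to the misspelled input."""
    ranked = sorted(lexicon, key=lambda w: damerau_levenshtein(word, w))
    return [w for w in ranked if damerau_levenshtein(word, w) <= max_distance]

print(suggest("recieve", ["receive", "recipe", "relieve", "deceive"]))
# ['receive', 'relieve', 'recipe', 'deceive'] -- the distance-1 candidates
# ("receive" via one transposition, "relieve" via one substitution) rank first
```

Systems aimed at non-native writers typically go beyond this baseline, since, as noted above, competence errors are often phonological rather than typographical.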
In FipsCor, a combination of methods, such as the alpha-code method, the phonological reinterpretation method and the morphological treatment method, has been adopted in an attempt to create a spell checker tailored to French language learners.[11] On the other hand, Spengels is a tutoring system developed to aid Dutch children and non-native Dutch writers of English in accurate English spelling.[16]

Grammar (syntactical and morphological) competency is another indicator of a non-native speaker's proficiency in writing in the target language. Grammar checkers are a type of computerised application which non-native speakers can make use of to proofread their writing, as such programs endeavor to identify syntactical errors.[17] Grammar and style checking is recognized as one of the seven major applications of Natural Language Processing, and every project in this field aims to build grammar checkers into a writing aid instead of a robust man-machine interface.[17]

Currently, grammar checkers are incapable of inspecting the linguistic or even syntactic correctness of a text as a whole. They are restricted in their usefulness in that they are only able to check a small fraction of all possible syntactic structures. Grammar checkers are unable to detect semantic errors within a correctly structured syntactic order; i.e. grammar checkers do not register an error when the sentence structure is syntactically correct but semantically meaningless.[18]

Although grammar checkers have largely concentrated on ensuring grammatical writing, the majority of them are modelled after native writers, neglecting the needs of non-native language users.[19] Much research has attempted to tailor grammar checkers to the needs of non-native language users. Granska, a Swedish grammar checker, has been extensively worked on by numerous researchers investigating grammar-checking properties for foreign language learners.[19][20] The Universidad Nacional de Educación a Distancia has a computerised grammar checker for native Spanish speakers learning EFL to help them identify and correct grammatical mistakes without feedback from teachers.[21]

Theoretically, the functions of a conventional spell checker can be incorporated entirely into a grammar checker, and this is likely the route that the language processing industry is working towards.[18] In reality, internationally available word processors such as Microsoft Word have difficulties combining spell checkers and grammar checkers due to licensing issues; the various proofing mechanisms for a given language may have been licensed from different providers at different times.[18]

Electronic corpora in the target language provide non-native language users with authentic examples of language use rather than fixed examples, which may not be reflected in daily interactions.[22] The contextualised grammatical knowledge acquired by non-native language users through exposure to authentic texts in corpora allows them to grasp the manner of sentence formation in the target language, enabling effective writing.[23]

Concordances set up through concordancing programs of corpora allow non-native language users to conveniently grasp lexico-grammatical patterns of the target language. Collocational frequencies of words (i.e.
word-pairing frequencies) provide non-native language users with information about accurate grammar structures which can be used when writing in the target language.[22] Collocational information also enables non-native language users to make clearer distinctions between words and expressions commonly regarded as synonyms. In addition, corpus information about semantic prosody, i.e. the appropriate choice of words to be used in positive and negative co-texts, is available as a reference for non-native language users in writing. The corpora can also be used to check the acceptability or syntactic "grammaticality" of their written work.[24]

A survey conducted on English as a Second Language (ESL) students revealed that corpus activities were generally well received and thought to be especially useful for learning word usage patterns and improving writing skills in the foreign language.[23] It was also found that students' writing became more natural after using two online corpora in a 90-minute training session.[25] In recent years, there have also been suggestions to incorporate the applications of corpora into EFL writing courses in China to improve the writing skills of learners.[26]

Dictionaries of the target language are commonly recommended to non-native language learners.[27] They serve as reference tools by offering definitions, phonetic spelling, word classes and sample sentences.[22] It has been found that the use of a dictionary can help learners of a foreign language write better if they know how to use one.[28] Foreign language learners can make use of grammar-related information from the dictionary to select appropriate words, check the correct spelling of a word and look up synonyms to add more variety to their writing.[28] Nonetheless, learners have to be careful when using dictionaries, as the lexical-semantic information contained in dictionaries might not be sufficient with regard to language production in a particular context, and learners may be misled into choosing incorrect words.[29]

Presently, many notable dictionaries are available online, and basic usage is usually free. These online dictionaries allow learners of a foreign language to find references for a word much faster and more conveniently than with a manual version, thus minimising disruption to the flow of writing.[30] Available online dictionaries can be found under the list of online dictionaries. Dictionaries come at different levels of proficiency, such as advanced, intermediate and beginner, and learners can choose according to the level best suited to them. There are many different types of dictionaries available, such as thesauri or bilingual dictionaries, which cater to the specific needs of a learner of a foreign language.
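As an aside on the corpus tools described earlier in this section, the collocational frequencies that concordancers report can be approximated in a few lines. A toy sketch, with an invented four-sentence corpus standing in for a real annotated corpus:

```python
from collections import Counter

def collocations(corpus: str, node: str, window: int = 2) -> Counter:
    """Count words co-occurring with `node` within +/- `window` tokens,
    a crude stand-in for a concordancer's collocation frequencies."""
    tokens = corpus.lower().split()
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for t in tokens[lo:hi]
                          if t != node and t.isalpha())
    return counts

corpus = ("she made a decision . he made an effort . "
          "they made a decision quickly . she took a photo .")
print(collocations(corpus, "made").most_common(3))
# [('a', 2), ('decision', 2), ('she', 1)] -- "make a decision" emerges
```

Even on this tiny corpus the pattern "made a decision" surfaces as the most frequent pairing, which is the kind of lexico-grammatical evidence the text describes learners consulting.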
In recent years, there have also been specialised dictionaries for foreign language learners that employ natural language processing tools to assist in the compilation of dictionary entries by generating feedback on the vocabulary that learners use and automatically providing inflectional and/or derivational forms for referenced items in the explanations.[31]

The word thesaurus means 'treasury' or 'storehouse' in Greek and Latin and is used to refer to several varieties of language resources; it is most commonly known as a book that groups words in synonym clusters and related meanings.[32] Its original sense of 'dictionary or encyclopedia' has been overshadowed by the emergence of the Roget-style thesaurus,[32] and it is considered a writing aid as it helps writers with the selection of words.[33] The differences between a Roget-style thesaurus and a dictionary are the indexing and the information given: the words in a thesaurus are grouped by meaning, usually without definitions, while those in a dictionary are in alphabetical order with definitions.[33] When users are unable to find a word in a dictionary, it is usually due to the constraint of searching alphabetically by common and well-known headwords; the use of a thesaurus eliminates this issue by allowing users to search for a word through another word based on concept.[34]

Foreign language learners can make use of a thesaurus to find near-synonyms of a word to expand their vocabulary and add variety to their writing. Many word processors are equipped with a basic thesaurus function, allowing learners to change a word to another similar word with ease. However, learners must be mindful that even if words are near-synonyms, they might not be suitable replacements depending on the context.[33]

Spelling dictionaries are reference materials that specifically aid users in finding the correct spelling of a word. Unlike common dictionaries, spelling dictionaries do not typically provide definitions or other grammar-related information about the words.
While typical dictionaries can be used to check or search for correct spellings, new and improved spelling dictionaries can assist users in finding the correct spelling of a word even when its first letters are unknown or known only imperfectly.[35] This circumvents the alphabetic-ordering limitations of a classic dictionary.[34] These spelling dictionaries are especially useful to foreign language learners, as the inclusion of concise definitions and suggestions for commonly confused words helps learners to choose the correct spellings of words that sound alike or that they pronounce wrongly.[35]

A personal spelling dictionary, being a collection of a single learner's regularly misspelled words, is tailored to the individual; it can be expanded with new entries that the learner does not know how to spell, or contracted when the learner has mastered the words.[36] Learners also use the personal spelling dictionary more than electronic spell checkers, and additions can easily be made to enhance it as a learning tool, since it can include things like rules for writing and proper nouns, which are not included in electronic spell checkers.[36] Studies also suggest that personal spelling dictionaries are better tools for learners to improve their spelling than trying to memorize unrelated words from lists or books.[37]

Current research has shown that language learners utilise dictionaries predominantly to check meanings, and that bilingual dictionaries are preferred over monolingual dictionaries for these uses.[38] Bilingual dictionaries have proved to be helpful for learners of a new language, although in general they hold less extensive coverage of information than monolingual dictionaries.[30] Nonetheless, good bilingual dictionaries capitalize on the fact that they are useful to learners by integrating helpful information about commonly known errors, false friends and contrastive predicaments from the two languages.[30]

Studies have shown that learners of English benefit from the use of bilingual dictionaries in their production and comprehension of unknown words.[39] When using bilingual dictionaries, learners also tend to read entries in both their native and target languages,[39] and this helps them to map the meanings of the target word in the foreign language onto its counterpart in their native language. It was also found that the use of bilingual dictionaries improves the results of translation tasks by ESL learners, thus showing that language learning can be enhanced with the use of bilingual dictionaries.[40]

The use of bilingual dictionaries in foreign language writing tests remains a subject of debate.
Some studies support the view that the use of a dictionary in a foreign language examination increases the mean score of the test, and hence was one of the factors that influenced the decision to ban dictionaries from several foreign language tests in the UK.[41] More recent studies, however, show that further research into the use of bilingual dictionaries during writing tests has found no significant differences in test scores attributable to the use of a dictionary.[42] Nevertheless, from the perspective of foreign language learners, being able to use a bilingual dictionary during a test is reassuring and increases their confidence.[43]

There are many free translation aids online, also known as machine translation (MT) engines, such as Google Translate and Babel Fish (now defunct), that allow foreign language learners to translate between their native language and the target language quickly and conveniently.[44] Of the three major categories of computerised translation tools (computer-assisted translation (CAT), terminology data banks and machine translation), machine translation is the most ambitious, as it is designed to handle the whole process of translation entirely without the intervention of human assistance.[45]

Studies have shown that translation into the target language can be used to improve the linguistic proficiency of foreign language learners.[46] Machine translation aids help beginner learners of a foreign language to write more and produce better-quality work in the target language; writing directly in the target language without any aid requires more effort on the learners' part, resulting in the difference in quantity and quality.[44]

However, teachers advise learners against the use of machine translation aids, as output from such aids is highly misleading and unreliable, producing the wrong answers most of the time.[47] Over-reliance on the aids also hinders the development of learners' writing skills, and is viewed as an act of plagiarism, since the language used is technically not produced by the student.[47]

The orthography of a language is the usage of a specific script to write a language according to a conventionalised usage.[48] One's ability to read in a language is further enhanced by the concurrent learning of writing.[49] This is because writing helps the language learner recognise and remember the features of the orthography, which is particularly helpful when the orthography has irregular phonetic-to-spelling mapping.[49] This, in turn, helps the language learner to focus on the components which make up the word.[49]

Online orthography aids[50] provide language learners with a step-by-step process for learning how to write characters. These are especially useful for learners of languages with logographic writing systems, such as Chinese or Japanese, in which the ordering of strokes in characters is important. Alternatively, tools like Skritter provide an interactive way of learning via a system similar to writing tablets,[51] albeit on computers, while providing feedback on stroke ordering and progress. Handwriting recognition is supported in certain programs,[52] which help language learners to learn the orthography of the target language.
Practice of orthography is also available in many applications, with tracing systems in place to help learners with stroke order.[53] Apart from online orthography programs, offline orthography aids for language learners of logographic languages are also available. Character cards, which contain lists of frequently used characters of the target language, serve as a portable form of visual writing aid for language learners of logographic languages who may face difficulties in recalling the writing of certain characters.[54]

Studies have shown that tracing logographic characters improves the word recognition abilities of foreign language learners, as well as their ability to map meanings onto the characters.[55] This, however, does not improve their ability to link pronunciation with characters, which suggests that these learners need more than orthography aids to help them master the language in both writing and speech.[56]
https://en.wikipedia.org/wiki/Foreign_language_writing_aid
Code word may refer to:
https://en.wikipedia.org/wiki/Code_word_(disambiguation)
In mathematics, the operator norm measures the "size" of certain linear operators by assigning each a real number called its operator norm. Formally, it is a norm defined on the space of bounded linear operators between two given normed vector spaces. Informally, the operator norm $\|T\|$ of a linear map $T : X \to Y$ is the maximum factor by which it "lengthens" vectors.

Given two normed vector spaces $V$ and $W$ (over the same base field, either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$), a linear map $A : V \to W$ is continuous if and only if there exists a real number $c$ such that[1]
$$\|Av\| \leq c\|v\| \quad \text{for all } v \in V.$$
The norm on the left is the one in $W$ and the norm on the right is the one in $V$. Intuitively, the continuous operator $A$ never increases the length of any vector by more than a factor of $c$. Thus the image of a bounded set under a continuous operator is also bounded. Because of this property, the continuous linear operators are also known as bounded operators.

In order to "measure the size" of $A$, one can take the infimum of the numbers $c$ such that the above inequality holds for all $v \in V$. This number represents the maximum scalar factor by which $A$ "lengthens" vectors. In other words, the "size" of $A$ is measured by how much it "lengthens" vectors in the "biggest" case. So we define the operator norm of $A$ as
$$\|A\|_{\text{op}} = \inf\{c \geq 0 : \|Av\| \leq c\|v\| \text{ for all } v \in V\}.$$
The infimum is attained, as the set of all such $c$ is closed, nonempty, and bounded from below.[2]

It is important to bear in mind that this operator norm depends on the choice of norms for the normed vector spaces $V$ and $W$.

Every real $m$-by-$n$ matrix corresponds to a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$. Each pair of the plethora of (vector) norms applicable to real vector spaces induces an operator norm for all $m$-by-$n$ matrices of real numbers; these induced norms form a subset of matrix norms.
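Because the induced norm depends on the vector norms chosen for the domain and codomain, the same matrix receives different operator norms under different choices. A small numerical illustration using numpy (the induced 1-norm is the maximum absolute column sum, and the induced ∞-norm the maximum absolute row sum):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Induced 1-norm: maximum absolute column sum.
print(np.linalg.norm(A, ord=1))       # 6.0  (|2| + |4|)
# Induced infinity-norm: maximum absolute row sum.
print(np.linalg.norm(A, ord=np.inf))  # 7.0  (|3| + |4|)
# Induced 2-norm (spectral norm): largest singular value.
print(np.linalg.norm(A, ord=2))       # approx. 5.4650
```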
If we specifically choose the Euclidean norm on both $\mathbb{R}^n$ and $\mathbb{R}^m$, then the matrix norm given to a matrix $A$ is the square root of the largest eigenvalue of the matrix $A^*A$ (where $A^*$ denotes the conjugate transpose of $A$).[3] This is equivalent to assigning the largest singular value of $A$.

Passing to a typical infinite-dimensional example, consider the sequence space $\ell^2$, which is an Lp space, defined by
$$\ell^2 = \left\{ (a_n)_{n \geq 1} : a_n \in \mathbb{C}, \; \sum_n |a_n|^2 < \infty \right\}.$$
This can be viewed as an infinite-dimensional analogue of the Euclidean space $\mathbb{C}^n$. Now consider a bounded sequence $s_\bullet = (s_n)_{n=1}^\infty$. The sequence $s_\bullet$ is an element of the space $\ell^\infty$, with a norm given by
$$\|s_\bullet\|_\infty = \sup_n |s_n|.$$
Define an operator $T_s$ by pointwise multiplication:
$$(a_n)_{n=1}^\infty \;\overset{T_s}{\mapsto}\; (s_n \cdot a_n)_{n=1}^\infty.$$
The operator $T_s$ is bounded with operator norm
$$\|T_s\|_{\text{op}} = \|s_\bullet\|_\infty.$$
This discussion extends directly to the case where $\ell^2$ is replaced by a general $L^p$ space with $p > 1$ and $\ell^\infty$ replaced by $L^\infty$.

Let $A : V \to W$ be a linear operator between normed spaces. The first four definitions are always equivalent, and if in addition $V \neq \{0\}$ then they are all equivalent:

If $V = \{0\}$ then the sets in the last two rows will be empty, and consequently their supremums over the set $[-\infty, \infty]$ will equal $-\infty$ instead of the correct value of $0$. If the supremum is taken over the set $[0, \infty]$ instead, then the supremum of the empty set is $0$ and the formulas hold for any $V$.

Importantly, a linear operator $A : V \to W$ is not, in general, guaranteed to achieve its norm
$$\|A\|_{\text{op}} = \sup\{\|Av\| : \|v\| \leq 1, v \in V\}$$
on the closed unit ball $\{v \in V : \|v\| \leq 1\}$, meaning that there might not exist any vector $u \in V$ of norm $\|u\| \leq 1$ such that $\|A\|_{\text{op}} = \|Au\|$ (if such a vector does exist and if $A \neq 0$, then $u$ would necessarily have unit norm $\|u\| = 1$). R.C. James proved James's theorem in 1964, which states that a Banach space $V$ is reflexive if and only if every bounded linear functional $f \in V^*$ achieves its norm on the closed unit ball.[4] It follows, in particular, that every non-reflexive Banach space has some bounded linear functional (a type of bounded linear operator) that does not achieve its norm on the closed unit ball.
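In finite dimensions, the multiplication operator $T_s$ above is simply a diagonal matrix, so its operator norm can be checked directly; the following is only a finite truncation illustrating the $\ell^2$ statement, not a proof of it:

```python
import numpy as np

s = np.array([0.5, -3.0, 2.0, 1.0])  # finite truncation of a bounded sequence
T = np.diag(s)                       # pointwise multiplication as a matrix

print(np.linalg.norm(T, ord=2))      # 3.0
print(np.max(np.abs(s)))             # 3.0 == sup |s_n|
```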
If $A : V \to W$ is bounded then[5]
$$\|A\|_{\text{op}} = \sup\{|w^*(Av)| : \|v\| \leq 1, \|w^*\| \leq 1 \text{ where } v \in V, w^* \in W^*\}$$
and[5]
$$\|A\|_{\text{op}} = \|{}^t A\|_{\text{op}},$$
where ${}^t A : W^* \to V^*$ is the transpose of $A : V \to W$, which is the linear operator defined by $w^* \mapsto w^* \circ A$.

The operator norm is indeed a norm on the space of all bounded operators between $V$ and $W$. This means
$$\|A\|_{\text{op}} \geq 0 \quad \text{and} \quad \|A\|_{\text{op}} = 0 \text{ if and only if } A = 0,$$
$$\|aA\|_{\text{op}} = |a| \, \|A\|_{\text{op}} \text{ for every scalar } a,$$
$$\|A + B\|_{\text{op}} \leq \|A\|_{\text{op}} + \|B\|_{\text{op}}.$$

The following inequality is an immediate consequence of the definition:
$$\|Av\| \leq \|A\|_{\text{op}} \|v\| \quad \text{for every } v \in V.$$

The operator norm is also compatible with the composition, or multiplication, of operators: if $V$, $W$ and $X$ are three normed spaces over the same base field, and $A : V \to W$ and $B : W \to X$ are two bounded operators, then it is a sub-multiplicative norm, that is:
$$\|BA\|_{\text{op}} \leq \|B\|_{\text{op}} \|A\|_{\text{op}}.$$
For bounded operators on $V$, this implies that operator multiplication is jointly continuous.

It follows from the definition that if a sequence of operators converges in operator norm, it converges uniformly on bounded sets.

By choosing different norms for the codomain, used in computing $\|Av\|$, and the domain, used in computing $\|v\|$, we obtain different values for the operator norm. Some common operator norms are easy to calculate, and others are NP-hard. Except for the NP-hard norms, all these norms can be calculated in $N^2$ operations (for an $N \times N$ matrix), with the exception of the $\ell_2$-$\ell_2$ norm (which requires $N^3$ operations for the exact answer, or fewer if you approximate it with the power method or Lanczos iterations).

The norm of the adjoint or transpose can be computed as follows. We have that for any $p, q$,
$$\|A\|_{p \to q} = \|A^*\|_{q' \to p'},$$
where $p', q'$ are Hölder conjugate to $p, q$, that is, $1/p + 1/p' = 1$ and $1/q + 1/q' = 1$.

Suppose $H$ is a real or complex Hilbert space. If $A : H \to H$ is a bounded linear operator, then we have
$$\|A\|_{\text{op}} = \|A^*\|_{\text{op}}$$
and
$$\|A^*A\|_{\text{op}} = \|A\|_{\text{op}}^2,$$
where $A^*$ denotes the adjoint operator of $A$ (which in Euclidean spaces with the standard inner product corresponds to the conjugate transpose of the matrix $A$).
In general, the spectral radius of $A$ is bounded above by the operator norm of $A$:
$$\rho(A) \leq \|A\|_{\text{op}}.$$
To see why equality may not always hold, consider the Jordan canonical form of a matrix in the finite-dimensional case. Because there are non-zero entries on the superdiagonal, equality may be violated. The quasinilpotent operators are one class of such examples. A nonzero quasinilpotent operator $A$ has spectrum $\{0\}$, so $\rho(A) = 0$ while $\|A\|_{\text{op}} > 0$.

However, when a matrix $N$ is normal, its Jordan canonical form is diagonal (up to unitary equivalence); this is the spectral theorem. In that case it is easy to see that
$$\rho(N) = \|N\|_{\text{op}}.$$

This formula can sometimes be used to compute the operator norm of a given bounded operator $A$: define the Hermitian operator $B = A^*A$, determine its spectral radius, and take the square root to obtain the operator norm of $A$.

The space of bounded operators on $H$, with the topology induced by the operator norm, is not separable. For example, consider the Lp space $L^2[0,1]$, which is a Hilbert space. For $0 < t \leq 1$, let $\Omega_t$ be the characteristic function of $[0,t]$, and let $P_t$ be the multiplication operator given by $\Omega_t$, that is,
$$P_t(f) = f \cdot \Omega_t.$$
Then each $P_t$ is a bounded operator with operator norm 1 and
$$\|P_t - P_s\|_{\text{op}} = 1 \quad \text{for all } t \neq s.$$
But $\{P_t : 0 < t \leq 1\}$ is an uncountable set. This implies the space of bounded operators on $L^2([0,1])$ is not separable, in operator norm. One can compare this with the fact that the sequence space $\ell^\infty$ is not separable.

The associative algebra of all bounded operators on a Hilbert space, together with the operator norm and the adjoint operation, yields a C*-algebra.
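The recipe described earlier in this section (form the Hermitian operator $B = A^*A$, find its spectral radius, and take the square root) can be carried out with the power method mentioned above. A sketch, under the usual caveats that power iteration needs a dominant eigenvalue and enough iterations to converge:

```python
import numpy as np

def operator_norm(A, iters=500):
    """Estimate ||A||_op as sqrt(rho(A*A)) via power iteration on B = A*A."""
    B = A.conj().T @ A          # Hermitian, positive semi-definite
    v = np.random.default_rng(0).standard_normal(A.shape[1])
    for _ in range(iters):
        v = B @ v
        v /= np.linalg.norm(v)  # renormalise to avoid overflow
    rho = v @ (B @ v)           # Rayleigh quotient approximates rho(B)
    return np.sqrt(rho)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(operator_norm(A))             # approx. 5.4650
print(np.linalg.norm(A, ord=2))     # same value, computed directly
```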
https://en.wikipedia.org/wiki/Operator_norm
Keystroke logging, often referred to as keylogging or keyboard capturing, is the action of recording (logging) the keys struck on a keyboard,[1][2] typically covertly, so that a person using the keyboard is unaware that their actions are being monitored. Data can then be retrieved by the person operating the logging program. A keystroke recorder or keylogger can be either software or hardware.

While the programs themselves are legal,[3] with many designed to allow employers to oversee the use of their computers, keyloggers are most often used for stealing passwords and other confidential information.[4][5] Keystroke logging can also be utilized to monitor the activities of children in schools or at home and by law enforcement officials to investigate malicious usage.[6] Keylogging can also be used to study keystroke dynamics[7] or human-computer interaction. Numerous keylogging methods exist, ranging from hardware- and software-based approaches to acoustic cryptanalysis.

In the mid-1970s, the Soviet Union developed and deployed a hardware keylogger targeting US Embassy typewriters. Termed the "selectric bug", it transmitted the characters typed on IBM Selectric typewriters via magnetic detection of the mechanisms causing rotation of the print head.[8] An early keylogger was written by Perry Kivolowitz and posted to the Usenet newsgroups net.unix-wizards and net.sources on November 17, 1983.[9] The posting seems to have been a motivating factor in restricting access to /dev/kmem on Unix systems. The user-mode program operated by locating and dumping character lists (clists) as they were assembled in the Unix kernel.

In the 1970s, spies installed keystroke loggers in the US Embassy and Consulate buildings in Moscow.[10][11] They installed the bugs in Selectric II and Selectric III electric typewriters.[12] Soviet embassies used manual typewriters, rather than electric typewriters, for classified information, apparently because they are immune to such bugs.[12] As of 2013, Russian special services still use typewriters.[11][13][14]

A software-based keylogger is a computer program designed to record any input from the keyboard.[15] Keyloggers are used in IT organizations to troubleshoot technical problems with computers and business networks. Families and businesspeople use keyloggers legally to monitor network usage without their users' direct knowledge. Microsoft publicly stated that Windows 10 has a built-in keylogger in its final version "to improve typing and writing services".[16] However, malicious individuals can use keyloggers on public computers to steal passwords or credit card information. Most keyloggers are not stopped by HTTPS encryption because that only protects data in transit between computers; software-based keyloggers run on the affected user's computer, reading keyboard inputs directly as the user types. From a technical perspective, there are several categories of software keylogger.

Since 2006, keystroke logging has been an established research method for the study of writing processes.[22][23] Different programs have been developed to collect online process data of writing activities,[24] including Inputlog, Scriptlog, Translog and GGXLog. Keystroke logging is used legitimately as a suitable research instrument in several writing contexts, including studies on cognitive writing processes. Keystroke logging can be used to research writing, specifically. It can also be integrated into educational domains for second language learning, programming skills, and typing skills.
Software keyloggers may be augmented with features that capture user information without relying on keyboard key presses as the sole input. Hardware-based keyloggers, by contrast, do not depend upon any software being installed, as they exist at a hardware level in a computer system.

Writing simple software applications for keylogging can be trivial, and like any nefarious computer program, such an application can be distributed as a trojan horse or as part of a virus. What is not trivial for an attacker, however, is installing a covert keystroke logger without getting caught and downloading the logged data without being traced. An attacker who manually connects to a host machine to download logged keystrokes risks being traced. A trojan that sends keylogged data to a fixed e-mail address or IP address risks exposing the attacker.

Researchers Adam Young and Moti Yung discussed several methods of sending keystroke logs. They presented a deniable password-snatching attack in which the keystroke logging trojan is installed using a virus or worm. An attacker who is caught with the virus or worm can claim to be a victim. The cryptotrojan asymmetrically encrypts the pilfered login/password pairs using the public key of the trojan author and covertly broadcasts the resulting ciphertext. They mentioned that the ciphertext can be steganographically encoded and posted to a public bulletin board such as Usenet.[45][46]

In 2000, the FBI used FlashCrest iSpy to obtain the PGP passphrase of Nicodemo Scarfo, Jr., son of mob boss Nicodemo Scarfo.[47] Also in 2000, the FBI lured two suspected Russian cybercriminals to the US in an elaborate ruse, and captured their usernames and passwords with a keylogger that was covertly installed on a machine that they used to access their computers in Russia. The FBI then used these credentials to gain access to the suspects' computers in Russia to obtain evidence to prosecute them.[48]

The effectiveness of countermeasures varies because keyloggers use a variety of techniques to capture data, and a countermeasure needs to be effective against the particular data capture technique. In the case of Windows 10 keylogging by Microsoft, changing certain privacy settings may disable it.[49] An on-screen keyboard will be effective against hardware keyloggers; transparency will defeat some, but not all, screen loggers. An anti-spyware application that can only disable hook-based keyloggers will be ineffective against kernel-based keyloggers. Keylogger program authors may be able to update their program's code to adapt to countermeasures that have proven effective against it.

An anti-keylogger is a piece of software specifically designed to detect keyloggers on a computer, typically comparing all files in the computer against a database of keyloggers and looking for similarities which might indicate the presence of a hidden keylogger. As anti-keyloggers have been designed specifically to detect keyloggers, they have the potential to be more effective than conventional antivirus software; some antivirus software does not consider keyloggers to be malware, as under some circumstances a keylogger can be considered a legitimate piece of software.[50]

Rebooting the computer using a Live CD or write-protected Live USB is a possible countermeasure against software keyloggers if the CD is clean of malware and the operating system contained on it is secured and fully patched so that it cannot be infected as soon as it is started.
Booting a different operating system does not impact the use of a hardware or BIOS-based keylogger. Many anti-spyware applications can detect some software-based keyloggers and quarantine, disable, or remove them. However, because many keylogging programs are legitimate pieces of software under some circumstances, anti-spyware often neglects to label keylogging programs as spyware or a virus. These applications can detect software-based keyloggers based on patterns in executable code, heuristics and keylogger behaviors (such as the use of hooks and certain APIs). No software-based anti-spyware application can be 100% effective against all keyloggers.[51] Software-based anti-spyware cannot defeat non-software keyloggers (for example, hardware keyloggers attached to keyboards will always receive keystrokes before any software-based anti-spyware application). The particular technique that the anti-spyware application uses will influence its potential effectiveness against software keyloggers. As a general rule, anti-spyware applications with higher privileges will defeat keyloggers with lower privileges. For example, a hook-based anti-spyware application cannot defeat a kernel-based keylogger (as the keylogger will receive the keystroke messages before the anti-spyware application), but it could potentially defeat hook- and API-based keyloggers. Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with their typed information. Automatic form-filling programs may prevent keylogging by removing the requirement for a user to type personal details and passwords using the keyboard. Form fillers are primarily designed for web browsers to fill in checkout pages and log users into their accounts. Once the user's account and credit card information has been entered into the program, it will be automatically entered into forms without ever using the keyboard or clipboard, thereby reducing the possibility that private data is being recorded. However, someone with physical access to the machine may still be able to install software that can intercept this information elsewhere in the operating system or while in transit on the network. (Transport Layer Security (TLS) reduces the risk that data in transit may be intercepted by network sniffers and proxy tools.) Using one-time passwords may prevent unauthorized access to an account which has had its login details exposed to an attacker via a keylogger, as each password is invalidated as soon as it is used. This solution may be useful for someone using a public computer. However, an attacker who has remote control over such a computer can simply wait for the victim to enter their credentials before performing unauthorized transactions on their behalf while their session is active. Another common way to protect access codes from being stolen by keystroke loggers is by asking users to provide a few randomly selected characters from their authentication code. For example, they might be asked to enter the 2nd, 5th, and 8th characters.
Even if someone is watching the user or using a keystroke logger, they would only get a few characters from the code without knowing their positions.[52] Use of smart cards or other security tokens may improve security against replay attacks in the face of a successful keylogging attack, as accessing protected information would require both the (hardware) security token as well as the appropriate password/passphrase. Knowing the keystrokes, mouse actions, display, clipboard, etc. used on one computer will not subsequently help an attacker gain access to the protected resource. Some security tokens work as a type of hardware-assisted one-time password system, and others implement a cryptographic challenge–response authentication, which can improve security in a manner conceptually similar to one-time passwords. Smartcard readers and their associated keypads for PIN entry may be vulnerable to keystroke logging through a so-called supply chain attack,[53] where an attacker substitutes the card reader/PIN entry hardware for one which records the user's PIN. Most on-screen keyboards (such as the on-screen keyboard that comes with Windows XP) send normal keyboard event messages to the external target program to type text. Software key loggers can log these typed characters sent from one program to another.[54] Keystroke interference software is also available.[55] These programs attempt to trick keyloggers by introducing random keystrokes, although this simply results in the keylogger recording more information than it needs to. An attacker then has the task of extracting the keystrokes of interest; the security of this mechanism, specifically how well it stands up to cryptanalysis, is unclear. Similar to on-screen keyboards, speech-to-text conversion software can also be used against keyloggers, since there are no typing or mouse movements involved. The weakest point of using voice-recognition software may be how the software sends the recognized text to target software after the user's speech has been processed. Many PDAs and, more recently, tablet PCs can already convert pen (also called stylus) movements on their touchscreens to computer-understandable text successfully. Mouse gestures use this principle by using mouse movements instead of a stylus. Mouse gesture programs convert these strokes to user-definable actions, such as typing text. Similarly, graphics tablets and light pens can be used to input these gestures; however, these are becoming less common.[timeframe?] The same potential weakness of speech recognition applies to this technique as well. With the help of macro or text-expansion programs, a seemingly meaningless abbreviation can be expanded to a meaningful text, most of the time context-sensitively; e.g., "en.wikipedia.org" can be expanded when a web browser window has the focus. The biggest weakness of this technique is that these programs send their keystrokes directly to the target program. However, this can be overcome by using the 'alternating' technique described below, i.e. sending mouse clicks to non-responsive areas of the target program, sending meaningless keys, sending another mouse click to the target area (e.g. password field) and switching back-and-forth. Alternating between typing the login credentials and typing characters somewhere else in the focus window[56] can cause a keylogger to record more information than it needs to, but this could be easily filtered out by an attacker.
Similarly, a user can move their cursor using the mouse while typing, causing the logged keystrokes to be in the wrong order, e.g., by typing a password beginning with the last letter and then using the mouse to move the cursor for each subsequent letter. Lastly, someone can also use context menus to remove, cut, copy, and paste parts of the typed text without using the keyboard. An attacker who can capture only parts of a password will have a larger key space to attack if they choose to execute a brute-force attack. Another very similar technique uses the fact that any selected text portion is replaced by the next key typed. For example, if the password is "secret", one could type "s", then some dummy keys "asdf". These dummy characters could then be selected with the mouse, and the next character from the password, "e", typed, which replaces the dummy characters "asdf". These techniques assume incorrectly that keystroke logging software cannot directly monitor the clipboard or the selected text in a form, or take a screenshot every time a keystroke or mouse click occurs. They may, however, be effective against some hardware keyloggers.
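The "randomly selected characters" countermeasure described earlier is straightforward to sketch. The following illustrative Python fragment (the function names and the directly comparable stored secret are assumptions made for the example; a real system would need a storage scheme that supports per-character comparison) shows why a pure keystroke logger observing one session learns only a few characters, and not their positions:

```python
import secrets

def make_challenge(secret_len, k=3):
    # Choose k distinct 1-based positions for the user to reveal.
    return sorted(secrets.SystemRandom().sample(range(1, secret_len + 1), k))

def verify(secret, positions, supplied):
    # Compare only the requested characters, in the requested order.
    expected = "".join(secret[p - 1] for p in positions)
    return secrets.compare_digest(expected, supplied)

secret = "correcthorse"
positions = make_challenge(len(secret))
print("Please enter the characters at positions:", positions)
# A keystroke logger sees only the k typed characters; the positions are
# shown on screen, so it would also need a screen capture to learn them.
print(verify(secret, positions, "".join(secret[p - 1] for p in positions)))
```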
https://en.wikipedia.org/wiki/Keystroke_logging
The Cauchy distribution, named after Augustin-Louis Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution $f(x; x_0, \gamma)$ is the distribution of the x-intercept of a ray issuing from $(x_0, \gamma)$ with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero. The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution, since both its expected value and its variance are undefined (but see § Moments below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist.[1] The Cauchy distribution has no moment generating function. In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane. It is one of the few stable distributions with a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution. Here are the most important constructions. If one stands in front of a line and kicks a ball at a uniformly distributed random angle towards the line, then the distribution of the point where the ball hits the line is a Cauchy distribution. For example, consider a point at $(x_0, \gamma)$ in the x-y plane, and select a line passing through the point, with its direction (angle with the $x$-axis) chosen uniformly (between −180° and 0°) at random. The intersection of the line with the x-axis follows a Cauchy distribution with location $x_0$ and scale $\gamma$. This definition gives a simple way to sample from the standard Cauchy distribution. Let $u$ be a sample from a uniform distribution on $[0,1]$; then we can generate a sample $x$ from the standard Cauchy distribution using $$x = \tan\left(\pi\left(u - \tfrac{1}{2}\right)\right).$$ When $U$ and $V$ are two independent normally distributed random variables with expected value 0 and variance 1, the ratio $U/V$ has the standard Cauchy distribution. More generally, if $(U, V)$ has a rotationally symmetric distribution on the plane, the ratio $U/V$ has the standard Cauchy distribution. The Cauchy distribution is the probability distribution with the following probability density function (PDF):[1][2] $$f(x; x_0, \gamma) = \frac{1}{\pi\gamma\left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]} = \frac{1}{\pi}\left[\frac{\gamma}{(x - x_0)^2 + \gamma^2}\right],$$ where $x_0$ is the location parameter, specifying the location of the peak of the distribution, and $\gamma$ is the scale parameter, which specifies the half-width at half-maximum (HWHM); alternatively, $2\gamma$ is the full width at half maximum (FWHM). $\gamma$ is also equal to half the interquartile range and is sometimes called the probable error.
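Both constructions above translate directly into code. The following sketch (using NumPy, for illustration only) draws standard Cauchy samples by the angle method and by the ratio-of-normals method, then checks that each batch has median near 0 and interquartile range near 2, since $\gamma$ is half the interquartile range:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Construction 1: x = tan(pi * (u - 1/2)) with u uniform on [0, 1].
u = rng.uniform(0.0, 1.0, n)
x_tan = np.tan(np.pi * (u - 0.5))

# Construction 2: ratio of two independent standard normals.
x_ratio = rng.standard_normal(n) / rng.standard_normal(n)

# Both samples should have median ~ 0 and IQR ~ 2 (gamma = 1).
for name, s in [("tan method", x_tan), ("normal ratio", x_ratio)]:
    q1, med, q3 = np.percentile(s, [25, 50, 75])
    print(f"{name}: median={med:.3f}, IQR={q3 - q1:.3f}")
```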
This function is also known as a Lorentzian function,[3] and an example of a nascent delta function; it therefore approaches a Dirac delta function in the limit as $\gamma \to 0$. Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining this Dirac delta function. The maximum value or amplitude of the Cauchy PDF is $\frac{1}{\pi\gamma}$, located at $x = x_0$. It is sometimes convenient to express the PDF in terms of the complex parameter $\psi = x_0 + i\gamma$: $$f(x; \psi) = \frac{1}{\pi}\,\mathrm{Im}\left(\frac{1}{x - \psi}\right) = \frac{1}{\pi}\,\mathrm{Re}\left(\frac{-i}{x - \psi}\right).$$ The special case when $x_0 = 0$ and $\gamma = 1$ is called the standard Cauchy distribution, with the probability density function[4][5] $$f(x; 0, 1) = \frac{1}{\pi\left(1 + x^2\right)}.$$ In physics, a three-parameter Lorentzian function is often used: $$f(x; x_0, \gamma, I) = \frac{I}{1 + \left(\frac{x - x_0}{\gamma}\right)^2} = I\left[\frac{\gamma^2}{(x - x_0)^2 + \gamma^2}\right],$$ where $I$ is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case where $I = \frac{1}{\pi\gamma}$. The Cauchy distribution is the probability distribution with the following cumulative distribution function (CDF): $$F(x; x_0, \gamma) = \frac{1}{\pi}\arctan\left(\frac{x - x_0}{\gamma}\right) + \frac{1}{2},$$ and the quantile function (inverse CDF) of the Cauchy distribution is $$Q(p; x_0, \gamma) = x_0 + \gamma\,\tan\left[\pi\left(p - \tfrac{1}{2}\right)\right].$$ It follows that the first and third quartiles are $(x_0 - \gamma, x_0 + \gamma)$, and hence the interquartile range is $2\gamma$. For the standard distribution, the cumulative distribution function simplifies to the arctangent function $\arctan(x)$: $$F(x; 0, 1) = \frac{1}{\pi}\arctan(x) + \frac{1}{2}.$$ The standard Cauchy distribution is the Student's t-distribution with one degree of freedom, and so it may be constructed by any method that constructs the Student's t-distribution.[6] If $\Sigma$ is a $p \times p$ positive-semidefinite covariance matrix with strictly positive diagonal entries, then for independent and identically distributed $X, Y \sim N(0, \Sigma)$ and any random $p$-vector $w$ independent of $X$ and $Y$ such that $w_1 + \cdots + w_p = 1$ and $w_i \geq 0$, $i = 1, \ldots, p$ (defining a categorical distribution), it holds that[7] $$\sum_{j=1}^{p} w_j \frac{X_j}{Y_j} \sim \mathrm{Cauchy}(0, 1).$$ The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to $x_0$. The Cauchy distribution is an infinitely divisible probability distribution.
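The PDF, CDF, and quantile function above can be transcribed directly; a minimal sketch in plain Python, checking the quartile identity just stated:

```python
import math

def cauchy_pdf(x, x0=0.0, gamma=1.0):
    # f(x; x0, gamma) = (1/pi) * gamma / ((x - x0)^2 + gamma^2)
    return gamma / (math.pi * ((x - x0) ** 2 + gamma ** 2))

def cauchy_cdf(x, x0=0.0, gamma=1.0):
    # F(x; x0, gamma) = arctan((x - x0)/gamma)/pi + 1/2
    return math.atan((x - x0) / gamma) / math.pi + 0.5

def cauchy_quantile(p, x0=0.0, gamma=1.0):
    # Q(p; x0, gamma) = x0 + gamma * tan(pi * (p - 1/2))
    return x0 + gamma * math.tan(math.pi * (p - 0.5))

# Quartiles sit at x0 -/+ gamma, so the interquartile range is 2*gamma.
print(cauchy_quantile(0.25, x0=2.0, gamma=3.0))  # -1.0
print(cauchy_quantile(0.75, x0=2.0, gamma=3.0))  # 5.0
print(cauchy_cdf(cauchy_quantile(0.9)))          # ~0.9
```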
It is also a strictly stable distribution.[8] Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the family of Cauchy-distributed random variables is closed under linear fractional transformations with real coefficients.[9] In this connection, see also McCullagh's parametrization of the Cauchy distributions. If $X_1, X_2, \ldots, X_n$ are an IID sample from the standard Cauchy distribution, then their sample mean $\bar{X} = \frac{1}{n}\sum_i X_i$ is also standard Cauchy distributed. In particular, the average does not converge to the mean, and so the standard Cauchy distribution does not follow the law of large numbers. This can be proved by repeated integration with the PDF, or more conveniently, by using the characteristic function of the standard Cauchy distribution (see below): $$\varphi_X(t) = \operatorname{E}\left[e^{iXt}\right] = e^{-|t|}.$$ With this, we have $\varphi_{\sum_i X_i}(t) = e^{-n|t|}$, and so $\bar{X}$ has a standard Cauchy distribution. More generally, if $X_1, X_2, \ldots, X_n$ are independent and Cauchy distributed with location parameters $x_1, \ldots, x_n$ and scales $\gamma_1, \ldots, \gamma_n$, and $a_1, \ldots, a_n$ are real numbers, then $\sum_i a_i X_i$ is Cauchy distributed with location $\sum_i a_i x_i$ and scale $\sum_i |a_i| \gamma_i$. We see that there is no law of large numbers for any weighted sum of independent Cauchy distributions. This shows that the condition of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case. If $X_1, X_2, \ldots$ are an IID sample with PDF $\rho$ such that $\lim_{c \to \infty} \frac{1}{c} \int_{-c}^{c} x^2 \rho(x)\,dx = \frac{2\gamma}{\pi}$ is finite but nonzero, then $\frac{1}{n}\sum_{i=1}^{n} X_i$ converges in distribution to a Cauchy distribution with scale $\gamma$.[10] Let $X$ denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by $$\varphi_X(t) = \operatorname{E}\left[e^{iXt}\right] = \int_{-\infty}^{\infty} f(x; x_0, \gamma) e^{ixt}\,dx = e^{ix_0 t - \gamma|t|},$$ which is just the Fourier transform of the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform: $$f(x; x_0, \gamma) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \varphi_X(t; x_0, \gamma) e^{-ixt}\,dt.$$ The $n$th moment of a distribution is the $n$th derivative of the characteristic function evaluated at $t = 0$. Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have well-defined moments higher than the zeroth moment.
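The failure of the law of large numbers described above is easy to observe numerically. In the sketch below (NumPy, illustrative), the running mean of a standard Cauchy sample keeps jumping no matter how large the sample gets, while a normal sample's running mean settles down:

```python
import numpy as np

rng = np.random.default_rng(1)

# By the stability argument above, each running mean of standard Cauchy
# samples is itself standard Cauchy, so it never converges.
x = rng.standard_cauchy(1_000_000)
running_mean = np.cumsum(x) / np.arange(1, len(x) + 1)
for n in (10, 1_000, 100_000, 1_000_000):
    print(f"n={n:>9}: running mean = {running_mean[n - 1]: .3f}")

# Contrast with a normal sample, where the law of large numbers applies.
y = rng.standard_normal(1_000_000)
print("normal, n=1e6:", np.mean(y))
```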
The Kullback–Leibler divergence between two Cauchy distributions has the following symmetric closed-form formula:[11] $$\mathrm{KL}\left(p_{x_{0,1},\gamma_1} : p_{x_{0,2},\gamma_2}\right) = \log\frac{\left(\gamma_1 + \gamma_2\right)^2 + \left(x_{0,1} - x_{0,2}\right)^2}{4\gamma_1\gamma_2}.$$ Any f-divergence between two Cauchy distributions is symmetric and can be expressed as a function of the chi-squared divergence.[12] Closed-form expressions for the total variation, Jensen–Shannon divergence, Hellinger distance, etc. are available. The entropy of the Cauchy distribution is given by: $$H(\gamma) = -\int_{-\infty}^{\infty} f(x; x_0, \gamma) \log(f(x; x_0, \gamma))\,dx = \log(4\pi\gamma).$$ The derivative of the quantile function, the quantile density function, for the Cauchy distribution is: $$Q'(p; \gamma) = \gamma\pi\,\sec^2\left[\pi\left(p - \tfrac{1}{2}\right)\right].$$ The differential entropy of a distribution can be defined in terms of its quantile density,[13] specifically: $$H(\gamma) = \int_0^1 \log(Q'(p; \gamma))\,dp = \log(4\pi\gamma).$$ The Cauchy distribution is the maximum entropy probability distribution for a random variate $X$ for which[14] $$\operatorname{E}\left[\log\left(1 + \left(\frac{X - x_0}{\gamma}\right)^2\right)\right] = \log 4.$$ The Cauchy distribution is usually used as an illustrative counterexample in elementary probability courses, as a distribution with no well-defined (or "indefinite") moments. If we take an IID sample $X_1, X_2, \ldots$ from the standard Cauchy distribution, then the sequence of their sample means is $S_n = \frac{1}{n}\sum_{i=1}^{n} X_i$, which also has the standard Cauchy distribution. Consequently, no matter how many terms we take, the sample average does not converge. Similarly, the sample variance $V_n = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - S_n\right)^2$ also does not converge. A typical trajectory of $S_1, S_2, \ldots$ looks like long periods of slow convergence to zero, punctuated by large jumps away from zero, but never getting too far away. A typical trajectory of $V_1, V_2, \ldots$ looks similar, but the jumps accumulate faster than the decay, diverging to infinity. Sample moments of order lower than 1 converge to zero, while sample moments of order higher than 2 diverge to infinity even faster than the sample variance. If a probability distribution has a density function $f(x)$, then the mean, if it exists, is given by $$\int_{-\infty}^{\infty} x f(x)\,dx. \qquad (1)$$ We may evaluate this two-sided improper integral by computing the sum of two one-sided improper integrals. That is, $$\int_{-\infty}^{a} x f(x)\,dx + \int_{a}^{\infty} x f(x)\,dx \qquad (2)$$ for an arbitrary real number $a$. For the integral to exist (even as an infinite value), at least one of the terms in this sum should be finite, or both should be infinite and have the same sign. But in the case of the Cauchy distribution, both of the terms in this sum (2) are infinite and have opposite sign.
Hence (1) is undefined, and thus so is the mean.[15] When the mean of a probability distribution function (PDF) is undefined, no reliable average can be computed over the experimental data points, regardless of the sample's size. Note that the Cauchy principal value of the mean of the Cauchy distribution is $$\lim_{a \to \infty} \int_{-a}^{a} x f(x)\,dx,$$ which is zero. On the other hand, the related integral $$\lim_{a \to \infty} \int_{-2a}^{a} x f(x)\,dx$$ is not zero, as can be seen by computing the integral. This again shows that the mean (1) cannot exist. Various results in probability theory about expected values, such as the strong law of large numbers, fail to hold for the Cauchy distribution.[15] The absolute moments for $p \in (-1, 1)$ are defined. For $X \sim \mathrm{Cauchy}(0, \gamma)$ we have $$\operatorname{E}[|X|^p] = \gamma^p \sec(\pi p / 2).$$ The Cauchy distribution does not have finite moments of any integer order $p \geq 1$. Some of the higher raw moments do exist and have a value of infinity, for example, the raw second moment: $$\operatorname{E}[X^2] \propto \int_{-\infty}^{\infty} \frac{x^2}{1 + x^2}\,dx = \int_{-\infty}^{\infty} 1 - \frac{1}{1 + x^2}\,dx = \int_{-\infty}^{\infty} dx - \int_{-\infty}^{\infty} \frac{1}{1 + x^2}\,dx = \int_{-\infty}^{\infty} dx - \pi = \infty.$$ By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to $\infty - \infty$, since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of the central moments and standardized moments are undefined, since they are all based on the mean. The variance, which is the second central moment, is likewise non-existent (despite the fact that the raw second moment exists with the value infinity). The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do. Consider the truncated distribution defined by restricting the standard Cauchy distribution to the interval $[-10^{100}, 10^{100}]$. Such a truncated distribution has all moments (and the central limit theorem applies for i.i.d. observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution.[16] Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed.[19] For example, if an i.i.d.
sample of size $n$ is taken from a Cauchy distribution, one may calculate the sample mean as: $$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$ Although the sample values $x_i$ will be concentrated about the central value $x_0$, the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of $x_0$ than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken. Therefore, more robust means of estimating the central value $x_0$ and the scaling parameter $\gamma$ are needed. One simple method is to take the median value of the sample as an estimator of $x_0$ and half the sample interquartile range as an estimator of $\gamma$. Other, more precise and robust methods have been developed.[20][21] For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for $x_0$ that is more efficient than using either the sample median or the full sample mean.[22][23] However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.[22][23] Maximum likelihood can also be used to estimate the parameters $x_0$ and $\gamma$. However, this tends to be complicated by the fact that it requires finding the roots of a high-degree polynomial, and there can be multiple roots that represent local maxima.[24] Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples.[25][26] The log-likelihood function for the Cauchy distribution for sample size $n$ is: $$\hat{\ell}(x_1, \dotsc, x_n \mid x_0, \gamma) = -n \log(\gamma\pi) - \sum_{i=1}^{n} \log\left(1 + \left(\frac{x_i - x_0}{\gamma}\right)^2\right).$$ Maximizing the log-likelihood function with respect to $x_0$ and $\gamma$ by taking the first derivative produces the following system of equations: $$\frac{d\ell}{dx_0} = \sum_{i=1}^{n} \frac{2(x_i - x_0)}{\gamma^2 + (x_i - x_0)^2} = 0,$$ $$\frac{d\ell}{d\gamma} = \sum_{i=1}^{n} \frac{2(x_i - x_0)^2}{\gamma\left(\gamma^2 + (x_i - x_0)^2\right)} - \frac{n}{\gamma} = 0.$$ Note that $$\sum_{i=1}^{n} \frac{(x_i - x_0)^2}{\gamma^2 + (x_i - x_0)^2}$$ is a monotone function in $\gamma$ and that the solution $\gamma$ must satisfy $$\min|x_i - x_0| \leq \gamma \leq \max|x_i - x_0|.$$ Solving just for $x_0$ requires solving a polynomial of degree $2n - 1$,[24] and solving just for $\gamma$ requires solving a polynomial of degree $2n$.
Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating $x_0$ using the sample median is only about 81% as asymptotically efficient as estimating $x_0$ by maximum likelihood.[23][27] The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of $x_0$ as the maximum likelihood estimate.[23] When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for $x_0$. The scale can be estimated using the median of absolute values, since for location-0 Cauchy variables $X \sim \mathrm{Cauchy}(0, \gamma)$, the median of $|X|$ equals the scale parameter: $\operatorname{median}(|X|) = \gamma$. The Cauchy distribution is the stable distribution of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter $\gamma$ is given, for $X \sim \operatorname{Stable}(\gamma, 0, 0)$, by: $$\operatorname{E}\left(e^{ixX}\right) = \exp\left(\int_{\mathbb{R}} \left(e^{ixy} - 1\right) \Pi_\gamma(dy)\right),$$ where $$\Pi_\gamma(dy) = \left(c_{1,\gamma} \frac{1}{y^{1+\gamma}} 1_{\{y > 0\}} + c_{2,\gamma} \frac{1}{|y|^{1+\gamma}} 1_{\{y < 0\}}\right) dy,$$ and $c_{1,\gamma}, c_{2,\gamma}$ can be expressed explicitly.[28] In the case $\gamma = 1$ of the Cauchy distribution, one has $c_{1,\gamma} = c_{2,\gamma}$. This last representation is a consequence of the formula $$\pi|x| = \operatorname{PV} \int_{\mathbb{R} \smallsetminus \{0\}} \left(1 - e^{ixy}\right) \frac{dy}{y^2}.$$ A random vector $X = (X_1, \ldots, X_k)^T$ is said to have the multivariate Cauchy distribution if every linear combination of its components $Y = a_1 X_1 + \cdots + a_k X_k$ has a Cauchy distribution. That is, for any constant vector $a \in \mathbb{R}^k$, the random variable $Y = a^T X$ should have a univariate Cauchy distribution.[29] The characteristic function of a multivariate Cauchy distribution is given by: $$\varphi_X(t) = e^{i x_0(t) - \gamma(t)},$$ where $x_0(t)$ and $\gamma(t)$ are real functions with $x_0(t)$ a homogeneous function of degree one and $\gamma(t)$ a positive homogeneous function of degree one.[29] More formally:[29] $$x_0(at) = a x_0(t), \qquad \gamma(at) = |a| \gamma(t),$$ for all $t$.
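A minimal numerical version of the maximum-likelihood fit discussed above, assuming SciPy is available: rather than solving the score equations and their high-degree polynomials directly, this sketch minimizes the negative log-likelihood with a derivative-free method, seeded with the robust median/half-IQR estimates recommended earlier.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
data = 2.0 + 3.0 * rng.standard_cauchy(5_000)  # true x0 = 2, gamma = 3

def neg_log_likelihood(params, x):
    x0, log_gamma = params          # optimize log(gamma) to keep gamma > 0
    gamma = np.exp(log_gamma)
    z = (x - x0) / gamma
    return len(x) * np.log(gamma * np.pi) + np.sum(np.log1p(z ** 2))

# Robust starting point: sample median and half the interquartile range.
q1, med, q3 = np.percentile(data, [25, 50, 75])
start = np.array([med, np.log((q3 - q1) / 2)])

res = minimize(neg_log_likelihood, start, args=(data,), method="Nelder-Mead")
x0_hat, gamma_hat = res.x[0], np.exp(res.x[1])
print(f"x0 ~ {x0_hat:.3f}, gamma ~ {gamma_hat:.3f}")
```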
An example of a bivariate Cauchy distribution can be given by:[30] $$f(x, y; x_0, y_0, \gamma) = \frac{1}{2\pi}\,\frac{\gamma}{\left(\left(x - x_0\right)^2 + \left(y - y_0\right)^2 + \gamma^2\right)^{3/2}}.$$ Note that in this example, even though the covariance between $x$ and $y$ is 0, $x$ and $y$ are not statistically independent.[30] This formula can also be written for a complex variable; the probability density function of the complex Cauchy distribution is then: $$f(z; z_0, \gamma) = \frac{1}{2\pi}\,\frac{\gamma}{\left(\left|z - z_0\right|^2 + \gamma^2\right)^{3/2}}.$$ Just as the standard Cauchy distribution is the Student's t-distribution with one degree of freedom, the multidimensional Cauchy density is the multivariate Student distribution with one degree of freedom. The density of a $k$-dimensional Student distribution with one degree of freedom is: $$f(\mathbf{x}; \boldsymbol{\mu}, \mathbf{\Sigma}, k) = \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma\left(\frac{1}{2}\right) \pi^{\frac{k}{2}} \left|\mathbf{\Sigma}\right|^{\frac{1}{2}} \left[1 + (\mathbf{x} - \boldsymbol{\mu})^{\mathsf{T}} \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right]^{\frac{1+k}{2}}}.$$ The properties of the multidimensional Cauchy distribution are then special cases of the multivariate Student distribution. In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution.[citation needed] A function with the form of the density function of the Cauchy distribution was studied geometrically by Fermat in 1659, and later was known as the witch of Agnesi, after Maria Gaetana Agnesi included it as an example in her 1748 calculus textbook. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853.[36] Poisson noted that if the mean of observations following such a distribution were taken, the standard deviation did not converge to any finite number. As such, Laplace's use of the central limit theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter.
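The identification of the multidimensional Cauchy density with the multivariate Student distribution with one degree of freedom gives a simple sampler, sketched below (NumPy, illustrative): draw a multivariate normal and divide by an independent $|N(0,1)|$ variate, which is distributed as the square root of a chi-squared with one degree of freedom.

```python
import numpy as np

def sample_multivariate_cauchy(mu, sigma, n, rng):
    # Multivariate Student-t with one degree of freedom:
    # x = mu + z / w, with z ~ N(0, Sigma) and w = |N(0,1)| ~ sqrt(chi2_1).
    k = len(mu)
    z = rng.multivariate_normal(np.zeros(k), sigma, size=n)
    w = np.abs(rng.standard_normal(n))
    return mu + z / w[:, None]

rng = np.random.default_rng(3)
mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
samples = sample_multivariate_cauchy(mu, sigma, 100_000, rng)

# Componentwise medians recover the location vector mu.
print(np.median(samples, axis=0))
```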
https://en.wikipedia.org/wiki/Cauchy_distribution
The Password Game is a 2023 puzzle browser game developed by Neal Agarwal, where the player creates a password that follows increasingly unusual and complicated rules. Based on Agarwal's experience with password policies,[1] the game was developed in two months, releasing on June 27, 2023. The game went viral and was recognized in the media for the gameplay's absurdity and commentary on the user experience of generating a password. It has been played over 10 million times. The Password Game is a web-based puzzle video game.[2] The player is tasked with typing a password in an input box.[3] The game has a total of 35 rules that the password must follow and which appear in a specific order.[4] As the player changes the password to comply with the first rule, a second one appears, and so on.[2][5] For each additional rule, the player must continue to satisfy all the previous ones to progress, which can cause conflicts.[5][6] When all 35 rules are fulfilled, the player is able to confirm the final password and then must retype it to complete the game.[4] Although the initial requirements include setting a minimum number of characters or including numbers, uppercase letters, or special characters,[1][7] the rules gradually become more unusual and complex.[3][6] These can involve managing Roman numerals in the string so that they multiply to a required value,[6][8] adding the name of a country that players have to guess from random Google Street View imagery (as a reference to GeoGuessr),[6][9][10] inserting the day's Wordle answer,[8] typing the best move in a generated chess position using algebraic notation,[6][11] inserting the URL of a YouTube video of a randomly generated length,[4][6][11] and adjusting boldface, italics, font types, and text sizes.[4] Other game rules involve emojis in the password. One demands inclusion of the emoji representing the moon phase at that point in time.[12] Because of two other rules, the player is required to insert an egg emoji named Paul, and once it hatches, it is replaced by a chicken emoji. The player then must keep it fed using caterpillar emojis that must be replenished over time.[13][14] If it starves, is overfed, or the Paul emoji is deleted in any way, the game ends. Red text subsequently appears over a black background, referencing the death screen characteristic of the Dark Souls action role-playing game series.[11][13] At some point during the game, a flame emoji will appear, spreading through the password by replacing characters, including the egg, with flames that must be removed.[15] The Password Game was developed by Neal Agarwal, who posts his games on his website, neal.fun.[2][16] Agarwal had conceptualized the idea of the game as a parody of password policies as they got "weirder".[3] According to Agarwal, "the final straw" that made him start to work on the game may have been when he was trying to create an account on a service and was told that his password was too long, mocking the notion of a password being "too secure".[1] Development started in late April 2023 and took two months.[3] Agarwal mentioned that implementing regular expressions ("find" operations in strings) was hard, especially due to features of the game's text editor that show up as the player progresses, like making text bold or italic.[1] Some of the game's password requirements were suggested to him on Twitter.
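The cascading rule mechanic described above, where each newly revealed rule must be satisfied without breaking any earlier one, can be illustrated with a toy checker; the three rules below are simplified stand-ins, not the game's actual list:

```python
# Toy sketch of a cascading rule check: rules are evaluated in order,
# and editing the password to satisfy a later rule can re-break an
# earlier one, so all of them are rechecked on every edit.
RULES = [
    ("at least 5 characters", lambda p: len(p) >= 5),
    ("contains a number", lambda p: any(c.isdigit() for c in p)),
    ("digits sum to 25", lambda p: sum(int(c) for c in p if c.isdigit()) == 25),
]

def first_violation(password):
    for i, (name, rule) in enumerate(RULES, start=1):
        if not rule(password):
            return f"rule {i} failing: {name}"
    return "all rules satisfied"

print(first_violation("horse99"))   # rule 3 failing: digits sum to 25
print(first_violation("horse997"))  # all rules satisfied
```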
Before release, Agarwal was unsure whether winning the game was possible; he attempted it unsuccessfully multiple times.[3] The game was released on his website on June 27, 2023.[3][1] The Password Game went viral online soon after release.[20] After its first day, the tweet announcing the game had been retweeted over 11,000 times, and according to the developer, the game's website received over one million visits. The tweet received multiple comments discussing the rule numbers that people reached in the game.[3] As reported by Engadget, Twitter mentions of Agarwal were "full of people cursing him for creating" the game and people exclaiming that they had beaten it, to the surprise of the developer.[8] As of October 2023, the game had been visited over 10 million times.[21] Many critics have contrasted the conventional simplicity of the game's initial password rules with the absurdity of the following ones.[22] The sixteenth rule of the game, which is about finding the best chess move in a specific position, was considered the most challenging by PCGamesN[11] and made other reviewers give up the game.[3][18][23] While TechRadar and The Indian Express deemed The Password Game to be a good way to kill time,[12][18] PC Gamer called it "the evilest will-breaking browser game to exist".[9] The game was regarded by PCGamesN as possibly "one of the most inventive experiences of the year".[11] Polygon described it as a "comedy set in a user interface" that incorporates many secrets behind its apparent simplicity.[3] Rock Paper Shotgun discussed the gameplay loop of the game, finding that they frequently experienced amusement, followed by effort to fulfill the rule, followed by satisfaction.[2] PCWorld felt it emphasized the usefulness of password managers,[7] while TechRadar found it outdated due to tools like password generators.[5]
https://en.wikipedia.org/wiki/The_Password_Game
Ernest Vincent Wright (1872 – October 7, 1939)[1] was an American writer known for his book Gadsby, a 50,000-word novel which (except for four unintentional instances) does not use the letter E. The biographical details of his life are unclear. A 2002 article in the Village Voice by Ed Park said he might have been English by birth but was more probably American. The article said he might have served in the navy and that he has been incorrectly called a graduate of MIT; it said that he attended a vocational high school attached to MIT in 1888, but there is no record that he graduated. Park said rumors that Wright died within hours of Gadsby being published are untrue.[2] In October 1930, Wright approached the Evening Independent newspaper and proposed that it sponsor a lipogram writing competition, with $250 for the winner. In the letter, he boasted of the quality of Gadsby. The newspaper declined his offer.[3] A 2007 post on the Bookride blog about rare books says Wright spent five and a half months writing Gadsby on a typewriter with the "e" key tied down. According to the unsigned entry at Bookride, a warehouse holding copies of Gadsby burned down shortly after the book was printed, destroying "most copies of the ill-fated novel". The blog post says the book was never reviewed "and only kept alive by the efforts of a few avant-garde French intellos and assorted connoisseurs of the odd, weird and zany". The book's scarcity and oddness have seen copies priced at $4,000 by book dealers.[4] Wright completed a draft of Gadsby in 1936, during a nearly six-month stint at the National Military Home in California. He failed to find a publisher and used a self-publishing press to bring out the book.[4] Wright previously authored three other books: The Wonderful Fairies of the Sun (1896), The Fairies That Run the World and How They Do It (1903), and Thoughts and Reveries of an American Bluejacket (1918). His humorous poem, "When Father Carves the Duck", can be found in some anthologies.[5]
https://en.wikipedia.org/wiki/Ernest_Vincent_Wright
A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation; or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts.[1] By 1943, most human computers were women.[2] The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine".[3] The name has remained, although modern computers are capable of many higher-level functions. Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a][4] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.[5] The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price.[6] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century.[7] Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.[8] The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer[9][10] and gear-wheels was invented by Abi Bakr of Isfahan, Persia, in 1235.[11] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[12] an early fixed-wired knowledge processing machine[13] with a gear train and gear-wheels,[14] c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.[15] In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[16] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials,[17][18][19][20] which were published in 1901 by the Paris Academy of Sciences.[21] Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",[22] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables".[23] While the difference engine was designed to aid in navigational calculations, in 1833 Babbage realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later.
The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[24][25] The machine was about a century ahead of its time. All the parts for his machine had to be made by hand; this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^x(y - z)^2$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic.[26][27][28] In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results,[29][30][31][32] demonstrating the feasibility of an electromechanical analytical engine.[33] During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[34] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson.[16] The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT.[35] By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers.[36][37] By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target.
During World War II, similar devices were developed in other countries.[38] Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer.[39] In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer.[42][43] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[44] Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[45] The Z3 was not itself a universal computer but could be extended to be Turing complete.[46][47] Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich.[48] The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin.[48] The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe.[49] Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[34] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[50] the first "automatic electronic digital computer".[51] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[52] During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women.[53][54] To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[52] He spent eleven months from early February 1943 designing and building the first Colossus.[55] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[56] and attacked its first message on 5 February.[52] Colossus was the world's first electronic digital programmable computer.[34] It used a large number of valves (vacuum tubes).
It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.[57][58]

The ENIAC[59] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster and more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls".[60][61]

It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[62]

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[63] On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[64] Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

Early computing machines had fixed programs. Changing their function required the re-wiring and re-structuring of the machine.[52] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device.
John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[34]

The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[65] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[66] Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer.[67] As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[68] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[69] In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951[70] and ran the world's first routine office computer job.

The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948.[71][72] From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, potentially indefinite, service lives. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications.[73]

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[74] Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer.
That distinction goes to theHarwell CADETof 1955,[75]built by the electronics division of theAtomic Energy Research EstablishmentatHarwell.[75][76] Themetal–oxide–silicon field-effect transistor(MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960[77][78][79][80][81][82]and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses.[73]With itshigh scalability,[83]and much lower power consumption and higher density than bipolar junction transistors,[84]the MOSFET made it possible to buildhigh-density integrated circuits.[85][86]In addition to data processing, it also enabled the practical use of MOS transistors asmemory cellstorage elements, leading to the development of MOSsemiconductor memory, which replaced earliermagnetic-core memoryin computers. The MOSFET led to themicrocomputer revolution,[87]and became the driving force behind thecomputer revolution.[88][89]The MOSFET is the most widely used transistor in computers,[90][91]and is the fundamental building block ofdigital electronics.[92] The next great advance in computing power came with the advent of theintegrated circuit(IC). The idea of the integrated circuit was first conceived by a radar scientist working for theRoyal Radar Establishmentof theMinistry of Defence,Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components inWashington, D.C., on 7 May 1952.[93] The first working ICs were invented byJack KilbyatTexas InstrumentsandRobert NoyceatFairchild Semiconductor.[94]Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[95]In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated".[96][97]However, Kilby's invention was ahybrid integrated circuit(hybrid IC), rather than amonolithic integrated circuit(IC) chip.[98]Kilby's IC had external wire connections, which made it difficult to mass-produce.[99] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[100]Noyce's invention was the first true monolithic IC chip.[101][99]His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made ofsilicon, whereas Kilby's chip was made ofgermanium. Noyce's monolithic IC wasfabricatedusing theplanar process, developed by his colleagueJean Hoerniin early 1959. 
In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide.[102][103][104][105][106][107]

Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors).[108] The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962.[109] General Microelectronics later introduced the first commercial MOS IC in 1964,[110] developed by Robert Norman.[109] Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968.[111] The MOSFET has since become the most critical device component in modern ICs.[108]

The development of the MOS integrated circuit led to the invention of the microprocessor,[112][113] and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[114] designed and realized by Federico Faggin with his silicon-gate MOS IC technology,[112] along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b][116] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.[86]

Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin.[117] They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.

The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s.[118] The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market.[119] These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.[117]

Computers can be classified in a number of different ways, for example by their underlying technology, their size, or their intended use. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk.
While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information."[124] According to this definition, any device that processes information qualifies as a computer.

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware.

A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.

When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Typical input devices include the keyboard, mouse, scanner and microphone. The means through which a computer gives output are known as output devices; typical examples include the display, printer and speakers.

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f]

The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it fetches the instruction at the address held in the program counter, decodes it, executes it, and then advances the program counter to point at the next instruction.

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components.
Since the 1970s, CPUs have typically been constructed on a singleMOS integrated circuitchip called amicroprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic.[125]The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division,trigonometryfunctions such as sine, cosine, etc., andsquare roots. Some can operate only on whole numbers (integers) while others usefloating pointto representreal numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and returnBoolean truth values(true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involveBoolean logic:AND,OR,XOR, andNOT. These can be useful for creating complicatedconditional statementsand processingBoolean logic. Superscalarcomputers may contain multiple ALUs, allowing them to process several instructions simultaneously.[126]Graphics processorsand computers withSIMDandMIMDfeatures often contain ALUs that can perform arithmetic onvectorsandmatrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, eachmemory cellis set up to storebinary numbersin groups of eight bits (called abyte). Each byte is able to represent 256 different numbers (28= 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored intwo's complementnotation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells calledregistersthat can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. 
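To make the preceding ideas concrete, the following is a toy sketch in Python (an illustration only, not any real machine's instruction set) of numbered memory cells, a program counter, an ALU-style add, and a conditional jump that rewrites the program counter to form a loop:

```python
# Toy stored-program machine: instructions and data share one memory.
memory = {0: ("set", 100, 0),      # cell 100 := 0   (running sum)
          1: ("set", 101, 1),      # cell 101 := 1   (counter)
          2: ("add", 100, 101),    # cell 100 := cell 100 + cell 101
          3: ("inc", 101),         # cell 101 := cell 101 + 1
          4: ("jle", 101, 10, 2),  # if cell 101 <= 10, jump back to instruction 2
          5: ("halt",)}

pc = 0                             # the program counter
while True:
    op, *args = memory[pc]
    if op == "set":
        memory[args[0]] = args[1]
    elif op == "add":
        memory[args[0]] += memory[args[1]]
    elif op == "inc":
        memory[args[0]] += 1
    elif op == "jle" and memory[args[0]] <= args[1]:
        pc = args[2]               # a "jump": overwrite the program counter
        continue
    elif op == "halt":
        break
    pc += 1                        # otherwise step to the next instruction

print(memory[100])                 # 55, the sum 1 + 2 + ... + 10
```

Changing the target of the "jle" instruction changes the flow of control, which is exactly what makes the machine programmable rather than fixed-function.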
Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

I/O is the means by which a computer exchanges information with the outside world.[128] Devices that provide input or output to the computer are called peripherals.[129] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[130] One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.[131]

Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".

There are thousands of different programming languages: some intended for general purpose, others useful for only highly specialized applications.

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

This section applies to most common RAM machine–based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, and so on. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language (the listing is a minimal sketch of such a program; the register choices and labels are illustrative rather than canonical):
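```mips
      addi $8, $0, 0        # sum := 0   ($8 holds the running sum)
      addi $9, $0, 1        # i := 1     ($9 holds the current number)
loop: slti $10, $9, 1001    # $10 := 1 if i <= 1000, else 0
      beq  $10, $0, finish  # if i > 1000, leave the loop
      add  $8, $8, $9       # sum := sum + i
      addi $9, $9, 1        # i := i + 1
      j    loop             # repeat
finish:
      add  $2, $8, $0       # copy the result (500500) into $2
```

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture.[133][134] In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.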
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i]it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – amnemonicsuch as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer'sassembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Programming languages provide various ways of specifying programs for computers to run. Unlikenatural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated intomachine codeby acompileror anassemblerbefore being run, or translated directly at run time by aninterpreter. Sometimes programs are executed by a hybrid method of the two techniques. Machine languages and the assembly languages that represent them (collectively termedlow-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, anARM architectureCPU (such as may be found in asmartphoneor ahand-held videogame) cannot understand the machine language of anx86CPU that might be in aPC.[j]Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstracthigh-level programming languagesthat are able to express the needs of theprogrammermore conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called acompiler.[k]High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and variousvideo game consoles. 
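As a sketch of the idea (illustrative only; real assemblers also handle labels, addressing modes and symbol tables), translating mnemonics into numeric machine code can be as simple as a table lookup. The opcode values here are invented for the example:

```python
# Toy assembler: turn mnemonic instructions into the list of numbers
# that a machine could store in memory and execute.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "SUB": 0x03, "JUMP": 0x04, "HALT": 0xFF}

def assemble(lines):
    machine_code = []
    for line in lines:
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])          # the operation code
        machine_code.extend(int(o) for o in operands)   # its numeric operands
    return machine_code

program = ["LOAD 10", "ADD 20", "JUMP 0", "HALT"]
print(assemble(program))   # [1, 10, 2, 20, 4, 0, 255]
```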
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable.[135] As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered.[136] Large programs involving thousands of lines of code and more require formal software methodologies.[137] The task of developing large software systems presents a significant intellectual challenge.[138] Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult;[139] the academic and professional discipline of software engineering concentrates specifically on this challenge.[140]

Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash.[141] Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[142]

Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[143] In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[144] The technologies that made the Arpanet possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet.

The emergence of networking involved a redefinition of the nature and boundaries of computers. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s, computer networking became almost ubiquitous, due to the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL. The number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information.
"Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments. There is active research to make unconventional computers out of many promising new types of technology, such asoptical computers,DNA computers,neural computers, andquantum computers. Most computers are universal, and are able to calculate anycomputable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (byquantum factoring) very quickly. There are many types ofcomputer architectures: Of all theseabstract machines, a quantum computer holds the most promise for revolutionizing computing.[145]Logic gatesare a common abstraction which can apply to most of the abovedigitaloranalogparadigms. The ability to store and execute lists of instructions calledprogramsmakes computers extremely versatile, distinguishing them fromcalculators. TheChurch–Turing thesisis a mathematical statement of this versatility: any computer with aminimum capability (being Turing-complete)is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook,supercomputer,cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century,artificial intelligencesystems were predominantlysymbolic: they executed code that was explicitly programmed by software developers.[146]Machine learningmodels, however, have a set parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular ofneural networks) has rapidly improved with progress in hardware forparallel computing, mainlygraphics processing units(GPUs).[147]Somelarge language modelsare able to control computers or robots.[148][149]AI progress may lead to the creation ofartificial general intelligence(AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.[150] As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
https://en.wikipedia.org/wiki/Computer
In telecommunications and computer engineering, the queuing delay is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address.[1]

This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission), the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet, so averages and statistics are usually generated when measuring and evaluating queuing delay.[2]

As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ − λ), where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced.[3] This formula can be used when no packets are dropped from the queue.

The maximum queuing delay is proportional to buffer size. The longer the line of packets waiting to be transmitted, the longer the average waiting time is. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router that receives packets at too high a rate may experience a full queue. In this case, the router has no other option than to simply discard excess packets.

When the transmission protocol uses the dropped-packets symptom of filled buffers to regulate its transmit rate, as the Internet's TCP does, bandwidth is fairly shared at near theoretical capacity with minimal network congestion delays. Absent this feedback mechanism, the delays become both unpredictable and rise sharply, a symptom also seen as freeways approach capacity (metered onramps are the most effective solution there, just as TCP's self-regulation is the most effective solution when the traffic is packets instead of cars). This result is both hard to model mathematically and quite counterintuitive to people who lack experience with mathematics or real networks. Failing to drop packets, choosing instead to buffer an ever-increasing number of them, produces bufferbloat.

In Kendall's notation, the M/M/1/K queuing model, where K is the size of the buffer, may be used to analyze the queuing delay in a specific system. Kendall's notation should be used to calculate the queuing delay when packets are dropped from the queue.
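For a concrete sense of the classic delay curve, the following minimal Python sketch (illustrative only; the rates chosen are arbitrary) evaluates the 1/(μ − λ) formula quoted above for a facility that can sustain μ = 1000 packets per second:

```python
def average_delay(mu: float, lam: float) -> float:
    """Average delay in seconds when no packets are dropped.

    mu  -- service rate, packets per second the facility can sustain
    lam -- average arrival rate, packets per second
    """
    if lam >= mu:
        raise ValueError("formula is valid only while arrivals stay below capacity")
    return 1.0 / (mu - lam)

mu = 1000.0
for lam in (100.0, 500.0, 900.0, 990.0, 999.0):
    # Delay rises sharply as the arrival rate approaches the service rate.
    print(f"lambda = {lam:6.1f} pkt/s -> average delay = {average_delay(mu, lam) * 1e3:8.3f} ms")
```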
The M/M/1/K queuing model is the most basic and important queuing model for network analysis.[4] This article incorporatespublic domain materialfromFederal Standard 1037C.General Services Administration. Archived fromthe originalon 2022-01-22.(in support ofMIL-STD-188).
https://en.wikipedia.org/wiki/Queueing_delay
In computing, chaining is a technique used in computer architecture in which scalar and vector registers generate interim results that can be used immediately, avoiding the additional memory references that would otherwise reduce computational speed.[1] The chaining technique was first used by Seymour Cray in the 80 MHz Cray-1 supercomputer in 1976.[2]
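As a rough illustration of the idea (a sketch of the concept only; it does not model the Cray hardware), the difference is between materializing a full intermediate vector in memory and forwarding each multiply result straight into a dependent add:

```python
# Unchained: the multiply writes a full intermediate vector, which the
# add must then read back, costing extra vector-length memory traffic.
def multiply_add_unchained(a, b, c):
    tmp = [x * y for x, y in zip(a, b)]        # intermediate vector stored
    return [t + z for t, z in zip(tmp, c)]

# Chained: each multiply result is forwarded directly into the adder,
# element by element, with no intermediate vector ever stored.
def multiply_add_chained(a, b, c):
    return [x * y + z for x, y, z in zip(a, b, c)]

a, b, c = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]
assert multiply_add_unchained(a, b, c) == multiply_add_chained(a, b, c) == [11.0, 18.0, 27.0]
```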
https://en.wikipedia.org/wiki/Chaining_(vector_processing)
Innumerical analysis,Richardson extrapolationis asequence accelerationmethod used to improve therate of convergenceof asequenceof estimates of some valueA∗=limh→0A(h){\displaystyle A^{\ast }=\lim _{h\to 0}A(h)}. In essence, given the value ofA(h){\displaystyle A(h)}for several values ofh{\displaystyle h}, we can estimateA∗{\displaystyle A^{\ast }}by extrapolating the estimates toh=0{\displaystyle h=0}. It is named afterLewis Fry Richardson, who introduced the technique in the early 20th century,[1][2]though the idea was already known toChristiaan Huygensinhis calculationofπ{\displaystyle \pi }.[3]In the words ofBirkhoffandRota, "its usefulness for practical computations can hardly be overestimated."[4] Practical applications of Richardson extrapolation includeRomberg integration, which applies Richardson extrapolation to thetrapezoid rule, and theBulirsch–Stoer algorithmfor solving ordinary differential equations. LetA0(h){\displaystyle A_{0}(h)}be an approximation ofA∗{\displaystyle A^{*}}(exact value) that depends on a step sizeh(where0<h<1{\textstyle 0<h<1}) with anerrorformula of the formA∗=A0(h)+a0hk0+a1hk1+a2hk2+⋯{\displaystyle A^{*}=A_{0}(h)+a_{0}h^{k_{0}}+a_{1}h^{k_{1}}+a_{2}h^{k_{2}}+\cdots }where theai{\displaystyle a_{i}}are unknown constants and theki{\displaystyle k_{i}}are known constants such thathki>hki+1{\displaystyle h^{k_{i}}>h^{k_{i+1}}}. Furthermore,O(hki){\displaystyle O(h^{k_{i}})}represents thetruncation errorof theAi(h){\displaystyle A_{i}(h)}approximation such thatA∗=Ai(h)+O(hki).{\displaystyle A^{*}=A_{i}(h)+O(h^{k_{i}}).}Similarly, inA∗=Ai(h)+O(hki),{\displaystyle A^{*}=A_{i}(h)+O(h^{k_{i}}),}the approximationAi(h){\displaystyle A_{i}(h)}is said to be anO(hki){\displaystyle O(h^{k_{i}})}approximation. Note that by simplifying withBig O notation, the following formulae are equivalent:A∗=A0(h)+a0hk0+a1hk1+a2hk2+⋯A∗=A0(h)+a0hk0+O(hk1)A∗=A0(h)+O(hk0){\displaystyle {\begin{aligned}A^{*}&=A_{0}(h)+a_{0}h^{k_{0}}+a_{1}h^{k_{1}}+a_{2}h^{k_{2}}+\cdots \\A^{*}&=A_{0}(h)+a_{0}h^{k_{0}}+O(h^{k_{1}})\\A^{*}&=A_{0}(h)+O(h^{k_{0}})\end{aligned}}} Richardson extrapolation is a process that finds a better approximation ofA∗{\displaystyle A^{*}}by changing the error formula fromA∗=A0(h)+O(hk0){\displaystyle A^{*}=A_{0}(h)+O(h^{k_{0}})}toA∗=A1(h)+O(hk1).{\displaystyle A^{*}=A_{1}(h)+O(h^{k_{1}}).}Therefore, by replacingA0(h){\displaystyle A_{0}(h)}withA1(h){\displaystyle A_{1}(h)}thetruncation errorhas reduced fromO(hk0){\displaystyle O(h^{k_{0}})}toO(hk1){\displaystyle O(h^{k_{1}})}for the same step sizeh{\displaystyle h}. The general pattern occurs in whichAi(h){\displaystyle A_{i}(h)}is a more accurate estimate thanAj(h){\displaystyle A_{j}(h)}wheni>j{\displaystyle i>j}. By this process, we have achieved a better approximation ofA∗{\displaystyle A^{*}}by subtracting the largest term in the error which wasO(hk0){\displaystyle O(h^{k_{0}})}. This process can be repeated to remove more error terms to get even better approximations. 
Using the step sizesh{\displaystyle h}andh/t{\displaystyle h/t}for some constantt{\displaystyle t}, the two formulas forA∗{\displaystyle A^{*}}are: To improve our approximation fromO(hk0){\displaystyle O(h^{k_{0}})}toO(hk1){\displaystyle O(h^{k_{1}})}by removing the first error term, we multiplyequation 2bytk0{\displaystyle t^{k_{0}}}and subtractequation 1to give us(tk0−1)A∗=[tk0A0(ht)−A0(h)]+(tk0a1(ht)k1−a1hk1)+(tk0a2(ht)k2−a2hk2)+O(hk3).{\displaystyle (t^{k_{0}}-1)A^{*}={\bigg [}t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h){\bigg ]}+{\bigg (}t^{k_{0}}a_{1}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{1}}-a_{1}h^{k_{1}}{\bigg )}+{\bigg (}t^{k_{0}}a_{2}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{2}}-a_{2}h^{k_{2}}{\bigg )}+O(h^{k_{3}}).}This multiplication and subtraction was performed because[tk0A0(ht)−A0(h)]{\textstyle {\big [}t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h){\big ]}}is anO(hk1){\displaystyle O(h^{k_{1}})}approximation of(tk0−1)A∗{\displaystyle (t^{k_{0}}-1)A^{*}}. We can solve our current formula forA∗{\displaystyle A^{*}}to giveA∗=[tk0A0(ht)−A0(h)]tk0−1+(tk0a1(ht)k1−a1hk1)tk0−1+(tk0a2(ht)k2−a2hk2)tk0−1+O(hk3){\displaystyle A^{*}={\frac {{\bigg [}t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h){\bigg ]}}{t^{k_{0}}-1}}+{\frac {{\bigg (}t^{k_{0}}a_{1}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{1}}-a_{1}h^{k_{1}}{\bigg )}}{t^{k_{0}}-1}}+{\frac {{\bigg (}t^{k_{0}}a_{2}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{2}}-a_{2}h^{k_{2}}{\bigg )}}{t^{k_{0}}-1}}+O(h^{k_{3}})}which can be written asA∗=A1(h)+O(hk1){\displaystyle A^{*}=A_{1}(h)+O(h^{k_{1}})}by settingA1(h)=tk0A0(ht)−A0(h)tk0−1.{\displaystyle A_{1}(h)={\frac {t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h)}{t^{k_{0}}-1}}.} A generalrecurrence relationcan be defined for the approximations byAi+1(h)=tkiAi(ht)−Ai(h)tki−1{\displaystyle A_{i+1}(h)={\frac {t^{k_{i}}A_{i}\left({\frac {h}{t}}\right)-A_{i}(h)}{t^{k_{i}}-1}}}whereki+1{\displaystyle k_{i+1}}satisfiesA∗=Ai+1(h)+O(hki+1).{\displaystyle A^{*}=A_{i+1}(h)+O(h^{k_{i+1}}).} The Richardson extrapolation can be considered as a linearsequence transformation. Additionally, the general formula can be used to estimatek0{\displaystyle k_{0}}(leading order step size behavior ofTruncation error) when neither its value norA∗{\displaystyle A^{*}}is knowna priori. Such a technique can be useful for quantifying an unknownrate of convergence. Given approximations ofA∗{\displaystyle A^{*}}from three distinct step sizesh{\displaystyle h},h/t{\displaystyle h/t}, andh/s{\displaystyle h/s}, the exact relationshipA∗=tk0Ai(ht)−Ai(h)tk0−1+O(hk1)=sk0Ai(hs)−Ai(h)sk0−1+O(hk1){\displaystyle A^{*}={\frac {t^{k_{0}}A_{i}\left({\frac {h}{t}}\right)-A_{i}(h)}{t^{k_{0}}-1}}+O(h^{k_{1}})={\frac {s^{k_{0}}A_{i}\left({\frac {h}{s}}\right)-A_{i}(h)}{s^{k_{0}}-1}}+O(h^{k_{1}})}yields an approximate relationship (please note that the notation here may cause a bit of confusion, the two O appearing in the equation above only indicates the leading order step size behavior but their explicit forms are different and hence cancelling out of the twoOterms is only approximately valid)Ai(ht)+Ai(ht)−Ai(h)tk0−1≈Ai(hs)+Ai(hs)−Ai(h)sk0−1{\displaystyle A_{i}\left({\frac {h}{t}}\right)+{\frac {A_{i}\left({\frac {h}{t}}\right)-A_{i}(h)}{t^{k_{0}}-1}}\approx A_{i}\left({\frac {h}{s}}\right)+{\frac {A_{i}\left({\frac {h}{s}}\right)-A_{i}(h)}{s^{k_{0}}-1}}}which can be solved numerically to estimatek0{\displaystyle k_{0}}for some arbitrary valid choices ofh{\displaystyle h},s{\displaystyle s}, andt{\displaystyle t}. 
As t ≠ 1, if t > 0 and s is chosen so that s = t², this approximate relation reduces to a quadratic equation in t^{k₀}, which is readily solved for k₀ in terms of h and t.

Suppose that we wish to approximate A*, and we have a method A(h) that depends on a small parameter h in such a way that A(h) = A* + Ch^n + O(h^{n+1}). Let us define a new function R(h, t) := (t^n A(h/t) − A(h)) / (t^n − 1), where h and h/t are two distinct step sizes. Then

R(h, t) = (t^n (A* + C(h/t)^n + O(h^{n+1})) − (A* + Ch^n + O(h^{n+1}))) / (t^n − 1) = A* + O(h^{n+1}).

R(h, t) is called the Richardson extrapolation of A(h), and has a higher-order error estimate O(h^{n+1}) compared to A(h). Very often, it is much easier to obtain a given precision by using R(h) rather than A(h′) with a much smaller h′, since A(h′) can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see the examples below).

The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE y′(t) = −y², y(0) = 1, with the trapezoidal method. In this example we halve the step size h each iteration, and so in the discussion above we would have t = 2. The error of the trapezoidal method can be expressed in terms of odd powers of the step size, so that the error over multiple steps can be expressed in even powers; this leads us to raise t to the second power and to take powers of 4 = 2² = t² in the pseudocode. We want to find the value of y(5), which has the exact value 1/(5 + 1) = 1/6 = 0.1666..., since the exact solution of the ODE is y(t) = 1/(1 + t).

The pseudocode assumes that a function called Trapezoidal(f, tStart, tEnd, h, y0) exists which attempts to compute y(tEnd) by performing the trapezoidal method on the function f, with starting point y0 at tStart and step size h. Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance. A sketch of such a loop, with the Trapezoidal helper as assumed above and an illustrative tolerance, might look like this:
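```matlab
tStart = 0;                       % start of the integration interval
tEnd = 5;                         % we want y(5)
f = @(t, y) -y^2;                 % right-hand side of y' = -y^2
y0 = 1;                           % initial condition y(0) = 1
tolerance = 1e-11;                % stop when successive estimates agree
maxRows = 20;                     % safety bound on the number of refinements

h = tEnd - tStart;                % deliberately large initial step size
A = zeros(maxRows, maxRows);      % Richardson tableau
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0);

for i = 2 : maxRows
    h = h / 2;                    % halve the step size (t = 2)
    A(i, 1) = Trapezoidal(f, tStart, tEnd, h, y0);
    for j = 2 : i
        % Trapezoidal error goes in even powers of h, so each
        % extrapolation column uses powers of 4 = t^2.
        A(i, j) = (4^(j-1) * A(i, j-1) - A(i-1, j-1)) / (4^(j-1) - 1);
    end
    if abs(A(i, i) - A(i-1, i-1)) < tolerance
        break                     % converged to the desired tolerance
    end
end

A(i, i)                           % best estimate of y(5), close to 1/6
```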
https://en.wikipedia.org/wiki/Richardson_extrapolation
WebMis an audiovisual media file format.[5]It is primarily intended to offer aroyalty-freealternative to use in theHTML videoand theHTML audioelements. It has a sister project,WebP, for images. The development of the format is sponsored byGoogle, and the corresponding software is distributed under aBSD license. The WebMcontaineris based on aprofileofMatroska.[3][6][7]WebM initially supportedVP8video andVorbisaudio streams. In 2013, it was updated to accommodateVP9video andOpusaudio.[8]It also supports theAV1codec.[9] Native WebM support byMozilla Firefox,[10][11]Opera,[12][13]andGoogle Chrome[14]was announced at the 2010Google I/Oconference.Internet Explorer 9requires third-party WebM software.[15]In 2021,ApplereleasedSafari14.1 for macOS, which added native WebM support to the browser.[16]As of 2019[update], QuickTime does not natively support WebM,[17][18]but does with a suitable third-party plug-in.[19]In 2011, the Google WebM Project Team released plugins for Internet Explorer and Safari to allow playback of WebM files through the standard HTML5<video>tag.[20]As of 9 June 2012[update], Internet Explorer 9 and later supported the plugin for Windows Vista and later.[21] VLC media player,[22]MPlayer,K-Multimedia PlayerandJRiver Media Centerhave native support for playing WebM files.[23]FFmpegcan encode and decode VP8 videos when built with support forlibvpx, the VP8/VP9 codec library of the WebM project, as well asmux/demuxWebM-compliant files.[24]On July 23, 2010 Fiona Glaser, Ronald Bultje, and David Conrad of the FFmpeg team announced the ffvp8 decoder. Their testing found that ffvp8 was faster than Google's own libvpx decoder.[25][26]MKVToolNix, the popularMatroskacreation tools, implemented support for multiplexing/demultiplexing WebM-compliant files out of the box.[27]Haali Media Splitter also announced support for muxing/demuxing of WebM.[27]Since version 1.4.9, theLiVESvideo editor has support for realtime decoding and for encoding to WebM format using ffmpeg libraries. 
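As an aside on the FFmpeg support mentioned above, a typical command line for producing a WebM file might look like the following (a sketch assuming an FFmpeg build with libvpx and libopus enabled; the file names and bitrate are placeholders):

```
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 1M -c:a libopus output.webm
```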
MPC-HC since build SVN 2071 supports WebM playback with an internal VP8 decoder based on FFmpeg's code.[25][28] Full decoding support for WebM has been available in MPC-HC since version 1.4.2499.0.[29]

Android has been WebM-enabled since version 2.3 Gingerbread,[30] which was first made available via the Nexus S smartphone, and WebM has been streamable since Android 4.0 Ice Cream Sandwich.[31] The Microsoft Edge browser has supported WebM since April 2016.[32] On July 30, 2019, Blender 2.80 was released with WebM support.[33] iOS did not originally play WebM natively,[34] but support for WebM was added in Safari 15 as part of iOS 15.[35] The Sony PlayStation 5 supports capturing 1080p and 2160p footage in WebM format.[36] ChromeOS screen recordings are saved as WebM files.[37]

The WebM Project licenses VP8 hardware accelerators (RTL IP) to semiconductor companies for 1080p encoding and decoding at zero cost.[38] AMD, ARM and Broadcom have announced support for hardware acceleration of the WebM format.[39][40] Intel is also considering hardware-based acceleration for WebM in its Atom-based TV chips if the format gains popularity.[41] Qualcomm and Texas Instruments have announced support,[42][43] with native support coming to the TI OMAP processor.[44] Chips&Media have announced a fully hardware decoder for VP8 that can decode full HD resolution (1080p) VP8 streams at 60 frames per second.[45]

Nvidia supports VP8 and provides both hardware decoding and encoding in the Tegra 4 and Tegra 4i SoCs.[46] Nvidia announced 3D video support for WebM through HTML5 and their Nvidia 3D Vision technology.[47][48][49]

On January 7, 2011, Rockchip released the world's first chip to host a full hardware implementation of 1080p VP8 decoding. The video acceleration in the RK29xx chip is handled by the WebM Project's G-Series 1 hardware decoder IP.[50] In June 2011, ZiiLABS demonstrated their 1080p VP8 decoder implementation running on the ZMS-20 processor. The chip's programmable media processing array is used to provide the VP8 acceleration.[51] ST-Ericsson and Huawei also had hardware implementations in their computer chips.[52]

The original WebM license terminated both patent grants and copyright redistribution terms if a patent infringement lawsuit was filed, causing concerns around GPL compatibility.
In response to those concerns, the WebM Project decoupled the patent grant from the copyright grant, offering the code under a standardBSD licenseand patents under a separate grant.[53]TheFree Software Foundation, which maintainsThe Free Software Definition, has given its endorsement for WebM and VP8[54]and considers the software's license to be compatible with theGNU General Public License.[55][56]On January 19, 2011, the Free Software Foundation announced its official support for the WebM project.[57]In February 2011,Microsoft's Vice President of Internet Explorer called upon Google to provide indemnification against patent suits.[58] Although Google has irrevocably released all of its patents on VP8 as a royalty-free format,[59]theMPEG LA, licensors of theH.264patent pool, have expressed interest in creating apatent poolfor VP8.[60][61]Conversely, other researchers cite evidence thatOn2made a particular effort to avoid any MPEG LA patents.[62]As a result of the threat, theUnited States Department of Justice(DOJ) started an investigation in March 2011 into the MPEG LA for its role in possibly attempting to stifle competition.[63][64]In March 2013, MPEG LA announced that it had reached an agreement with Google to license patents that "may be essential" for the implementation of the VP8 codec, and give Google the right to sub-license these patents to any third-party user of VP8 orVP9.[65][66] In March 2013,Nokiafiled an objection to theInternet Engineering Task Forceconcerning Google's proposal for the VP8 codec to be a core part of WebM, saying it holds essential patents to VP8's implementation.[67]Nokia listed 64 patents and 22 pending applications, adding it was not prepared to license any of them for VP8.[68]On August 5, 2013, a court in Mannheim, Germany, ruled that VP8 does not infringe a patent owned and asserted by Nokia.[69]
https://en.wikipedia.org/wiki/WebM
Computational neuroscience(also known astheoretical neuroscienceormathematical neuroscience) is a branch ofneurosciencewhich employsmathematics,computer science, theoretical analysis and abstractions of the brain to understand the principles that govern thedevelopment,structure,physiologyandcognitive abilitiesof thenervous system.[1][2][3][4] Computational neuroscience employs computational simulations[5]to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous.[6]The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field.[7] Computational neuroscience focuses on the description ofbiologicallyplausibleneurons(andneural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used inconnectionism,control theory,cybernetics,quantitative psychology,machine learning,artificial neural networks,artificial intelligenceandcomputational learning theory;[8][9][10]although mutual inspiration exists and sometimes there is no strict limit between fields,[11][12][13]with model abstraction in computational neuroscience depending on research scope and the granularity at which biological entities are analyzed. Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling vianetwork oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments. The term 'computational neuroscience' was introduced byEric L. Schwartz, who organized a conference, held in 1985 inCarmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the bookComputational Neuroscience.[14]The first of the annual open international meetings focused on Computational Neuroscience was organized byJames M. Bowerand John Miller inSan Francisco, Californiain 1989.[15]The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at theCalifornia Institute of Technologyin 1985. The early historical roots of the field[16]can be traced to the work of people includingLouis Lapicque,Hodgkin&Huxley,HubelandWiesel, andDavid Marr. Lapicque introduced theintegrate and firemodel of the neuron in a seminal article published in 1907,[17]a model still popular forartificial neural networksstudies because of its simplicity (see a recent review[18]). About 40 years later,HodgkinandHuxleydeveloped thevoltage clampand created the first biophysical model of theaction potential.HubelandWieseldiscovered that neurons in theprimary visual cortex, the first cortical area to process information coming from theretina, have oriented receptive fields and are organized in columns.[19]David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within thehippocampusandneocortexinteract, store, process, and transmit information. 
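Lapicque's integrate-and-fire model, mentioned above, is simple enough to state in a few lines of code. The sketch below is a leaky integrate-and-fire neuron in Python; all parameter values, units and names are illustrative choices of ours, not taken from the historical papers.

import numpy as np

def lif_neuron(current, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-65.0,
               v_thresh=-50.0, R=10.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R*I(t).
    Euler integration; emits a spike and resets when threshold is crossed.
    Units are illustrative (ms, mV)."""
    v = np.full(len(current), v_rest, dtype=float)
    spike_times = []
    for t in range(1, len(current)):
        v[t] = v[t-1] + dt / tau * (-(v[t-1] - v_rest) + R * current[t-1])
        if v[t] >= v_thresh:
            spike_times.append(t * dt)
            v[t] = v_reset
    return v, spike_times

# A constant suprathreshold current for 100 ms produces regular firing.
current = np.full(1000, 2.0)
_, spikes = lif_neuron(current)
print(len(spikes), "spikes in 100 ms")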
Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory. Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena. Even a single neuron has complex biophysical characteristics and can perform computations (e.g.[20]). Hodgkin and Huxley's original model employed only two voltage-sensitive currents (voltage-sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse the axolemma under certain conditions): the fast-acting sodium current and the inward-rectifying potassium current. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there is a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivities of these currents are an important topic of computational neuroscience.[21] The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.[22] There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer. Modeling the richness of biophysical properties on the single-neuron scale can supply mechanisms that serve as the building blocks for network dynamics.[23] However, detailed neuron descriptions are computationally expensive, and this computing cost can limit the pursuit of realistic network investigations, where many neurons need to be simulated. As a result, researchers who study large neural circuits typically represent each neuron and synapse with an artificially simple model, ignoring much of the biological detail. Hence there is a drive to produce simplified neuron models that can retain significant biological fidelity at a low computational overhead. Algorithms have been developed to produce faithful, faster-running, simplified surrogate neuron models from computationally expensive, detailed neuron models.[24] Glial cells participate significantly in the regulation of neuronal activity at both the cellular and the network level. Modeling this interaction helps to clarify the potassium cycle,[25][26] which is so important for maintaining homeostasis and for preventing epileptic seizures. Modeling also reveals the role of glial protrusions, which can in some cases penetrate the synaptic cleft to interfere with synaptic transmission and thus control synaptic communication.[27] Computational neuroscience aims to address a wide array of questions, including: How do axons and dendrites form during development? How do axons know where to target and how to reach these targets? How do neurons migrate to the proper position in the central and peripheral systems? How do synapses form?
We know frommolecular biologythat distinct parts of the nervous system release distinct chemical cues, fromgrowth factorstohormonesthat modulate and influence the growth and development of functional connections between neurons. Theoretical investigations into the formation and patterning of synaptic connection and morphology are still nascent. One hypothesis that has recently garnered some attention is theminimal wiring hypothesis, which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.[28] Early models on sensory processing understood within a theoretical framework are credited toHorace Barlow. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form ofefficient coding, where the neurons encoded information which minimized the number of spikes. Experimental and computational work have since supported this hypothesis in one form or another. For the example of visual processing, efficient coding is manifested in the forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them.[29] Further along the visual pathway, even the efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck.[30]A subsequent theory,V1 Saliency Hypothesis (V1SH), has been developed on exogenous attentional selection of a fraction of visual input for further processing, guided by a bottom-up saliency map in the primary visual cortex.[31] Current research in sensory processing is divided among a biophysical modeling of different subsystems and a more theoretical modeling of perception. Current models of perception have suggested that the brain performs some form ofBayesian inferenceand integration of different sensory information in generating our perception of the physical world.[32][33] Many models of the way the brain controls movement have been developed. This includes models of processing in the brain such as the cerebellum's role for error correction, skill learning in motor cortex and the basal ganglia, or the control of the vestibulo ocular reflex. This also includes many normative models, such as those of the Bayesian or optimal control flavor which are built on the idea that the brain efficiently solves its problems. Earlier models ofmemoryare primarily based on the postulates ofHebbian learning. Biologically relevant models such asHopfield nethave been developed to address the properties of associative (also known as "content-addressable") style of memory that occur in biological systems. These attempts are primarily focusing on the formation of medium- andlong-term memory, localizing in thehippocampus. One of the major problems in neurophysiological memory is how it is maintained and changed through multiple time scales. Unstablesynapsesare easy to train but also prone to stochastic disruption. Stablesynapsesforget less easily, but they are also harder to consolidate. It is likely that computational tools will contribute greatly to our understanding of how synapses function and change in relation to external stimulus in the coming decades. Biological neurons are connected to each other in a complex, recurrent fashion. These connections are, unlike mostartificial neural networks, sparse and usually specific. 
It is not known how information is transmitted through such sparsely connected networks, although specific areas of the brain, such as thevisual cortex, are understood in some detail.[34]It is also unknown what the computational functions of these specific connectivity patterns are, if any. The interactions of neurons in a small network can be often reduced to simple models such as theIsing model. Thestatistical mechanicsof such simple systems are well-characterized theoretically. Some recent evidence suggests that dynamics of arbitrary neuronal networks can be reduced to pairwise interactions.[35]It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence oftwo-photon microscopyandcalcium imaging, we now have powerful experimental methods with which to test the new theories regarding neuronal networks. In some cases the complex interactions betweeninhibitoryandexcitatoryneurons can be simplified usingmean-field theory, which gives rise to thepopulation modelof neural networks.[36]While many neurotheorists prefer such models with reduced complexity, others argue that uncovering structural-functional relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms like GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.[37] Visual attention can be described as a set of mechanisms that limit some processing to a subset of incoming stimuli.[38]Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. In order to have a more concrete specification of the mechanism underlying visual attention and the binding of features, a number of computational models have been proposed aiming to explain psychophysical findings. In general, all models postulate the existence of a saliency or priority map for registering the potentially interesting areas of the retinal input, and a gating mechanism for reducing the amount of incoming visual information, so that the limited computational resources of the brain can handle it.[39]An example theory that is being extensively tested behaviorally and physiologically is theV1 Saliency Hypothesisthat a bottom-up saliency map is created in the primary visual cortex to guide attention exogenously.[31]Computational neuroscience provides a mathematical framework for studying the mechanisms involved in brain function and allows complete simulation and prediction of neuropsychological syndromes. Computational modeling of higher cognitive functions has only recently[when?]begun. Experimental data comes primarily fromsingle-unit recordinginprimates. Thefrontal lobeandparietal lobefunction as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.[40] Thebrainseems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing andrecognizing faces. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines. 
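As an illustration of the mean-field population models mentioned above, the following Wilson–Cowan-style sketch (Python; the coupling weights, time constants and drive are arbitrary illustrative choices, not values from the cited literature) integrates two coupled rate equations for an excitatory and an inhibitory population.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan(steps=5000, dt=0.01, tau_e=1.0, tau_i=2.0,
                 w_ee=12.0, w_ei=10.0, w_ie=10.0, w_ii=2.0, drive=2.0):
    """Euler-integrate mean-field rates of coupled excitatory (E) and
    inhibitory (I) populations:
        tau_e dE/dt = -E + S(w_ee*E - w_ei*I + drive)
        tau_i dI/dt = -I + S(w_ie*E - w_ii*I)"""
    E = np.zeros(steps)
    I = np.zeros(steps)
    E[0] = 0.1
    for t in range(1, steps):
        E[t] = E[t-1] + dt / tau_e * (-E[t-1] + sigmoid(w_ee*E[t-1] - w_ei*I[t-1] + drive))
        I[t] = I[t-1] + dt / tau_i * (-I[t-1] + sigmoid(w_ie*E[t-1] - w_ii*I[t-1]))
    return E, I

E, I = wilson_cowan()
print(E[-5:])   # late-time excitatory activity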
The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. Integrative neuroscience attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.[41] The Computational Representational Understanding of Mind (CRUM) is another attempt at modeling human cognition through simulated processes like acquired rule-based systems in decision making and the manipulation of visual representations in decision making. One of the ultimate goals of psychology/neuroscience is to be able to explain the everyday experience of conscious life. Francis Crick, Giulio Tononi and Christof Koch made some attempts to formulate consistent frameworks for future work in neural correlates of consciousness (NCC), though much of the work in this field remains speculative.[42] Computational clinical neuroscience is a field that brings together experts in neuroscience, neurology, psychiatry, decision sciences and computational modeling to quantitatively define and investigate problems in neurological and psychiatric diseases, and to train scientists and clinicians who wish to apply these models to diagnosis and treatment.[43][44] Predictive computational neuroscience is a recent field that combines signal processing, neuroscience, clinical data and machine learning to predict brain states during coma[45] or anesthesia.[46] For example, it is possible to anticipate deep brain states using the EEG signal; these predicted states can be used to determine the hypnotic concentration to administer to the patient. Computational psychiatry is a newly emerging field that brings together experts in machine learning, neuroscience, neurology, psychiatry and psychology to provide an understanding of psychiatric disorders.[47][48][49] A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations (see: neuromorphic computing, physical neural network). One of the advantages of using a physical model computer such as this is that it takes the computational load off the processor, in the sense that the structural and some of the functional elements don't have to be programmed, since they are realized in hardware. In recent times,[50] neuromorphic technology has been used to build supercomputers which are used in international neuroscience collaborations. Examples include the Human Brain Project SpiNNaker supercomputer and the BrainScaleS computer.[51]
https://en.wikipedia.org/wiki/Computational_neuroscience
FRACTRAN is a Turing-complete esoteric programming language invented by the mathematician John Conway. A FRACTRAN program is an ordered list of positive fractions together with an initial positive integer input n. The program is run by updating the integer n as follows: for the first fraction f in the list for which nf is an integer, replace n by nf; repeat this rule until no fraction in the list produces an integer when multiplied by n, then halt. Conway 1987 gives the following FRACTRAN program, called PRIMEGAME, which finds successive prime numbers:

$\left(\frac{17}{91},\frac{78}{85},\frac{19}{51},\frac{23}{38},\frac{29}{33},\frac{77}{29},\frac{95}{23},\frac{77}{19},\frac{1}{17},\frac{11}{13},\frac{13}{11},\frac{15}{2},\frac{1}{7},\frac{55}{1}\right)$

Starting with n = 2, this FRACTRAN program generates the sequence of integers 2, 15, 825, 725, 1925, 2275, 425, 390, 330, 290, 770, ... After 2, this sequence contains the following powers of 2:

$2^2=4,\;2^3=8,\;2^5=32,\;2^7=128,\;2^{11}=2048,\;2^{13}=8192,\;2^{17}=131072,\;2^{19}=524288,\;\dots$ (sequence A034785 in the OEIS)

The exponents of these powers of two are the primes 2, 3, 5, etc. A FRACTRAN program can be seen as a type of register machine where the registers are stored in prime exponents in the argument $n$. Using Gödel numbering, a positive integer $n$ can encode an arbitrary number of arbitrarily large positive integer variables.[note 1] The value of each variable is encoded as the exponent of a prime number in the prime factorization of the integer. For example, the integer

$60 = 2^2 \times 3^1 \times 5^1$

represents a register state in which one variable (which we will call $v_2$) holds the value 2 and two other variables ($v_3$ and $v_5$) hold the value 1. All other variables hold the value 0. A FRACTRAN program is an ordered list of positive fractions. Each fraction represents an instruction that tests one or more variables, represented by the prime factors of its denominator. For example:

$f_1 = \frac{21}{20} = \frac{3\times 7}{2^2\times 5^1}$

tests $v_2$ and $v_5$. If $v_2 \geq 2$ and $v_5 \geq 1$, then it subtracts 2 from $v_2$ and 1 from $v_5$ and adds 1 to $v_3$ and 1 to $v_7$. For example:

$60 \cdot f_1 = 2^2\times 3^1\times 5^1 \cdot \frac{3\times 7}{2^2\times 5^1} = 3^2\times 7^1$

Since the FRACTRAN program is just a list of fractions, these test-decrement-increment instructions are the only allowed instructions in the FRACTRAN language. In addition the following restrictions apply: each time an instruction is executed, the variables that it tests are decremented, and, because every fraction is in lowest terms, the same variable cannot be both decremented and incremented by a single instruction; an instruction therefore consumes the variables it tests. The simplest FRACTRAN program is a single instruction such as

$\left(\frac{3}{2}\right)$

This program can be represented as a (very simple) algorithm as follows: while n is divisible by 2, replace n by 3n/2; otherwise, halt. Given an initial input of the form $2^a 3^b$, this program will compute the sequence $2^{a-1}3^{b+1}$, $2^{a-2}3^{b+2}$, etc., until eventually, after $a$ steps, no factors of 2 remain and the product with $\frac{3}{2}$ no longer yields an integer; the machine then stops with a final output of $3^{a+b}$. It therefore adds two integers together. We can create a "multiplier" by "looping" through the "adder". In order to do this we need to introduce states into our algorithm.
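Before introducing states, the execution rule and the adder can be made concrete with a tiny interpreter. The following Python sketch is ours (the function name fractran_run and the step cap are arbitrary); it also checks PRIMEGAME by collecting the exponents of the powers of two that appear in its trajectory.

from fractions import Fraction

def fractran_run(program, n, max_steps=100_000):
    """Run a FRACTRAN program given as (numerator, denominator) pairs.
    Yields the successive values of n; halts when no fraction applies."""
    fracs = [Fraction(p, q) for p, q in program]
    for _ in range(max_steps):
        yield n
        for f in fracs:
            m = n * f
            if m.denominator == 1:   # n*f is an integer: apply the rule
                n = int(m)
                break
        else:
            return                   # no fraction produced an integer: halt

# The one-instruction adder (3/2): input 2^a * 3^b halts at 3^(a+b).
print(list(fractran_run([(3, 2)], 2**3 * 3**4))[-1] == 3**7)   # True

# PRIMEGAME: the exponents of the powers of two in the trajectory.
PRIMEGAME = [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29),
             (95, 23), (77, 19), (1, 17), (11, 13), (13, 11), (15, 2),
             (1, 7), (55, 1)]
primes = [n.bit_length() - 1
          for n in fractran_run(PRIMEGAME, 2, max_steps=5000)
          if n > 2 and (n & (n - 1)) == 0]
print(primes[:4])   # [2, 3, 5, 7]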
This algorithm will take a number $2^a 3^b$ and produce $5^{ab}$: State B is a loop that adds $v_3$ to $v_5$ and also moves $v_3$ to $v_7$, and state A is an outer control loop that repeats the loop in state B $v_2$ times. State A also restores the value of $v_3$ from $v_7$ after the loop in state B has completed. We can implement states using new variables as state indicators. The state indicators for state B will be $v_{11}$ and $v_{13}$. Note that we require two state control indicators for one loop; a primary flag ($v_{11}$) and a secondary flag ($v_{13}$). Because each indicator is consumed whenever it is tested, we need a secondary indicator to say "continue in the current state"; this secondary indicator is swapped back to the primary indicator in the next instruction, and the loop continues. Adding FRACTRAN state indicators and instructions to the multiplication algorithm table, we have: When we write out the FRACTRAN instructions, we must put the state A instructions last, because state A has no state indicators; it is the default state if no state indicators are set. So as a FRACTRAN program, the multiplier becomes:

$\left(\frac{455}{33},\frac{11}{13},\frac{1}{11},\frac{3}{7},\frac{11}{2},\frac{1}{3}\right)$

With input $2^a 3^b$ this program produces output $5^{ab}$.[note 2] In a similar way, we can create a FRACTRAN "subtractor", and repeated subtractions allow us to create a "quotient and remainder" algorithm as follows: Writing out the FRACTRAN program, we have:

$\left(\frac{91}{66},\frac{11}{13},\frac{1}{33},\frac{85}{11},\frac{57}{119},\frac{17}{19},\frac{11}{17},\frac{1}{3}\right)$

and input $2^n 3^d 11$ produces output $5^q 7^r$, where $n = qd + r$ and $0 \leq r < d$. Conway's prime generating algorithm above is essentially a quotient and remainder algorithm within two loops. Given input of the form $2^n 7^m$ where $0 \leq m < n$, the algorithm tries to divide $n+1$ by each number from $n$ down to 1, until it finds the largest number $k$ that is a divisor of $n+1$. It then returns $2^{n+1} 7^{k-1}$ and repeats. The only times that the sequence of state numbers generated by the algorithm produces a power of 2 is when $k$ is 1 (so that the exponent of 7 is 0), which only occurs if the exponent of 2 is a prime. A step-by-step explanation of Conway's algorithm can be found in Havil (2007). For this program, reaching the prime numbers 2, 3, 5, 7, ... requires respectively 19, 69, 281, 710, ... steps (sequence A007547 in the OEIS). A variant of Conway's program also exists,[1] which differs from the above version by two fractions:

$\left(\frac{17}{91},\frac{78}{85},\frac{19}{51},\frac{23}{38},\frac{29}{33},\frac{77}{29},\frac{95}{23},\frac{77}{19},\frac{1}{17},\frac{11}{13},\frac{13}{11},\frac{15}{14},\frac{15}{2},\frac{55}{1}\right)$

This variant is a little faster: reaching 2, 3, 5, 7, ... takes it 19, 69, 280, 707, ... steps (sequence A007546 in the OEIS).
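Assuming the fractran_run sketch above, the multiplier and the quotient-and-remainder programs can be checked directly against their stated encodings; both checks should print True if the programs behave as described.

multiplier = [(455, 33), (11, 13), (1, 11), (3, 7), (11, 2), (1, 3)]
print(list(fractran_run(multiplier, 2**3 * 3**4))[-1] == 5**12)   # 3*4 = 12

divmod_prog = [(91, 66), (11, 13), (1, 33), (85, 11),
               (57, 119), (17, 19), (11, 17), (1, 3)]
n, d = 17, 5   # 17 = 3*5 + 2, so expect 5^3 * 7^2
print(list(fractran_run(divmod_prog, 2**n * 3**d * 11))[-1] == 5**3 * 7**2)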
A single iteration of this program, checking a particular number $N$ for primeness, takes the following number of steps:

$N - 1 + (6N + 2)(N - b) + 2\sum_{d=b}^{N-1}\left\lfloor \frac{N}{d} \right\rfloor,$

where $b < N$ is the largest integer divisor of $N$ and $\lfloor x \rfloor$ is the floor function.[2] In 1999, Devin Kilminster demonstrated a shorter, ten-instruction program:[3]

$\left(\frac{7}{3},\frac{99}{98},\frac{13}{49},\frac{39}{35},\frac{36}{91},\frac{10}{143},\frac{49}{13},\frac{7}{11},\frac{1}{2},\frac{91}{1}\right).$

For the initial input n = 10, successive primes are generated by subsequent powers of 10. The following FRACTRAN program:

$\left(\frac{3\cdot 11}{2^2\cdot 5},\frac{5}{11},\frac{13}{2\cdot 5},\frac{1}{5},\frac{2}{3},\frac{2\cdot 5}{7},\frac{7}{2}\right)$

calculates the Hamming weight $H(a)$ of the binary expansion of $a$, i.e. the number of 1s in the binary expansion of $a$.[4] Given input $2^a$, its output is $13^{H(a)}$. The program can be analysed as follows:
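The stated behaviour of the Hamming-weight program can also be verified empirically, assuming the fractran_run sketch above (the program list and expected output are taken from the text; the check itself is ours):

hamming = [(33, 20), (5, 11), (13, 10), (1, 5), (2, 3), (10, 7), (7, 2)]
a = 45                     # binary 101101, so H(a) = 4
final = list(fractran_run(hamming, 2**a))[-1]
print(final == 13**4)      # True if the program behaves as stated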
https://en.wikipedia.org/wiki/FRACTRAN
In mathematics, ancient Egyptian multiplication (also known as Egyptian multiplication, Ethiopian multiplication, Russian multiplication, or peasant multiplication), one of two multiplication methods used by scribes, is a systematic method for multiplying two numbers that does not require the multiplication table, only the ability to multiply and divide by 2, and to add. It decomposes one of the multiplicands (preferably the smaller) into a sum of powers of two and then creates a table of doublings of the second multiplicand by every value of the set, which are summed to give the result of the multiplication. This method may be called mediation and duplation, where mediation means halving one number and duplation means doubling the other number. It is still used in some areas.[1] The second Egyptian multiplication and division technique was known from the hieratic Moscow and Rhind Mathematical Papyri, written in the seventeenth century B.C. by the scribe Ahmes.[2] Although in ancient Egypt the concept of base 2 did not exist, the algorithm is essentially the same algorithm as long multiplication after the multiplier and multiplicand are converted to binary. The method as interpreted by conversion to binary is therefore still in wide use today, as implemented by binary multiplier circuits in modern computer processors.[1] The ancient Egyptians had laid out tables of a great number of powers of two, rather than recalculating them each time. To decompose a number, they identified the powers of two which make it up. The Egyptians knew empirically that a given power of two would only appear once in a number. For the decomposition, they proceeded methodically; they would initially find the largest power of two less than or equal to the number in question, subtract it out and repeat until nothing remained. (The Egyptians did not make use of the number zero in mathematics.) After the decomposition of the first multiplicand, the person would construct a table of powers of two times the second multiplicand, from one up to the largest power of two found during the decomposition. The result is obtained by adding the numbers from the second column for which the corresponding power of two makes up part of the decomposition of the first multiplicand.[1] Because, mathematically speaking, multiplication of natural numbers is just "exponentiation in the additive monoid", this multiplication method can also be recognised as a special case of the square and multiply algorithm for exponentiation. 25 × 7 = ? Decomposition of the number 25: the largest power of two less than or equal to 25 is 16, then 25 − 16 = 9 leaves 8 and finally 1, so 25 = 16 + 8 + 1; the second multiplicand is 7. As 25 = 16 + 8 + 1, the corresponding multiples of 7 (112, 56 and 7) are added to get 25 × 7 = 112 + 56 + 7 = 175. In the Russian peasant method, the powers of two in the decomposition of the multiplicand are found by writing it on the left and progressively halving the left column, discarding any remainder, until the value is 1 (or −1, in which case the eventual sum is negated), while doubling the right column as before. Lines with even numbers in the left column are struck out, and the remaining numbers on the right are added together.[3] 238 × 13 = ?
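The halving-and-doubling procedure translates directly into code. The following Python sketch (the function name is ours) answers the 238 × 13 example: halve the left value, double the right, and accumulate the right-hand value whenever the left one is odd.

def peasant_multiply(a, b):
    """Multiply two non-negative integers using only halving, doubling
    and addition (the Russian peasant method described above)."""
    total = 0
    while a >= 1:
        if a % 2 == 1:     # odd left-hand value: keep this row
            total += b
        a //= 2            # halve the left column, discarding any remainder
        b *= 2             # double the right column
    return total

print(peasant_multiply(25, 7))     # 175, the worked example above
print(peasant_multiply(238, 13))   # 3094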
https://en.wikipedia.org/wiki/Peasant_multiplication
Incondensed matter physics,Anderson localization(also known asstrong localization)[1]is the absence of diffusion of waves in adisorderedmedium. This phenomenon is named after the American physicistP. W. Anderson, who was the first to suggest that electron localization is possible in a lattice potential, provided that the degree ofrandomness(disorder) in the lattice is sufficiently large, as can be realized for example in a semiconductor withimpuritiesordefects.[2] Anderson localization is a general wave phenomenon that applies to the transport of electromagnetic waves, acoustic waves, quantum waves, spin waves, etc. This phenomenon is to be distinguished fromweak localization, which is the precursor effect of Anderson localization (see below), and fromMott localization, named after SirNevill Mott, where the transition from metallic to insulating behaviour isnotdue to disorder, but to a strong mutualCoulomb repulsionof electrons. In the originalAnderson tight-binding model, the evolution of thewave functionψon thed-dimensional latticeZdis given by theSchrödinger equation where theHamiltonianHis given by[2] wherej,k{\displaystyle j,k}are lattice locations. The self-energyEj{\displaystyle E_{j}}is taken asrandom and independently distributed. The interaction potentialV(r)=V(|j−k|){\displaystyle V(r)=V(|j-k|)}is required to fall off faster than1/r3{\displaystyle 1/r^{3}}in ther→∞{\displaystyle r\to \infty }limit. For example, one may takeEj{\displaystyle E_{j}}uniformly distributedwithin a band of energies[−W,+W],{\displaystyle [-W,+W],}and Starting withψ0{\displaystyle \psi _{0}}localized at the origin, one is interested in how fast the probability distribution|ψ|2{\displaystyle |\psi |^{2}}diffuses. Anderson's analysis shows the following: The phenomenon of Anderson localization, particularly that of weak localization, finds its origin in thewave interferencebetween multiple-scattering paths. In the strong scattering limit, the severe interferences can completely halt the waves inside the disordered medium. For non-interacting electrons, a highly successful approach was put forward in 1979 by Abrahamset al.[3]This scaling hypothesis of localization suggests that a disorder-inducedmetal-insulator transition(MIT) exists for non-interacting electrons in three dimensions (3D) at zero magnetic field and in the absence of spin-orbit coupling. Much further work has subsequently supported these scaling arguments both analytically and numerically (Brandeset al., 2003; see Further Reading). In 1D and 2D, the same hypothesis shows that there are no extended states and thus no MIT or only an apparent MIT.[4]However, since 2 is the lower critical dimension of the localization problem, the 2D case is in a sense close to 3D: states are only marginally localized for weak disorder and a smallspin-orbit couplingcan lead to the existence of extended states and thus an MIT. Consequently, the localization lengths of a 2D system with potential-disorder can be quite large so that in numerical approaches one can always find a localization-delocalization transition when either decreasing system size for fixed disorder or increasing disorder for fixed system size. Most numerical approaches to the localization problem use the standard tight-binding AndersonHamiltonianwith onsite-potential disorder. Characteristics of the electroniceigenstatesare then investigated by studies of participation numbers obtained by exact diagonalization, multifractal properties, level statistics and many others. 
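A minimal version of the exact-diagonalization approach just described can be sketched in a few lines (Python/NumPy). This is an illustration under simplifying assumptions: hopping is restricted to nearest neighbours, and the system size, disorder strength and random seed are arbitrary. The inverse participation ratio of each eigenstate distinguishes extended from localized behaviour.

import numpy as np

rng = np.random.default_rng(0)
L, W, V = 200, 3.0, 1.0        # sites, disorder half-width, hopping

# 1D Anderson tight-binding Hamiltonian: random on-site energies drawn
# uniformly from [-W, +W], hopping restricted to nearest neighbours.
H = np.diag(rng.uniform(-W, W, size=L))
H += V * (np.eye(L, k=1) + np.eye(L, k=-1))

energies, states = np.linalg.eigh(H)

# Inverse participation ratio per eigenstate: ~1/L when extended,
# order one when localized on a few sites.
ipr = np.sum(np.abs(states)**4, axis=0)
print(f"mean IPR = {ipr.mean():.3f} vs extended-state value {1/L:.3f}")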
Especially fruitful is thetransfer-matrix method(TMM) which allows a direct computation of the localization lengths and further validates the scaling hypothesis by a numerical proof of the existence of a one-parameter scaling function. Direct numerical solution of Maxwell equations to demonstrate Anderson localization of light has been implemented (Conti and Fratalocchi, 2008). Recent work has shown that a non-interacting Anderson localized system can becomemany-body localizedeven in the presence of weak interactions. This result has been rigorously proven in 1D, while perturbative arguments exist even for two and three dimensions. Anderson localization can be observed in a perturbed periodic potential where the transverse localization of light is caused by random fluctuations on a photonic lattice. Experimental realizations of transverse localization were reported for a 2D lattice (Schwartzet al., 2007) and a 1D lattice (Lahiniet al., 2006). Transverse Anderson localization of light has also been demonstrated in an optical fiber medium (Karbasiet al., 2012) and a biological medium (Choiet al., 2018), and has also been used to transport images through the fiber (Karbasiet al., 2014). It has also been observed by localization of aBose–Einstein condensatein a 1D disordered optical potential (Billyet al., 2008; Roatiet al., 2008). In 3D, observations are more rare. Anderson localization of elastic waves in a 3D disordered medium has been reported (Huet al., 2008). The observation of the MIT has been reported in a 3D model with atomic matter waves (Chabéet al., 2008). The MIT, associated with the nonpropagative electron waves has been reported in a cm-sized crystal (Yinget al., 2016).Random laserscan operate using this phenomenon. The existence of Anderson localization for light in 3D was debated for years (Skipetrovet al., 2016) and remains unresolved today. Reports of Anderson localization of light in 3D random media were complicated by the competing/masking effects of absorption (Wiersmaet al., 1997; Storzeret al., 2006; Scheffoldet al., 1999; see Further Reading) and/or fluorescence (Sperlinget al., 2016). Recent experiments (Naraghiet al., 2016; Cobuset al., 2023) support theoretical predictions that the vector nature of light prohibits the transition to Anderson localization (John, 1992; Skipetrovet al., 2019). Standard diffusion has no localization property, being in disagreement with quantum predictions. However, it turns out that it is based on approximation of theprinciple of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with largest entropy. This approximation is repaired inmaximal entropy random walk, also repairing the disagreement: it turns out to lead to exactly the quantum ground state stationary probability distribution with its strong localization properties.[5][6]
https://en.wikipedia.org/wiki/Anderson_localization
Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones. Polkit allows a level of control of centralized system policy. It is developed and maintained by David Zeuthen from Red Hat and hosted by the freedesktop.org project. It is published as free software under the terms of version 2 of the GNU Lesser General Public License.[3] Since version 0.105, released in April 2012,[4][5] the name of the project was changed from PolicyKit to polkit to emphasize that the system component was rewritten[6] and that the API had changed, breaking backward compatibility.[7][dubious – discuss] Fedora became the first distribution to include PolicyKit, and it has since been used in other distributions, including Ubuntu since version 8.04 and openSUSE since version 10.3. Some distributions, like Fedora,[8] have already switched to the rewritten polkit. It is also possible to use polkit to execute commands with elevated privileges using the command pkexec followed by the command intended to be executed (with root permission).[9] However, it may be preferable to use sudo, as this command provides more flexibility and security, in addition to being easier to configure.[10] The polkitd daemon implements Polkit functionality.[11] A memory corruption vulnerability, PwnKit (CVE-2021-4034[12]), discovered in the pkexec command (installed on all major Linux distributions), was announced on January 25, 2022.[13][14] The vulnerability dates back to the original distribution from 2009. The vulnerability received a CVSS score of 7.8 ("High severity"), reflecting the serious factors involved in a possible exploit: unprivileged users can gain full root privileges, regardless of the underlying machine architecture or whether the polkit daemon is running or not.
https://en.wikipedia.org/wiki/PolicyKit
Learning analyticsis the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.[1]The growth ofonline learningsince the 1990s, particularly inhigher education, has contributed to the advancement of Learning Analytics as student data can be captured and made available for analysis.[2][3][4]When learners use anLMS,social media, or similar online tools, their clicks, navigation patterns, time on task,social networks,information flow, and concept development through discussions can be tracked. The rapid development ofmassive open online courses(MOOCs) offers additional data for researchers to evaluate teaching and learning in online environments.[5] Although a majority of Learning Analytics literature has started to adopt the aforementioned definition, the definition and aims of Learning Analytics are still contested. One earlier definition discussed by the community suggested that Learning Analytics is the use of intelligent data, learner-produced data, and analysis models to discover information and social connections for predicting and advising people's learning.[6]But this definition has been criticised byGeorge Siemens[7][non-primary source needed]andMike Sharkey.[8][non-primary source needed] Dr. Wolfgang GrellerandDr. Hendrik Drachslerdefined learning analytics holistically as a framework. They proposed that it is a generic design framework that can act as a useful guide for setting up analytics services in support of educational practice and learner guidance, in quality assurance, curriculum development, and in improving teacher effectiveness and efficiency. It uses ageneral morphological analysis(GMA) to divide the domain into six "critical dimensions".[9] The broader term "Analytics" has been defined as the science of examining data to draw conclusions and, when used indecision-making, to present paths or courses of action.[10]From this perspective, Learning Analytics has been defined as a particular case ofAnalytics, in whichdecision-makingaims to improve learning and education.[11]During the 2010s, this definition of analytics has gone further to incorporate elements ofoperations researchsuch asdecision treesandstrategy mapsto establishpredictive modelsand to determine probabilities for certain courses of action.[10] Another approach for defining Learning Analytics is based on the concept ofAnalyticsinterpreted as theprocessof developing actionable insights through problem definition and the application ofstatistical modelsand analysis against existing and/or simulated future data.[12][13]From this point of view, Learning Analytics emerges as a type ofAnalytics(as aprocess), in which the data, the problem definition and the insights are learning-related. In 2016, a research jointly conducted by the New Media Consortium (NMC) and the EDUCAUSE Learning Initiative (ELI) -anEDUCAUSEProgram- describes six areas of emerging technology that will have had significant impact onhigher educationand creative expression by the end of 2020. 
As a result of this research, Learning Analytics was defined as an educational application of web analytics aimed at learner profiling, a process of gathering and analyzing details of individual student interactions in online learning activities.[14] In 2017, Gašević, Kovanović, and Joksimović proposed a consolidated model of learning analytics.[15] The model posits that learning analytics is defined at the intersection of three disciplines: data science, theory, and design. Data science offers computational methods and techniques for data collection, pre-processing, analysis, and presentation. Theory is typically drawn from the literature in the learning sciences, education, psychology, sociology, and philosophy. The design dimension of the model includes: learning design, interaction design, and study design. In 2015, Gašević, Dawson, and Siemens argued that computational aspects of learning analytics need to be linked with the existing educational research in order for Learning Analytics to deliver its promise to understand and optimize learning.[16] Differentiating the fields of educational data mining (EDM) and learning analytics (LA) has been a concern of several researchers. George Siemens takes the position that educational data mining encompasses both learning analytics and academic analytics,[17] the former of which is aimed at governments, funding agencies, and administrators instead of learners and faculty. Baepler and Murdoch define academic analytics as an area that "...combines select institutional data, statistical analysis, and predictive modeling to create intelligence upon which learners, instructors, or administrators can change academic behavior".[18] They go on to attempt to disambiguate educational data mining from academic analytics based on whether the process is hypothesis-driven or not, though Brooks[19] questions whether this distinction exists in the literature. Brooks[19] instead proposes that a better distinction between the EDM and LA communities is in the roots of where each community originated, with authorship in the EDM community being dominated by researchers coming from intelligent tutoring paradigms, and learning analytics researchers being more focused on enterprise learning systems (e.g. learning content management systems). Regardless of the differences between the LA and EDM communities, the two areas have significant overlap both in the objectives of investigators as well as in the methods and techniques that are used in the investigation. In the MS program offering in learning analytics at Teachers College, Columbia University, students are taught both EDM and LA methods.[20] Learning Analytics, as a field, has multiple disciplinary roots. While the fields of artificial intelligence (AI), statistical analysis, machine learning, and business intelligence offer an additional narrative, the main historical roots of analytics are the ones directly related to human interaction and the education system.[5] More particularly, the history of Learning Analytics is tightly linked to the development of four Social Sciences fields that have converged throughout time. These fields pursued, and still pursue, four goals: A diversity of disciplines and research activities have influenced these four aspects throughout the last decades, contributing to the gradual development of learning analytics. Some of the most determinant disciplines are Social Network Analysis, User Modelling, Cognitive Modelling, Data Mining and E-Learning.
The history of Learning Analytics can be understood through the rise and development of these fields.[5] Social network analysis (SNA) is the process of investigating social structures through the use of networks and graph theory.[21] It characterizes networked structures in terms of nodes (individual actors, people, or things within the network) and the ties, edges, or links (relationships or interactions) that connect them.[citation needed] Social network analysis is prominent in Sociology, and its development has had a key role in the emergence of Learning Analytics. One of the first attempts to provide a deeper understanding of interactions is by the Austrian-American sociologist Paul Lazarsfeld. In 1944, Lazarsfeld made the statement of "who talks to whom about what and to what effect".[22] That statement forms what today is still the area of interest or the target within social network analysis, which tries to understand how people are connected and what insights can be derived as a result of their interactions, a core idea of Learning Analytics.[5] Citation analysis: the American linguist Eugene Garfield was an early pioneer of analytics in science. In 1955, Garfield led the first attempt to analyse the structure of science regarding how developments in science can be better understood by tracking the associations (citations) between articles (how they reference one another, the importance of the resources that they include, citation frequency, etc.). Through tracking citations, scientists can observe how research is disseminated and validated. This was the basic idea of what eventually became a "page rank", which in the early days of Google (beginning of the 21st century) was one of the key ways of understanding the structure of a field by looking at page connections and the importance of those connections. The algorithm PageRank, the first search algorithm used by Google, was based on this principle.[23][24] The American computer scientist Larry Page, Google's co-founder, defined PageRank as "an approximation of the importance" of a particular resource.[25] Educationally, citation or link analysis is important for mapping knowledge domains.[5] The essential idea behind these attempts is the realization that, as data increases, individuals, researchers or business analysts need to understand how to track the underlying patterns behind the data and how to gain insight from them. And this is also a core idea in Learning Analytics.[5] Digitalization of social network analysis: during the early 1970s, pushed by the rapid evolution in technology, social network analysis transitioned into the analysis of networks in digital settings.[5] During the first decade of the century, Professor Caroline Haythornthwaite explored the impact of media type on the development of social ties, observing that human interactions can be analyzed to gain novel insight not from strong interactions (i.e. people that are strongly related to the subject) but, rather, from weak ties. This provides Learning Analytics with a central idea: apparently unrelated data may hide crucial information. As an example of this phenomenon, an individual looking for a job will have a better chance of finding new information through weak connections rather than strong ones.[31] (Siemens, George (2013-03-17). Intro to Learning Analytics. LAK13 open online course for University of Texas at Austin & EdX. 11 minutes in. Retrieved 2018-11-01.) Her research also focused on the way that different types of media can impact the formation of networks.
Her work contributed greatly to the development of social network analysis as a field. Important ideas were inherited by Learning Analytics, such that a range of metrics and approaches can define the importance of a particular node, the value of information exchange, the way that clusters are connected to one another, structural gaps that might exist within those networks, etc.[5] The application of social network analysis in digital learning settings has been pioneered by Professor Shane P. Dawson. He has developed a number of software tools, such as Social Networks Adapting Pedagogical Practice (SNAPP), for evaluating the networks that form in learning management systems when students engage in forum discussions.[32] The main goal of user modelling is the customization and adaptation of systems to the user's specific needs, especially in their interaction with computing systems. The importance of computers being able to respond to people as individuals was already being understood in the 1970s. Dr Elaine Rich predicted in 1979 that "computers are going to treat their users as individuals with distinct personalities, goals, and so forth".[33] This is a central idea not only educationally but also in general web use activity, in which personalization is an important goal.[5] User modelling has become important in research in human-computer interaction, as it helps researchers to design better systems by understanding how users interact with software.[34] Recognizing the unique traits, goals, and motivations of individuals remains an important activity in learning analytics.[5] Personalization and adaptation of learning content is an important present and future direction of the learning sciences, and its history within education has contributed to the development of learning analytics.[5] Hypermedia is a nonlinear medium of information that includes graphics, audio, video, plain text and hyperlinks. The term was first used in a 1965 article written by the American sociologist Ted Nelson.[35] Adaptive hypermedia builds on user modelling by increasing the personalization of content and interaction. In particular, adaptive hypermedia systems build a model of the goals, preferences and knowledge of each user, in order to adapt to the needs of that user. From the end of the 20th century onwards, the field grew rapidly, mainly because the internet boosted research into adaptivity and because research experience in the field accumulated and consolidated. In turn, Learning Analytics has been influenced by this strong development.[36] Education/cognitive modelling has been applied to tracing how learners develop knowledge. Computers have been used in education as learning tools for decades. In 1989, Hugh Burns argued for the adoption and development of intelligent tutor systems that ultimately would pass three levels of "intelligence": domain knowledge, learner knowledge evaluation, and pedagogical intervention. During the 21st century, these three levels have remained relevant for researchers and educators.[37] In the 1990s, the academic activity around cognitive models focused on attempting to develop systems that possess a computational model capable of solving the problems that are given to students in the ways students are expected to solve the problems.[38] Cognitive modelling has contributed to the rise in popularity of intelligent or cognitive tutors.
Once cognitive processes can be modelled, software (tutors) can be developed to support learners in the learning process. The research base in this field became, eventually, significantly relevant for learning analytics during the 21st century.[5][39][40] While big data analytics has been more and more widely applied in education, Wise and Shaffer[41] addressed the importance of a theory-based approach in the analysis. Epistemic Frame Theory conceptualizes the "ways of thinking, acting, and being in the world" in a collaborative learning environment. Specifically, the framework is based on the context of a Community of Practice (CoP), which is a group of learners, with common goals, standards and prior knowledge and skills, solving a complex problem. Due to the nature of a CoP, it is important to study the connections between elements (learners, knowledge, concepts, skills and so on). To identify the connections, the co-occurrences of elements in learners' data are identified and analyzed. Shaffer and Ruis[42] pointed out the concept of closing the interpretive loop, by emphasizing the transparency and validation of the model, the interpretation and the original data. The loop can be closed by a theoretically sound analytics approach, Epistemic Network Analysis. In a discussion of the history of analytics, Adam Cooper highlights a number of communities from which learning analytics has drawn techniques, mainly during the first decades of the 21st century, including:[43] The first graduate program focused specifically on learning analytics was created by Ryan S. Baker and launched in the Fall 2015 semester at Teachers College, Columbia University. The program description states that "(...) data about learning and learners are being generated today on an unprecedented scale. The fields of learning analytics (LA) and educational data mining (EDM) have emerged with the aim of transforming this data into new insights that can benefit students, teachers, and administrators. As one of world's leading teaching and research institutions in education, psychology, and health, we are proud to offer an innovative graduate curriculum dedicated to improving education through technology and data analysis."[44] Masters programs are now offered at several other universities as well, including the University of Texas at Arlington, the University of Wisconsin, and the University of Pennsylvania. Methods for learning analytics include: Learning analytics can be, and has been, applied in a wide range of contexts. Analytics have been used for: There is a broad awareness of analytics across educational institutions for various stakeholders,[10] but the way learning analytics is defined and implemented may vary, including:[13] Some motivations and implementations of analytics may come into conflict with others, for example highlighting potential conflict between analytics for individual learners and organisational stakeholders.[13] Much of the software that is currently used for learning analytics duplicates the functionality of web analytics software, but applies it to learner interactions with content. Social network analysis tools are commonly used to map social connections and discussions; a small sketch of this follows below.
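As a small illustration of mapping discussion networks (the data and names below are invented; SNAPP-style tools operate on real LMS forum logs), one can build an undirected reply graph and rank participants by degree:

from collections import defaultdict

# Toy forum thread as (poster, poster being replied to); the names and
# data are invented purely for the illustration.
replies = [("ana", "ben"), ("ben", "ana"), ("cai", "ana"),
           ("dee", "ana"), ("ben", "cai"), ("dee", "ben")]

# Undirected social graph: who has interacted with whom.
graph = defaultdict(set)
for a, b in replies:
    graph[a].add(b)
    graph[b].add(a)

# Degree centrality: the number of distinct discussion partners.
for person in sorted(graph, key=lambda p: -len(graph[p])):
    print(person, len(graph[person]))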
Some examples of learning analytics software tools include: The ethics of data collection, analytics, reporting and accountability has been raised as a potential concern for learning analytics,[9][57][58] with concerns raised regarding: As Kay, Korn and Oppenheim point out, the range of data is wide, potentially derived from:[60] Thus the legal and ethical situation is challenging and differs from country to country, raising implications for:[60] In some prominent cases, like the inBloom disaster,[61] even fully functional systems have been shut down due to lack of trust in the data collection by governments, stakeholders and civil rights groups. Since then, the learning analytics community has extensively studied legal conditions in a series of expert workshops on "Ethics & Privacy 4 Learning Analytics" that constitute the use of trusted learning analytics.[62][non-primary source needed] Drachsler and Greller released an 8-point checklist named DELICATE, based on the intensive studies in this area, to demystify the ethics and privacy discussions around learning analytics.[63] It shows ways to design and provide privacy-conforming learning analytics that can benefit all stakeholders. The full DELICATE checklist is publicly available.[64] Privacy management practices of students have shown discrepancies between one's privacy beliefs and one's privacy-related actions.[65] Learning analytics systems can have default settings that allow data collection from students if they do not choose to opt out.[65] Some online education systems such as edX or Coursera do not offer a choice to opt out of data collection.[65] In order for certain learning analytics to function properly, these systems utilize cookies to collect data.[65] In 2012, a systematic overview of learning analytics and its key concepts was provided by Professor Mohamed Chatti and colleagues through a reference model based on four dimensions, namely: Chatti, Muslim and Schroeder[68] note that the aim of open learning analytics (OLA) is to improve learning effectiveness in lifelong learning environments. The authors refer to OLA as an ongoing analytics process that encompasses diversity at all four dimensions of the learning analytics reference model.[66] For general audience introductions, see:
https://en.wikipedia.org/wiki/Learning_analytics
End of message or EOM (as in "(EOM)" or "<EOM>") signifies the end of a message, often an e-mail message.[1] The subject of an e-mail message may contain such an abbreviation to signify that all content is in the subject line, so that the message itself does not need to be opened (e.g., "No classes Monday (EOM)" or "Midterm delayed <EOM>"). This practice can save the time of the receiver and has been recommended to increase productivity.[1][2] EOM can also be used in conjunction with "no reply necessary", or NRN, to signify that the sender does not require (or would prefer not to receive) a response (e.g., "Campaign has launched (EOM/NRN)"), or with "reply requested", or RR, to signify that the sender wishes a response (e.g., "Got a minute? (EOM/RR)"). These are examples of Internet slang. EOM is often used this way, as a synonym for NRN, in blogs and forums online. It is often a snide way for commenters to imply that their message is so perfect that there can be no logical response to it, or it can be used as a way of telling another specific poster to stop writing back.[citation needed] EOM also refers to the final three bursts of tone in an Emergency Alert System alert, which mark the point at which the alert is finished. In earlier communications methods, an "end of message" (EOM) sequence of characters indicated to a receiving device or operator that the current message had ended. In teleprinter systems, the sequence "NNNN", on a line by itself, is an end-of-message indicator. In several Morse code conventions, including amateur radio, the prosign AR (dit dah dit dah dit) means end of message. In the original ASCII code, "EOM" corresponded to code 03 hex, which has since been renamed "ETX" ("end of text").[3]
https://en.wikipedia.org/wiki/End_of_message
A"return-to-libc" attackis acomputer securityattack usually starting with abuffer overflowin which a subroutinereturn addresson acall stackis replaced by an address of a subroutine that is already present in theprocessexecutable memory, bypassing theno-execute bitfeature (if present) and ridding the attacker of the need toinjecttheir own code. The first example of this attack in the wild was contributed byAlexander Peslyakon theBugtraqmailing list in 1997.[1] OnPOSIX-compliantoperating systemstheC standard library("libc") is commonly used to provide a standardruntime environmentfor programs written in theC programming language. Although the attacker could make the code return anywhere,libcis the most likely target, as it is almost always linked to the program, and it provides useful calls for an attacker (such as thesystemfunction used to execute shell commands). Anon-executablestack can prevent some buffer overflow exploitation, however it cannot prevent a return-to-libc attack because in the return-to-libc attack only existing executable code is used. On the other hand, these attacks can only call preexisting functions.Stack-smashing protectioncan prevent or obstruct exploitation as it may detect the corruption of the stack and possibly flush out the compromised segment. "ASCII armoring" is a technique that can be used to obstruct this kind of attack. With ASCII armoring, all the system libraries (e.g., libc) addresses contain aNULL byte(0x00). This is commonly done by placing them in the first0x01010101bytes of memory (a few pages more than 16 MB, dubbed the "ASCII armor region"), as every address up to (but not including) this value contains at least one NULL byte. This makes it impossible to emplace code containing those addresses using string manipulation functions such asstrcpy(). However, this technique does not work if the attacker has a way to overflow NULL bytes into the stack. If the program is too large to fit in the first 16MB, protection may be incomplete.[2]This technique is similar to another attack known asreturn-to-pltwhere, instead of returning to libc, the attacker uses the Procedure Linkage Table (PLT) functions loaded in theposition-independent code(e.g.,system@plt, execve@plt, sprintf@plt, strcpy@plt).[3] Address space layout randomization(ASLR) makes this type of attack extremely unlikely to succeed on64-bit machinesas the memory locations of functions are random. For32-bit systems, however, ASLR provides little benefit since there are only 16 bits available for randomization, and they can be defeated bybrute forcein a matter of minutes.[4]
https://en.wikipedia.org/wiki/Return-to-libc_attack
In computing, an abstraction layer or abstraction level is a way of hiding the working details of a subsystem. Examples of software models that use layers of abstraction include the OSI model for network protocols, OpenGL, and other graphics libraries, which allow the separation of concerns to facilitate interoperability and platform independence. In computer science, an abstraction layer is a generalization of a conceptual model or algorithm, away from any specific implementation. These generalizations arise from broad similarities that are best encapsulated by models that express similarities present in various specific implementations. The simplification provided by a good abstraction layer allows for easy reuse by distilling a useful concept or design pattern so that situations where it may be accurately applied can be quickly recognized. Merely composing lower-level elements into a construct does not count as an abstraction layer unless it shields users from the underlying complexity.[1] A layer is considered to be on top of another if it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Frequently abstraction layers can be composed into a hierarchy of abstraction levels. The OSI model comprises seven abstraction layers. Each layer of the model encapsulates and addresses a different part of the needs of digital communications, thereby reducing the complexity of the associated engineering solutions. A famous aphorism of David Wheeler is, "All problems in computer science can be solved by another level of indirection."[2] This is often deliberately misquoted with "abstraction" substituted for "indirection".[citation needed] It is also sometimes misattributed to Butler Lampson. Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection."[3] In a computer architecture, a computer system is usually represented as consisting of several abstraction levels, such as hardware, firmware and software. Programmable logic is often considered part of the hardware, while the logical definitions are also sometimes seen as part of a device's software or firmware. Firmware may include only low-level software, but can also include all software, including an operating system and applications. The software layers can be further divided into hardware abstraction layers, physical and logical device drivers, repositories such as filesystems, operating system kernels, middleware, applications, and others. A distinction can also be made between low-level programming languages like VHDL, machine language and assembly language, and compiled languages, interpreters and scripting languages.[4] In the Unix operating system, most types of input and output operations are considered to be streams of bytes read from a device or written to a device. This stream-of-bytes model is used for file I/O, socket I/O, and terminal I/O in order to provide device independence. In order to read from and write to a device at the application level, the program calls a function to open the device, which may be a real device such as a terminal or a virtual device such as a network port or a file in a file system. The device's physical characteristics are mediated by the operating system, which in turn presents an abstract interface that allows the programmer to read and write bytes from/to the device. The operating system then performs the actual transformation needed to read and write the stream of bytes to the device. Most graphics libraries such as OpenGL provide an abstract graphical device model as an interface.
The library is responsible for translating the commands provided by the programmer into the specific device commands needed to draw the graphical elements and objects. The specific device commands for a plotter are different from the device commands for a CRT monitor, but the graphics library hides the implementation and device-dependent details by providing an abstract interface which provides a set of primitives that are generally useful for drawing graphical objects.
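The value of the byte-stream abstraction is that one routine can serve any device. The following Python sketch is illustrative (the helper name is made up): it copies bytes between any two objects exposing read and write, whether they wrap files, pipes, sockets, or in-memory buffers.

import io

def copy_stream(src, dst, chunk_size: int = 4096) -> int:
    """Copy bytes from any readable object to any writable object.
    The routine relies only on the abstract read/write interface,
    so real and virtual devices are handled identically."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total

# A "virtual device" (an in-memory buffer) works the same as a real file:
src, dst = io.BytesIO(b"device-independent I/O"), io.BytesIO()
print(copy_stream(src, dst), dst.getvalue())
# For real files: copy_stream(open("in.bin", "rb"), open("out.bin", "wb"))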
https://en.wikipedia.org/wiki/Abstraction_layer
In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points.[1][2] In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable. A closely related problem is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complicated to evaluate efficiently. A few data points from the original function can be interpolated to produce a simpler function which is still fairly close to the original. The resulting gain in simplicity may outweigh the loss from interpolation error and give better performance in the calculation process. The following table gives some values of an unknown function f(x):

x: 0, 1, 2, 3, 4, 5, 6
f(x): 0, 0.8415, 0.9093, 0.1411, −0.7568, −0.9589, −0.2794

Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5. We describe some methods of interpolation, differing in such properties as: accuracy, cost, number of data points needed, and smoothness of the resulting interpolant function. The simplest interpolation method is to locate the nearest data value, and assign the same value. In simple problems, this method is unlikely to be used, as linear interpolation (see below) is almost as easy, but in higher-dimensional multivariate interpolation, this could be a favourable choice for its speed and simplicity. One of the simplest methods is linear interpolation (sometimes known as lerp). Consider the above example of estimating f(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252. Generally, linear interpolation takes two data points, say (x_a, y_a) and (x_b, y_b), and the interpolant is given by

y = y_a + (y_b − y_a) (x − x_a) / (x_b − x_a).

This equation states that the slope of the new line between (x_a, y_a) and (x, y) is the same as the slope of the line between (x_a, y_a) and (x_b, y_b). Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the points x_k. The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by g, and suppose that x lies between x_a and x_b and that g is twice continuously differentiable. Then the linear interpolation error is

|f(x) − g(x)| ≤ C (x_b − x_a)^2, where C = (1/8) max over r in [x_a, x_b] of |g″(r)|.

In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation (described below), is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants. Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function. We now replace this interpolant with a polynomial of higher degree. Consider again the problem given above. A sixth-degree polynomial can be passed through all seven points; substituting x = 2.5 into it, we find that f(2.5) ≈ 0.59678. Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going through all the data points.
The interpolation error is proportional to the distance between the data points to the power n. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation overcomes most of the problems of linear interpolation. However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially at the end points (see Runge's phenomenon). Polynomial interpolation can estimate local maxima and minima that are outside the range of the samples, unlike linear interpolation. For example, the interpolant above has a local maximum at x ≈ 1.566, f(x) ≈ 1.003 and a local minimum at x ≈ 4.708, f(x) ≈ −1.003. However, these maxima and minima may exceed the theoretical range of the function; for example, a function that is always positive may have an interpolant with negative values, and whose inverse therefore contains false vertical asymptotes. More generally, the shape of the resulting curve, especially for very high or low values of the independent variable, may be contrary to common sense; that is, to what is known about the experimental system which has generated the data points. These disadvantages can be reduced by using spline interpolation or restricting attention to Chebyshev polynomials. Linear interpolation uses a linear function on each of the intervals [x_k, x_{k+1}]. Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline. For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable; furthermore, its second derivative is zero at the end points. Evaluating the natural cubic spline that interpolates the points in the table above at x = 2.5 gives f(2.5) = 0.5972. Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation, while the interpolant is smoother and easier to evaluate than the high-degree polynomials used in polynomial interpolation. However, the global nature of the basis functions leads to ill-conditioning. This is completely mitigated by using splines of compact support, such as are implemented in Boost.Math and discussed in Kress.[3] Depending on the underlying discretisation of fields, different interpolants may be required. In contrast to other interpolation methods, which estimate functions on target points, mimetic interpolation evaluates the integral of fields on target lines, areas or volumes, depending on the type of field (scalar, vector, pseudo-vector or pseudo-scalar). A key feature of mimetic interpolation is that vector calculus identities are satisfied, including Stokes' theorem and the divergence theorem. As a result, mimetic interpolation conserves line, area and volume integrals.[4] Conservation of line integrals might be desirable when interpolating the electric field, for instance, since the line integral gives the electric potential difference at the endpoints of the integration path.[5] Mimetic interpolation ensures that the error of estimating the line integral of an electric field is the same as the error obtained by interpolating the potential at the end points of the integration path, regardless of the length of the integration path.
Linear, bilinear and trilinear interpolation are also considered mimetic, even if it is the field values that are conserved (not the integral of the field). Apart from linear interpolation, area-weighted interpolation can be considered one of the first mimetic interpolation methods to have been developed.[6] The Theory of Functional Connections (TFC) is a mathematical framework specifically developed for functional interpolation. Given any interpolant that satisfies a set of constraints, TFC derives a functional that represents the entire family of interpolants satisfying those constraints, including those that are discontinuous or partially defined. These functionals identify the subspace of functions where the solution to a constrained optimization problem resides. Consequently, TFC transforms constrained optimization problems into equivalent unconstrained formulations. This transformation has proven highly effective in the solution of differential equations. TFC achieves this by constructing a constrained functional (a functional of a free function) that inherently satisfies the given constraints regardless of the expression of the free function. This simplifies solving various types of equations and significantly improves the efficiency and accuracy of methods like Physics-Informed Neural Networks (PINNs). TFC offers advantages over traditional methods like Lagrange multipliers and spectral methods by directly addressing constraints analytically and avoiding iterative procedures, although it cannot currently handle inequality constraints. Interpolation is a common way to approximate functions. Given a function f: [a, b] → R with a set of points x_1, x_2, …, x_n ∈ [a, b], one can form a function s: [a, b] → R such that f(x_i) = s(x_i) for i = 1, 2, …, n (that is, s interpolates f at these points). In general, an interpolant need not be a good approximation, but there are well-known and often reasonable conditions where it will be. For example, if f ∈ C^4([a, b]) (four times continuously differentiable) then cubic spline interpolation has an error bound given by ‖f − s‖_∞ ≤ C ‖f^{(4)}‖_∞ h^4, where h = max over i = 1, 2, …, n−1 of |x_{i+1} − x_i| and C is a constant.[7] A Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only for fitting an interpolant that passes exactly through the given data points but also for regression; that is, for fitting a curve through noisy data. In the geostatistics community Gaussian process regression is also known as kriging. Inverse distance weighting (IDW) is a spatial interpolation method that estimates values based on nearby data points, with closer points having more influence.[8] It uses an inverse power law for weighting, where higher power values emphasize local effects, while lower values create a smoother surface. IDW is widely used in GIS, meteorology, and environmental modeling for its simplicity, but it may produce artifacts in clustered or uneven data.[9] Other forms of interpolation can be constructed by picking a different class of interpolants.
For instance, rational interpolation is interpolation by rational functions using Padé approximants, and trigonometric interpolation is interpolation by trigonometric polynomials using Fourier series. Another possibility is to use wavelets. The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite or if the function to be interpolated has compact support. Sometimes, we know not only the value of the function that we want to interpolate at some points, but also its derivative. This leads to Hermite interpolation problems. When each data point is itself a function, it can be useful to see the interpolation problem as a partial advection problem between each data point. This idea leads to the displacement interpolation problem used in transportation theory. Multivariate interpolation is the interpolation of functions of more than one variable. Methods include nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions. They can be applied to gridded or scattered data. Mimetic interpolation generalizes to n-dimensional spaces where n > 3.[10][11] In the domain of digital signal processing, the term interpolation refers to the process of converting a sampled digital signal (such as a sampled audio signal) to one with a higher sampling rate (upsampling) using various digital filtering techniques (for example, convolution with a frequency-limited impulse signal). In this application there is a specific requirement that the harmonic content of the original signal be preserved without creating aliased harmonic content above the original Nyquist limit of the signal (that is, above fs/2 of the original sample rate). An early and fairly elementary discussion of this subject can be found in Rabiner and Crochiere's book Multirate Digital Signal Processing.[12] The term extrapolation is used when finding data points outside the range of known data points. In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible (within some other constraints). This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation. Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function. If we consider x as a variable in a topological space, and the function f(x) mapping to a Banach space, then the problem is treated as "interpolation of operators".[13] The classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem. There are also many other subsequent results.
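The worked example running through this article can be reproduced numerically. The Python sketch below assumes the table sampled the function at the integers 0 through 6 with the values listed earlier (consistent with the quoted f(2) = 0.9093 and f(3) = 0.1411); numpy and scipy supply the linear, degree-six polynomial, and natural cubic spline interpolants.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(7)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

print(np.interp(2.5, x, y))                    # linear:       0.5252
poly = np.polynomial.Polynomial.fit(x, y, 6)   # unique degree-6 interpolant
print(poly(2.5))                               # polynomial:  ~0.59678
spline = CubicSpline(x, y, bc_type="natural")  # natural cubic spline
print(spline(2.5))                             # spline:      ~0.5972

Each printed value matches the corresponding estimate quoted in the text, up to rounding.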
https://en.wikipedia.org/wiki/Interpolation
In number theory, the parity problem refers to a limitation in sieve theory that prevents sieves from giving good estimates in many kinds of prime-counting problems. The problem was identified and named by Atle Selberg in 1949. Beginning around 1996, John Friedlander and Henryk Iwaniec developed some parity-sensitive sieves that make the parity problem less of an obstacle. Terence Tao gave this "rough" statement of the problem:[1] Parity problem. If A is a set whose elements are all products of an odd number of primes (or are all products of an even number of primes), then (without injecting additional ingredients), sieve theory is unable to provide non-trivial lower bounds on the size of A. Also, any upper bounds must be off from the truth by a factor of 2 or more. This problem is significant because it may explain why it is difficult for sieves to "detect primes", in other words to give a non-trivial lower bound for the number of primes with some property. For example, in a sense Chen's theorem is very close to a solution of the twin prime conjecture, since it says that there are infinitely many primes p such that p + 2 is either prime or the product of two primes (a semiprime). The parity problem suggests that, because the case of interest has an odd number of prime factors (namely 1), it won't be possible to separate out the two cases using sieves. This example is due to Selberg and is given as an exercise with hints by Cojocaru & Murty.[2]: 133–134 The problem is to estimate separately the number of numbers ≤ x with no prime divisors ≤ x^{1/2} that have an even (or an odd) number of prime factors. It can be shown that, no matter what the choice of weights in a Brun- or Selberg-type sieve, the upper bound obtained will be at least (2 + o(1)) x / ln x for both problems. But in fact the set with an even number of factors is empty and so has size 0. The set with an odd number of factors is just the primes between x^{1/2} and x, so by the prime number theorem its size is (1 + o(1)) x / ln x. Thus these sieve methods are unable to give a useful upper bound for the first set, and overestimate the upper bound on the second set by a factor of 2. Beginning around 1996 John Friedlander and Henryk Iwaniec developed some new sieve techniques to "break" the parity problem.[3][4] One of the triumphs of these new methods is the Friedlander–Iwaniec theorem, which states that there are infinitely many primes of the form a^2 + b^4. Glyn Harman relates the parity problem to the distinction between Type I and Type II information in a sieve.[5] In 2007 Anatolii Alexeevitch Karatsuba discovered an imbalance between the numbers in an arithmetic progression with given parities of the number of prime factors. His papers[6][7] were published after his death. Let N be the set of natural numbers (positive integers), that is, the numbers 1, 2, 3, …. The set of primes, that is, the integers n ∈ N, n > 1, that have just two distinct divisors (namely, n and 1), is denoted by P, P = {2, 3, 5, 7, 11, …} ⊂ N.
Every natural number n ∈ N, n > 1, can be represented as a product of primes (not necessarily distinct), that is, n = p_1 p_2 … p_k, where p_1, p_2, …, p_k ∈ P, and such a representation is unique up to the order of the factors. If we form two sets, the first consisting of positive integers having an even number of prime factors, the second consisting of positive integers having an odd number of prime factors, in their canonical representation, then the two sets are approximately the same size. If, however, we limit our two sets to those positive integers whose canonical representation contains no primes in an arithmetic progression, for example 6m + 1, m = 1, 2, …, or the progression km + l, 1 ≤ l < k, (l, k) = 1, m = 0, 1, 2, …, then of these positive integers, those with an even number of prime factors will tend to be fewer than those with an odd number of prime factors. Karatsuba discovered this property. He also found a formula for this phenomenon, a formula for the difference in cardinalities of the sets of natural numbers with an odd and an even number of prime factors, when these factors comply with certain restrictions. In all cases, since the sets involved are infinite, by "larger" and "smaller" we mean the limit of the ratio of the sets as an upper bound on the primes goes to infinity. In the case of primes containing an arithmetic progression, Karatsuba proved that this limit is infinite. We restate the Karatsuba phenomenon using mathematical terminology. Let N_0 and N_1 be subsets of N, such that n ∈ N_0 if n contains an even number of prime factors, and n ∈ N_1 if n contains an odd number of prime factors. Intuitively, the sizes of the two sets N_0 and N_1 are approximately the same. More precisely, for all x ≥ 1, we define n_0(x) and n_1(x), where n_0(x) is the cardinality of the set of all numbers n from N_0 such that n ≤ x, and n_1(x) is the cardinality of the set of all numbers n from N_1 such that n ≤ x. The asymptotic behavior of n_0(x) and n_1(x) was derived by E. Landau,[8] whose result shows that n_0(x) and n_1(x) are asymptotically equal, and further that the difference between the cardinalities of the two sets is small. On the other hand, let k ≥ 2 be a natural number, and let l_1, l_2, … l_r be a sequence of natural numbers, 1 ≤ r < φ(k), such that 1 ≤ l_j < k; (l_j, k) = 1; the l_j are pairwise different modulo k; j = 1, 2, … r. Let A be the set of primes belonging to the progressions kn + l_j; j ≤ r.
(A is the set of all primes not dividing k.) We denote by N* the set of natural numbers that contain no prime factors from A, by N*_0 the subset of numbers from N* with an even number of prime factors, and by N*_1 the subset of numbers from N* with an odd number of prime factors, and we define the corresponding counting functions. Karatsuba proved that for x → +∞ an asymptotic formula for the difference of these counting functions is valid, in which C is a positive constant. He also showed that it is possible to prove analogous theorems for other sets of natural numbers, for example, for numbers which are representable in the form of a sum of two squares, and that sets of natural numbers all of whose factors belong to A will display analogous asymptotic behavior. The Karatsuba theorem was generalized for the case when A is a certain infinite set of primes. The Karatsuba phenomenon is illustrated by the following example. We consider the natural numbers whose canonical representation does not include primes belonging to the progression 6m + 1, m = 1, 2, …. Then this phenomenon is expressed by a corresponding asymptotic formula.
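Karatsuba's phenomenon for the progression 6m + 1 can be probed empirically. The Python sketch below (the bound 10**6 is an arbitrary choice for illustration) sieves smallest prime factors, discards integers having a prime factor ≡ 1 (mod 6), and compares the parity classes of the remaining integers; by the theorem, the odd-parity count should pull ahead of the even-parity count as the bound grows.

def parity_counts(limit: int):
    """Count n <= limit free of prime factors p ≡ 1 (mod 6), split by the
    parity of Omega(n), the number of prime factors counted with multiplicity."""
    spf = list(range(limit + 1))                 # smallest-prime-factor sieve
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                          # p is prime
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    even = odd = 0
    for n in range(1, limit + 1):
        omega, m, allowed = 0, n, True
        while m > 1:
            p = spf[m]
            if p % 6 == 1:                       # excluded progression 6k + 1
                allowed = False
                break
            omega += 1
            m //= p
        if allowed:
            if omega % 2 == 0:
                even += 1
            else:
                odd += 1
    return even, odd

even, odd = parity_counts(10**6)
print(even, odd, odd - even)   # expect odd > even, per Karatsuba's theorem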
https://en.wikipedia.org/wiki/Parity_problem_(sieve_theory)
Karlstad University (Swedish: Karlstads universitet) is a state university in Karlstad, Sweden. It was originally established in 1967 as the Karlstad campus of the University of Gothenburg; the campus became an independent university college in 1977 and was granted full university status by the Government of Sweden in 1999. The university offers about 40 educational programmes, 30 programme extensions and 900 courses within the humanities, social studies, science, technology, teaching, health care and the arts. Today it has approximately 16,000 students and 1,200 employees.[1] Its university press is named Karlstad University Press. The current rector is Jerker Moodysson. CTF Service Research Center (Swedish: Centrum för tjänsteforskning) at Karlstad University is one of the world's leading research centers focusing on service management and value creation through service. On March 26, 2009, the faculty of economics, communication and IT formed Karlstad Business School (Swedish: Handelshögskolan vid Karlstads universitet) as a brand for its educational programmes in business-related areas. Karlstad University has two a cappella groups, Sällskapet CMB and Söt Likör. Many students live at the nearby student accommodation facilities called Unionen and Campus Futurum. The motto of the university is Sapere aude (Dare to know). Institutions of higher education issuing teaching degrees are obliged to have a board with responsibility for the teacher education programmes; at Karlstad University, the Faculty Board for Teacher Education is also responsible for educational research. Karlstad Business School comprises seven disciplines, with a focus on economics and business studied from a service perspective; CTF Service Research Center conducts world-leading research on value creation through service. The Ingesund School of Music is part of Karlstad University and the Department of Artistic Studies. It is situated in the Arvika area in mid-Sweden and offers music teacher education, music studies and sound engineering. Karlstad University heats and cools its buildings with a geo-energy plant, one of the largest on any campus in Europe, which makes the university nearly self-sufficient in heating and cooling. The plant works through 270 boreholes drilled 200 meters into the ground. The environmental benefits are substantial: among other things, carbon dioxide emissions are radically reduced, and energy consumption for heating and cooling the buildings can fall by about 70 percent. The investment replaces the previous district heating; heat is instead supplied by heat pump technology, stored in the ground during the summer and drawn up during the winter. According to Birgitta Hohlfält, regional director of Akademiska Hus Väst, no other campus in Europe has a comprehensive geo-energy plant of this size.[2][3] 59°24′21″N 13°34′54″E / 59.40583°N 13.58167°E / 59.40583; 13.58167
https://en.wikipedia.org/wiki/Karlstad_University
A delegate is a form of type-safe function pointer used by the Common Language Infrastructure (CLI). Delegates specify a method to call and optionally an object on which to call the method. Delegates are used, among other things, to implement callbacks and event listeners. A delegate object encapsulates a reference to a method. The delegate object can then be passed to code that can call the referenced method, without having to know at compile time which method will be invoked. A multicast delegate is a delegate that points to several methods.[1][2] Multicast delegation is a mechanism that provides functionality to execute more than one method: a list of delegates is maintained internally, and when the multicast delegate is invoked, the list of delegates is executed. In C#, delegates are often used to implement callbacks in event-driven programming. For example, a delegate may be used to indicate which method should be called when the user clicks on some button. Delegates allow the programmer to notify several methods that an event has occurred.[3] A typical example declares a delegate type, named SendMessageDelegate, which takes a Message as a parameter and returns void; defines a method (SendMessage) that takes an instantiated delegate as its argument; implements the method that runs when the delegate is called; and finally calls the SendMessage method, passing an instantiated delegate as an argument. A delegate variable then calls the associated method. Delegate variables are first-class objects of the form new DelegateType(obj.Method) and can be assigned to any matching method, or to the value null. They store a method and its receiver without any parameters.[4] The receiver object (funnyObj in the example) can be this and omitted. If the method is static, the receiver should not be an object (also called an instance in other languages) but the class itself. The method should not be abstract, but it can be new, override or virtual. To call a method with a delegate successfully, the method signature has to match the DelegateType, with the same number of parameters of the same kind (ref, out, value) and of the same types (including the return type). A delegate variable can hold multiple values at the same time. If the multicast delegate is a function or has an out parameter, the result of the last call is returned.[5] Although internal implementations may vary, delegate instances can be thought of as a tuple of an object and a method pointer, plus a reference (possibly null) to another delegate. Hence a reference to one delegate is possibly a reference to multiple delegates. When the first delegate has finished, if its chain reference is not null, the next will be invoked, and so on until the list is complete. This pattern allows an event to have overhead scaling easily from that of a single reference up to dispatch to a list of delegates, and is widely used in the CLI. Performance of delegates used to be much slower than a virtual or interface method call (6 to 8 times slower in Microsoft's 2003 benchmarks),[6] but, since the .NET 2.0 CLR in 2005, it is about the same as interface calls.[7] This means there is a small added overhead compared to direct method invocations. There are very stringent rules on the construction of delegate classes. These rules permit optimizing compilers a great deal of leeway when optimizing delegates while ensuring type safety.[citation needed]
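The CLI's own delegate syntax lives in languages such as C#, but the multicast semantics described above are easy to model elsewhere. The Python sketch below is an analogy, not the CLI API: a delegate is modelled as an invocation list of callables, combined with +, whose invocation returns the result of the last call.

class Delegate:
    """Rough analogue of a CLI multicast delegate: an ordered invocation
    list of callables sharing one signature."""
    def __init__(self, *methods):
        self._invocation_list = list(methods)

    def __add__(self, other):
        # Combining delegates concatenates their invocation lists,
        # loosely mirroring C#'s d1 + d2.
        return Delegate(*self._invocation_list, *other._invocation_list)

    def __call__(self, *args, **kwargs):
        result = None
        for method in self._invocation_list:
            result = method(*args, **kwargs)
        return result  # the result of the last call, as described above

def log_message(msg):
    print("log:", msg)

def send_message(msg):
    print("send:", msg)
    return len(msg)

notify = Delegate(log_message) + Delegate(send_message)  # multicast: two targets
print(notify("Hello"))  # runs both targets in order; returns send_message's 5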
https://en.wikipedia.org/wiki/Delegate_(CLI)
In a written language, a logogram (from Ancient Greek logos 'word' and gramma 'that which is drawn or written'), also logograph or lexigraph, is a written character that represents a semantic component of a language, such as a word or morpheme. Chinese characters as used in Chinese as well as other languages are logograms, as are Egyptian hieroglyphs and characters in cuneiform script. A writing system that primarily uses logograms is called a logography. Non-logographic writing systems, such as alphabets and syllabaries, are phonemic: their individual symbols represent sounds directly and lack any inherent meaning. However, all known logographies have some phonetic component, generally based on the rebus principle, and the addition of a phonetic component to pure ideographs is considered to be a key innovation in enabling the writing system to adequately encode human language. Some of the earliest recorded writing systems are logographic; the first historical civilizations of Mesopotamia, Egypt, China and Mesoamerica all used some form of logographic writing.[1][2] All logographic scripts ever used for natural languages rely on the rebus principle to extend a relatively limited set of logograms: a subset of characters is used for their phonetic values, either consonantal or syllabic. The term logosyllabary is used to emphasize the partially phonetic nature of these scripts when the phonetic domain is the syllable. In Ancient Egyptian hieroglyphs, Ch'olti', and in Chinese, there has been the additional development of determinatives, which are combined with logograms to narrow down their possible meaning. In Chinese, they are fused with logographic elements used phonetically; such "radical and phonetic" characters make up the bulk of the script. Ancient Egyptian and Chinese relegated the active use of rebus to the spelling of foreign and dialectal words. Logoconsonantal scripts have graphemes that may be extended phonetically according to the consonants of the words they represent, ignoring the vowels. For example, the same Egyptian sign was used to write both sȝ 'duck' and sȝ 'son', though it is likely that these words were not pronounced the same except for their consonants. The primary examples of logoconsonantal scripts are Egyptian hieroglyphs, hieratic and demotic: Ancient Egyptian. Logosyllabic scripts have graphemes which represent morphemes, often polysyllabic morphemes, but when extended phonetically represent single syllables. They include cuneiform, Anatolian hieroglyphs, Cretan hieroglyphs, Linear A and Linear B, Chinese characters, the Maya script, the Aztec script, the Mixtec script, and the first five phases of the Bamum script. A peculiar system of logograms developed within the Pahlavi scripts (developed from the abjad of Aramaic) used to write Middle Persian during much of the Sassanid period; the logograms were composed of letters that spelled out the word in Aramaic but were pronounced as in Persian (for instance, the combination m-l-k would be pronounced "shah"). These logograms, called hozwārishn (a form of heterograms), were dispensed with altogether after the Arab conquest of Persia and the adoption of a variant of the Arabic alphabet.[citation needed] All historical logographic systems include a phonetic dimension, as it is impractical to have a separate basic character for every word or morpheme in a language.[a] In some cases, such as cuneiform as it was used for Akkadian, the vast majority of glyphs are used for their sound values rather than logographically.
Many logographic systems also have a semantic/ideographic component (see ideogram), called "determinatives" in the case of Egyptian and "radicals" in the case of Chinese.[b] Typical Egyptian usage was to augment a logogram, which may potentially represent several words with different pronunciations, with a determinative to narrow down the meaning, and a phonetic component to specify the pronunciation. In the case of Chinese, the vast majority of characters are a fixed combination of a radical that indicates its nominal category, plus a phonetic to give an idea of the pronunciation. The Mayan system used logograms with phonetic complements like the Egyptian, while lacking ideographic components. Not all logograms are associated with one specific language, and some are not associated with any language at all. The ampersand is a logogram in the Latin script,[3] a combination of the letters "e" and "t". In Latin, "et" means "and", and the ampersand still represents that word today in a variety of languages, standing for the morphemes "and", "y" or "en" for readers of English, Spanish or Dutch, respectively. Independent of any particular script is Unicode, a compilation of characters of various meanings, whose stated intention is to include every character from every language.[4] It is the generally accepted standard for computer character encoding, though others, like ASCII and Baudot, exist and serve various purposes in digital communication. Many logograms in these databases are ubiquitous and are used on the Internet by users worldwide. Chinese scholars have traditionally classified the Chinese characters (hànzì) into six types by etymology. The first two types are "single-body", meaning that the character was created independently of other characters. "Single-body" pictograms and ideograms make up only a small proportion of Chinese logograms. More productive for the Chinese script were the two "compound" methods, i.e. the character was created from assembling different characters. Despite being called "compounds", these logograms are still single characters, and are written to take up the same amount of space as any other logogram. The final two types are methods in the usage of characters rather than the formation of characters themselves. The most productive method of Chinese writing, the radical-phonetic, was made possible by ignoring certain distinctions in the phonetic system of syllables. In Old Chinese, post-final ending consonants /s/ and /ʔ/ were typically ignored; these developed into tones in Middle Chinese, which were likewise ignored when new characters were created. Also ignored were differences in aspiration (between aspirated vs. unaspirated obstruents, and voiced vs. unvoiced sonorants); the Old Chinese difference between type-A and type-B syllables (often described as presence vs. absence of palatalization or pharyngealization); and sometimes, voicing of initial obstruents and/or the presence of a medial /r/ after the initial consonant. In earlier times, greater phonetic freedom was generally allowed. During Middle Chinese times, newly created characters tended to match pronunciation exactly, other than the tone – often by using as the phonetic component a character that itself is a radical-phonetic compound. Due to the long period of language evolution, such component "hints" within characters as provided by the radical-phonetic compounds are sometimes useless and may be misleading in modern usage.
As an example, based on 每 'each', pronounced měi in Standard Mandarin, are the characters 侮 'to humiliate', 悔 'to regret', and 海 'sea', pronounced respectively wǔ, huǐ, and hǎi in Mandarin. Three of these characters were pronounced very similarly in Old Chinese – /mˤəʔ/ (每), /m̥ˤəʔ/ (悔), and /m̥ˤəʔ/ (海) according to a recent reconstruction by William H. Baxter and Laurent Sagart[6] – but sound changes in the intervening 3,000 years or so (including two different dialectal developments, in the case of the last two characters) have resulted in radically different pronunciations. Within the context of the Chinese language, Chinese characters (known as hanzi) by and large represent words and morphemes rather than pure ideas; however, the adoption of Chinese characters by the Japanese and Korean languages (where they are known as kanji and hanja, respectively) has resulted in some complications to this picture. Many Chinese words, composed of Chinese morphemes, were borrowed into Japanese and Korean together with their character representations; in this case, the morphemes and characters were borrowed together. In other cases, however, characters were borrowed to represent native Japanese and Korean morphemes, on the basis of meaning alone. As a result, a single character can end up representing multiple morphemes of similar meaning but with different origins across several languages. Because of this, kanji and hanja are sometimes described as morphographic writing systems.[7] Because much research on language processing has centered on English and other alphabetically written languages, many theories of language processing have stressed the role of phonology in producing speech. Contrasting logographically coded languages, where a single character is represented phonetically and ideographically, with phonetically/phonemically spelled languages has yielded insights into how different languages rely on different processing mechanisms. Studies on the processing of logographically coded languages have, amongst other things, looked at neurobiological differences in processing, with one area of particular interest being hemispheric lateralization. Since logographically coded languages are more closely associated with images than alphabetically coded languages, several researchers have hypothesized that right-side activation should be more prominent in logographically coded languages. Although some studies have yielded results consistent with this hypothesis, there are too many contrasting results to draw any final conclusions about the role of hemispheric lateralization in orthographically versus phonetically coded languages.[8] Another topic that has been given some attention is differences in the processing of homophones. Verdonschot et al.[9] examined differences in the time it took to read a homophone out loud when a picture that was either related or unrelated[10] to a homophonic character was presented before the character. Both Japanese and Chinese homophones were examined. Whereas word production in alphabetically coded languages (such as English) has shown a relatively robust immunity to the effect of context stimuli,[11] Verdonschot et al.[12] found that Japanese homophones seem particularly sensitive to these types of effects. Specifically, reaction times were shorter when participants were presented with a phonologically related picture before being asked to read a target character out loud.
An example of a phonologically related stimulus from the study would be, for instance, when participants were presented with a picture of an elephant, which is pronounced zou in Japanese, before being presented with the Chinese character 造, which is also read zou. No effect of phonologically related context pictures was found for the reaction times for reading Chinese words. A comparison of the (partially) logographically coded languages Japanese and Chinese is interesting because whereas the Japanese language consists of more than 60% homographic heterophones (characters that can be read two or more different ways), most Chinese characters have only one reading. Because both languages are logographically coded, the difference in latency in reading aloud Japanese and Chinese due to context effects cannot be ascribed to the logographic nature of the writing systems. Instead, the authors hypothesize that the difference in latency times is due to additional processing costs in Japanese, where the reader cannot rely solely on a direct orthography-to-phonology route, but information on a lexical-syntactical level must also be accessed in order to choose the correct pronunciation. This hypothesis is confirmed by studies finding that Japanese Alzheimer's disease patients whose comprehension of characters had deteriorated could still read the words out loud with no particular difficulty.[13][14] Studies contrasting the processing of English and Chinese homophones in lexical decision tasks have found an advantage for homophone processing in Chinese, and a disadvantage for processing homophones in English.[15] The processing disadvantage in English is usually described in terms of the relative lack of homophones in the English language. When a homophonic word is encountered, the phonological representation of that word is first activated. However, since this is an ambiguous stimulus, a matching at the orthographic/lexical ("mental dictionary") level is necessary before the stimulus can be disambiguated and the correct pronunciation can be chosen. In contrast, in a language (such as Chinese) where many characters with the same reading exist, it is hypothesized that the person reading the character will be more familiar with homophones, and that this familiarity will aid the processing of the character and the subsequent selection of the correct pronunciation, leading to shorter reaction times when attending to the stimulus. In an attempt to better understand homophony effects on processing, Hino et al.[11] conducted a series of experiments using Japanese as their target language. While controlling for familiarity, they found a processing advantage for homophones over non-homophones in Japanese, similar to what had previously been found in Chinese. The researchers also tested whether orthographically similar homophones would yield a disadvantage in processing, as has been the case with English homophones,[16] but found no evidence for this. It is evident that there is a difference in how homophones are processed in logographically coded and alphabetically coded languages, but whether the advantage for processing homophones in the logographically coded languages Japanese and Chinese (i.e., their writing systems) is due to the logographic nature of the scripts, or whether it merely reflects an advantage for languages with more homophones regardless of script nature, remains to be seen.
The main difference between logograms and other writing systems is that the graphemes are not linked directly to their pronunciation. An advantage of this separation is that understanding the pronunciation or language of the writer is unnecessary; e.g., 1 is understood regardless of whether its reader calls it one, ichi or wāḥid. Likewise, people speaking different varieties of Chinese may not understand each other in speaking, but can do so to a significant extent in writing, even if they do not write in Standard Chinese. Therefore, in China, Vietnam, Korea and Japan before modern times, communication by writing (筆談) was the norm of East Asian international trade and diplomacy using Classical Chinese.[citation needed][dubious – discuss] This separation, however, also has the great disadvantage of requiring the memorization of the logograms when learning to read and write, separately from the pronunciation. Although it is not an inherent feature of logograms, Japanese, owing to its unique history of development, has the added complication that almost every logogram has more than one pronunciation. Conversely, a phonetic character set is written precisely as it is spoken, but with the disadvantage that slight pronunciation differences introduce ambiguities. Many alphabetic systems, such as those of Greek, Latin, Italian, Spanish and Finnish, make the practical compromise of standardizing how words are written while maintaining a nearly one-to-one relation between characters and sounds. Orthographies in some other languages, such as English, French, Thai and Tibetan, are all more complicated than that: character combinations are often pronounced in multiple ways, usually depending on their history. Hangul, the Korean language's writing system, is an example of an alphabetic script that was designed to replace the logogrammatic hanja in order to increase literacy. The latter is now rarely used, but retains some currency in South Korea, sometimes in combination with hangul.[citation needed] According to government-commissioned research, the most commonly used 3,500 characters listed in the People's Republic of China's "Chart of Common Characters of Modern Chinese" (现代汉语常用字表, Xiàndài Hànyǔ Chángyòngzì Biǎo) cover 99.48% of a two-million-word sample. As for the case of traditional Chinese characters, 4,808 characters are listed in the "Chart of Standard Forms of Common National Characters" (常用國字標準字體表) by the Ministry of Education of the Republic of China, while 4,759 are listed in the "List of Graphemes of Commonly-Used Chinese Characters" (常用字字形表) by the Education and Manpower Bureau of Hong Kong, both of which are intended to be taught during elementary and junior secondary education. Education after elementary school includes not as many new characters as new words, which are mostly combinations of two or more already learned characters.[17] Entering complex characters can be cumbersome on electronic devices due to a practical limitation in the number of input keys. There exist various input methods for entering logograms, either by breaking them up into their constituent parts, such as with the Cangjie and Wubi methods of typing Chinese, or using phonetic systems such as Bopomofo or Pinyin, where the word is entered as pronounced and then selected from a list of logograms matching it. While the former method is (linearly) faster, it is more difficult to learn.
With stroke-based input systems, however, the strokes forming the logogram are typed as they are normally written, and the corresponding logogram is then entered.[clarification needed] Also, due to the number of glyphs, in programming and computing in general more memory is needed to store each grapheme, as the character set is larger. As a comparison, ISO 8859 requires only one byte for each grapheme, while the Basic Multilingual Plane encoded in UTF-8 requires up to three bytes. On the other hand, English words, for example, average five characters and a space per word[18][self-published source] and thus need six bytes for every word. Since many logograms contain more than one grapheme, it is not clear which is more memory-efficient. Variable-width encodings allow a unified character encoding standard such as Unicode to use only the bytes necessary to represent a character, reducing the overhead that results from merging large character sets with smaller ones.
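The storage comparison is easy to make concrete. A short Python illustration of per-character byte costs (the sample strings are arbitrary):

# One byte per grapheme in an ISO 8859 encoding:
print(len("a".encode("latin-1")))        # 1
# Up to three bytes for a Basic Multilingual Plane character in UTF-8:
print(len("語".encode("utf-8")))         # 3
# Variable width: ASCII characters still cost one byte in UTF-8.
print(len("a".encode("utf-8")))          # 1
# A five-letter English word plus a space vs. a two-character Chinese word:
print(len("words ".encode("utf-8")))     # 6
print(len("文字".encode("utf-8")))       # 6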
https://en.wikipedia.org/wiki/Logogram
In statistical theory, a pseudolikelihood is an approximation to the joint probability distribution of a collection of random variables. Its practical use is that it can provide an approximation to the likelihood function of a set of observed data which may either provide a computationally simpler problem for estimation, or may provide a way of obtaining explicit estimates of model parameters. The pseudolikelihood approach was introduced by Julian Besag[1] in the context of analysing data having spatial dependence. Given a set of random variables X = X_1, X_2, …, X_n, the pseudolikelihood of X = x = (x_1, x_2, …, x_n) is

L(θ) = ∏_{i=1}^{n} Pr_θ(X_i = x_i ∣ X_{−i} = x_{−i})

in the discrete case, and

L(θ) = ∏_{i=1}^{n} p_θ(x_i ∣ x_{−i})

in the continuous one. Here X is a vector of variables, x is a vector of values, p_θ(⋅ ∣ ⋅) is the conditional density, and θ = (θ_1, …, θ_p) is the vector of parameters we are to estimate. The expression X = x above means that each variable X_i in the vector X has a corresponding value x_i in the vector x, and x_{−i} = (x_1, …, x̂_i, …, x_n) means that the coordinate x_i has been omitted. The expression Pr_θ(X = x) is the probability that the vector of variables X has values equal to the vector x; this probability of course depends on the unknown parameter θ. Because situations can often be described using state variables ranging over a set of possible values, the expression Pr_θ(X = x) can therefore represent the probability of a certain state among all possible states allowed by the state variables. The pseudo-log-likelihood is a similar measure derived from the above expression, namely (in the discrete case)

log L(θ) = ∑_{i=1}^{n} log Pr_θ(X_i = x_i ∣ X_{−i} = x_{−i}).

One use of the pseudolikelihood measure is as an approximation for inference about a Markov or Bayesian network, as the pseudolikelihood of an assignment to X_i may often be computed more efficiently than the likelihood, particularly when the latter may require marginalization over a large number of variables. Use of the pseudolikelihood in place of the true likelihood function in a maximum likelihood analysis can lead to good estimates, but a straightforward application of the usual likelihood techniques to derive information about estimation uncertainty, or for significance testing, would in general be incorrect.[2]
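As a concrete illustration, the Python sketch below computes the pseudo-log-likelihood of a binary (±1) pairwise Markov random field on a chain, where each full conditional is logistic in the two chain neighbours; the coupling parameter theta and the toy data are invented for the example.

import math

def pseudo_log_likelihood(x, theta):
    """Pseudo-log-likelihood sum_i log P_theta(x_i | x_{-i}) for a +/-1 chain
    with pairwise coupling theta. In this model the conditional depends only
    on the chain neighbours: P(x_i = s | rest) = 1 / (1 + exp(-2 s field))."""
    n = len(x)
    total = 0.0
    for i in range(n):
        field = theta * ((x[i - 1] if i > 0 else 0) + (x[i + 1] if i < n - 1 else 0))
        total += -math.log1p(math.exp(-2.0 * x[i] * field))
    return total

x = [1, 1, 1, -1, 1, 1]  # toy observation
for theta in (0.0, 0.5, 1.0):
    print(theta, round(pseudo_log_likelihood(x, theta), 4))
# Maximising this over theta yields the maximum pseudolikelihood estimate,
# avoiding the intractable normalising constant of the joint distribution.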
https://en.wikipedia.org/wiki/Pseudolikelihood
Gaskin v UK (1989) 12 EHRR 36 was a legal case from the United Kingdom, heard by the European Court of Human Rights in Strasbourg. Graham Gaskin was placed in public care in the UK as a baby, where he stayed until he reached his majority. Gaskin claimed he had been abused during his time in care, and he requested access to the records kept on him by Liverpool Social Services. Liverpool City Council gave Gaskin partial access, claiming that a duty of confidentiality owed to third-party contributors prohibited disclosure of the remainder of his records. Gaskin appealed to the Court of Appeal, which upheld Liverpool City Council's refusal to give him access. The Court of Appeal held that it was not in the 'public interest' to grant access to Gaskin's records, because to do so would inhibit third-party informants from coming forward with information to Social Services; granting access would necessarily reveal the identities of these third parties. The Court of Appeal therefore felt that access would undermine the British system, which depends on information being supplied to the authorities by the public 'in confidence'. Gaskin appealed to the European Court of Human Rights in Strasbourg, and his case was decided in 1989. The Court decided that Gaskin's Article 8 right to have his private and family life respected by the State had been breached by the British government, because there had been no independent appeal body to which Gaskin could have taken his case. The Court also decided that people in Gaskin's position, who had been in public care as children, should not in principle be obstructed from accessing their care records: these records acted as the memories of parents, to which most individuals have access but to which people in Gaskin's position did not. The Gaskin case has had a substantial impact on British law. The Data Protection Act contains special provisions whereby social services records are accessible by people formerly in public care, irrespective of whether the records are kept electronically or within a paper-based filing system. At the time of the case, only information stored electronically was accessible by individuals under the UK's data protection regime; the regime now covers computer systems and any other filing system. The Information Commissioner now provides the independent appeal mechanism that was absent at the time when Gaskin sought access to his case file. In summary, the Gaskin case was a significant victory for individuals who were placed in public care as children. Such individuals now have limited access to their own records, to the extent that knowledge and understanding of their childhood and early development will be revealed. However, the case files of individuals who were placed in care with the independent sector (the charities) are not caught by the access provisions of the Data Protection Act; access to these files can only be obtained with the agreement of those organisations.
https://en.wikipedia.org/wiki/Gaskin_v_United_Kingdom