Software archaeology or source code archeology is the study of poorly documented or undocumented legacy software implementations, as part of software maintenance.[1][2] Software archaeology, named by analogy with archaeology,[3] includes the reverse engineering of software modules, and the application of a variety of tools and processes for extracting and understanding program structure and recovering design information.[1][4] Software archaeology may reveal dysfunctional team processes which have produced poorly designed or even unused software modules, and in some cases deliberately obfuscatory code may be found.[5] The term has been in use for decades.[6] Software archaeology has continued to be a topic of discussion at more recent software engineering conferences.[7]

A workshop on Software Archaeology at the 2001 OOPSLA (Object-Oriented Programming, Systems, Languages & Applications) conference identified a number of software archaeology techniques, some of which are specific to object-oriented programming.[8] More generally, Andy Hunt and Dave Thomas note the importance of version control, dependency management, text indexing tools such as GLIMPSE and SWISH-E, and "[drawing] a map as you begin exploring."[8]

Like true archaeology, software archaeology involves investigative work to understand the thought processes of one's predecessors.[8] At the OOPSLA workshop, Ward Cunningham suggested a synoptic signature analysis technique which gave an overall "feel" for a program by showing only punctuation, such as semicolons and curly braces.[9] In the same vein, Cunningham has suggested viewing programs in 2 point font in order to understand the overall structure.[10] Another technique identified at the workshop was the use of aspect-oriented programming tools such as AspectJ to systematically introduce tracing code without directly editing the legacy program.[8]

Network and temporal analysis techniques can reveal the patterns of collaborative activity by the developers of legacy software, which in turn may shed light on the strengths and weaknesses of the software artifacts produced.[11]

Michael Rozlog of Embarcadero Technologies has described software archaeology as a six-step process which enables programmers to answer questions such as "What have I just inherited?" and "Where are the scary sections of the code?"[12] These steps, similar to those identified by the OOPSLA workshop, include using visualization to obtain a visual representation of the program's design, using software metrics to look for design and style violations, using unit testing and profiling to look for bugs and performance bottlenecks, and assembling design information recovered by the process.[12] Software archaeology can also be a service provided to programmers by external consultants.[13]

The profession of "programmer–archaeologist" features prominently in Vernor Vinge's 1999 sci-fi novel A Deepness in the Sky.[14]
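Cunningham's synoptic signature analysis lends itself to a few lines of code. The sketch below is a minimal illustration rather than the workshop's actual tooling; the function name, the input file, and the choice of which punctuation to keep are assumptions.

```python
def signature(source: str, keep: str = "{};") -> str:
    """Render source code as punctuation only, preserving line structure,
    so the overall "shape" of a module can be taken in at a glance."""
    return "\n".join(
        "".join(ch for ch in line if ch in keep)
        for line in source.splitlines()
    )

with open("legacy_module.c") as f:  # hypothetical input file
    print(signature(f.read()))
```

Deeply nested brace patterns and unusually long runs of semicolons stand out immediately in such a rendering, which is the point of the technique.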
https://en.wikipedia.org/wiki/Software_archaeology
Telephone tapping in the Eastern Bloc was a widespread method of the mass surveillance of the population by the secret police.[1]

In the past, telephone tapping was an open and legal practice in certain countries.[2] During martial law in Poland, official censorship was introduced, which included open phone tapping. Despite the introduction of the new censorship division, the Polish secret police did not have the resources to monitor all conversations.[3]

In Romania, telephone tapping was conducted by the General Directorate for Technical Operations of the Securitate.[4] Created with Soviet assistance in 1954, the outfit monitored all voice and electronic communications inside and outside of Romania. They bugged telephones and intercepted all telegraph and telex messages, as well as placing microphones in both public and private buildings.[5]

The 1991 Polish comedy film Calls Controlled[6] capitalizes on this fact. The title alludes to the pre-recorded message "Rozmowa kontrolowana" ("The call is being monitored") played during phone calls while martial law in Poland was in force during the 1980s.[7][8] The 2006 film The Lives of Others concerns a Stasi captain who listens to the conversations of a suspected dissident writer in a bugged apartment with equipment including telephone taps.[9][10]
https://en.wikipedia.org/wiki/Telephone_tapping_in_the_Eastern_Bloc
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method,[1] the reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956.[2] In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function (taken over the same domain).

Suppose $\mathcal{D}$ is a compact convex set in a vector space and $f \colon \mathcal{D} \to \mathbb{R}$ is a convex, differentiable real-valued function. The Frank–Wolfe algorithm solves the optimization problem

$$\min_{\mathbf{x} \in \mathcal{D}} f(\mathbf{x}).$$

While competing methods such as gradient descent for constrained optimization require a projection step back to the feasible set in each iteration, the Frank–Wolfe algorithm only requires solving a linear optimization problem over the same set in each iteration, and automatically stays in the feasible set.

The convergence of the Frank–Wolfe algorithm is sublinear in general: the error in the objective function relative to the optimum is $O(1/k)$ after $k$ iterations, so long as the gradient is Lipschitz continuous with respect to some norm. The same convergence rate can also be shown if the sub-problems are only solved approximately.[3]

The iterates of the algorithm can always be represented as a sparse convex combination of the extreme points of the feasible set, which has contributed to the popularity of the algorithm for sparse greedy optimization in machine learning and signal processing problems,[4] as well as, for example, the optimization of minimum-cost flows in transportation networks.[5]

If the feasible set is given by a set of linear constraints, then the subproblem to be solved in each iteration becomes a linear program. While the worst-case convergence rate of $O(1/k)$ cannot be improved in general, faster convergence can be obtained for special problem classes, such as some strongly convex problems.[6]

Since $f$ is convex, for any two points $\mathbf{x}, \mathbf{y} \in \mathcal{D}$ we have

$$f(\mathbf{y}) \geq f(\mathbf{x}) + (\mathbf{y} - \mathbf{x})^T \nabla f(\mathbf{x}).$$

This also holds for the (unknown) optimal solution $\mathbf{x}^*$. That is, $f(\mathbf{x}^*) \geq f(\mathbf{x}) + (\mathbf{x}^* - \mathbf{x})^T \nabla f(\mathbf{x})$. The best lower bound with respect to a given point $\mathbf{x}$ is therefore

$$f(\mathbf{x}^*) \geq \min_{\mathbf{y} \in \mathcal{D}} \left\{ f(\mathbf{x}) + (\mathbf{y} - \mathbf{x})^T \nabla f(\mathbf{x}) \right\}.$$

The latter optimization problem is solved in every iteration of the Frank–Wolfe algorithm, so the solution $\mathbf{s}_k$ of the direction-finding subproblem of the $k$-th iteration can be used to determine increasing lower bounds $l_k$ during each iteration by setting $l_0 = -\infty$ and

$$l_k := \max\left(l_{k-1},\; f(\mathbf{x}_k) + (\mathbf{s}_k - \mathbf{x}_k)^T \nabla f(\mathbf{x}_k)\right).$$

Such lower bounds on the unknown optimal value are important in practice because they can be used as a stopping criterion, and they give an efficient certificate of the approximation quality in every iteration, since always $l_k \leq f(\mathbf{x}^*) \leq f(\mathbf{x}_k)$. It has been shown that this corresponding duality gap, that is, the difference between $f(\mathbf{x}_k)$ and the lower bound $l_k$, decreases with the same convergence rate, i.e. $f(\mathbf{x}_k) - l_k = O(1/k)$.
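To make the iteration concrete, here is a minimal sketch of the method on the probability simplex, where the direction-finding subproblem has a closed-form solution: the minimizing vertex is the coordinate with the smallest gradient entry. The function name, the step-size rule shown, and the example objective are illustrative choices, not prescribed by the article.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, max_iter=1000, tol=1e-8):
    """Frank-Wolfe over the probability simplex.

    grad: callable returning the gradient of the objective f at x.
    The linear subproblem min_{s in D} <grad(x), s> is solved exactly:
    over the simplex its minimizer is the vertex e_i with i = argmin_i g_i.
    """
    x = x0.astype(float).copy()
    for k in range(max_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # direction-finding subproblem
        gap = g @ (x - s)            # duality gap, the f(x_k) - l_k certificate
        if gap <= tol:
            break
        gamma = 2.0 / (k + 2.0)      # classic step-size rule
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: project a point b onto the simplex, i.e. minimize ||x - b||^2.
b = np.array([0.6, 0.3, 0.2, -0.1])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.ones(4) / 4)
```

Note that every iterate is a convex combination of at most k + 1 simplex vertices, which is exactly the sparsity property mentioned above.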
https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe_algorithm
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude, that this eigenvalue is real, and that the corresponding eigenvector can be chosen to have strictly positive components; it also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem,[1] Hawkins–Simon condition[2]); to demography (Leslie population age distribution model);[3] to social networks (DeGroot learning process); to Internet search engines (PageRank);[4] and even to the ranking of American football teams.[5] The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.[6][7]

Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers A^k as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of non-negative matrices.

Let $A = (a_{ij})$ be an $n \times n$ positive matrix: $a_{ij} > 0$ for $1 \leq i, j \leq n$. Then the following statements hold. All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer,[12] chapter 8, claims 8.2.11–15, page 667, and exercises 8.2.5, 7, 9, pages 668–669.

The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while $w^T v = 1$.

There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues.[13][14] However, for the example $A = \left({\begin{smallmatrix}0&1\\1&0\end{smallmatrix}}\right)$, the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for $A = \left({\begin{smallmatrix}0&1\\0&0\end{smallmatrix}}\right)$, the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive. However, Frobenius found a special subclass of non-negative matrices, the irreducible matrices, for which a non-trivial generalization is possible.
For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form $\omega r$, where $r$ is a real strictly positive eigenvalue, and $\omega$ ranges over the complex h-th roots of 1 for some positive integer h called the period of the matrix. The eigenvector corresponding to $r$ has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also, all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below.

Let A be an n×n square matrix over a field F. The matrix A is irreducible if any of the following equivalent properties holds.

Definition 1: A does not have non-trivial invariant coordinate subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of F^n. More explicitly, for any linear subspace spanned by standard basis vectors e_{i_1}, ..., e_{i_k}, 0 < k < n, its image under the action of A is not contained in the same subspace.

Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P; that is, there is no permutation matrix P such that

$$PAP^{-1} = \begin{pmatrix} E & F \\ O & G \end{pmatrix},$$

where E and G are non-trivial (i.e. of size greater than zero) square matrices.

Definition 3: One can associate with a matrix A a certain directed graph G_A. It has n vertices labeled 1, ..., n, and there is an edge from vertex i to vertex j precisely when a_{ij} ≠ 0. Then the matrix A is irreducible if and only if its associated graph G_A is strongly connected.

If F is the field of real or complex numbers, then we also have the following condition.

Definition 4: The group representation of $(\mathbb{R}, +)$ on $\mathbb{R}^n$ or $(\mathbb{C}, +)$ on $\mathbb{C}^n$ given by $t \mapsto \exp(tA)$ has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.)

A matrix is reducible if it is not irreducible.

A real matrix A is primitive if it is non-negative and its m-th power is positive for some natural number m (i.e. all entries of A^m are positive).

Let A be real and non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (A^m)_{ii} > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in G_A (see Kitchens,[15] page 16). The period is also called the index of imprimitivity (Meyer,[12] page 674) or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices.

All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period.

Results for non-negative matrices were first obtained by Frobenius in 1912. Let $A$ be an irreducible non-negative $N \times N$ matrix with period $h$ and spectral radius $\rho(A) = r$.
Then the following statements hold, where $O$ denotes a zero matrix and the blocks along the main diagonal are square matrices. The example $A = \left({\begin{smallmatrix}0&0&1\\0&0&1\\1&1&0\end{smallmatrix}}\right)$ shows that the (square) zero-matrices along the diagonal may be of different sizes, the blocks A_j need not be square, and h need not divide n.

Let A be an irreducible non-negative matrix; then:

A matrix A is primitive provided it is non-negative and A^m is positive for some m, and hence A^k is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A.[24]

Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The examples given below only scratch the surface of its vast application domain.

The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix)[25]

$$PAP^{-1} = \begin{pmatrix} B_1 & * & \cdots & * \\ O & B_2 & \cdots & * \\ \vdots & & \ddots & \vdots \\ O & O & \cdots & B_k \end{pmatrix},$$

where P is a permutation matrix and each B_i is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of PAP^{-1}; moreover, the spectrum of A is just the union of the spectra of the B_i.

The invertibility of A can also be studied. The inverse of PAP^{-1} (if it exists) must have diagonal blocks of the form B_i^{-1}, so if any B_i isn't invertible then neither is PAP^{-1} nor A. Conversely, let D be the block-diagonal matrix corresponding to PAP^{-1}, in other words PAP^{-1} with the asterisks zeroised. If each B_i is invertible then so is D, and D^{-1}(PAP^{-1}) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if N^k = 0 the inverse of 1 − N is 1 + N + N^2 + ... + N^{k−1}), so PAP^{-1} and A are both invertible.

Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible B_i. For example, the Perron root is the maximum of the ρ(B_i). While there will still be eigenvectors with non-negative components, it is quite possible that none of these will be positive.

A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible. If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.

The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative n-square matrix is the graph with vertices numbered 1, ..., n and arc ij if and only if A_{ij} ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible.[26][27]

The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).
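Definition 3 gives an immediately checkable criterion for irreducibility. The sketch below is illustrative (the function name is my own); it uses the standard fact that a non-negative n×n matrix A is irreducible exactly when (I + B)^{n−1} is entrywise positive, where B is the 0/1 sparsity pattern of A, i.e. when every vertex of G_A can reach every other vertex.

```python
import numpy as np

def is_irreducible(A: np.ndarray) -> bool:
    """Irreducibility test via Definition 3: the directed graph G_A
    (edge i -> j whenever a_ij != 0) must be strongly connected,
    equivalently (I + B)^(n-1) > 0 entrywise for the pattern B of A."""
    n = A.shape[0]
    B = ((np.eye(n) + (A != 0)) > 0).astype(np.int64)
    R = np.eye(n, dtype=np.int64)
    for _ in range(n - 1):
        R = np.clip(R @ B, 0, 1)  # keep entries 0/1 to avoid overflow
    return bool((R > 0).all())

# The irreducible-but-not-primitive example from the text:
A = np.array([[0, 1], [1, 0]])
print(is_irreducible(A))  # True; its period is 2, so it is not primitive
```

Clipping the product back to 0/1 after each step keeps this a pure reachability computation, so the entries never grow beyond 1.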
More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.[28]

A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950), who used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work.[29] Another proof is based on spectral theory,[30] from which part of the arguments are borrowed.

If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (the Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues; hence r is the spectral radius of A. This statement does not hold for general non-negative irreducible matrices, which have h eigenvalues with the same absolute value as r, where h is the period of A.

Let A be a positive matrix, and assume that its spectral radius ρ(A) = 1 (otherwise consider A/ρ(A)). Hence, there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less than or equal to 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer m such that A^m is a positive matrix and the real part of λ^m is negative. Let ε be half the smallest diagonal entry of A^m and set T = A^m − εI, which is yet another positive matrix. Moreover, if Ax = λx then A^m x = λ^m x, thus λ^m − ε is an eigenvalue of T. Because of the choice of m, this point lies outside the unit disk; consequently ρ(T) > 1. On the other hand, all the entries in T are positive and less than or equal to those in A^m, so by Gelfand's formula ρ(T) ≤ ρ(A^m) ≤ ρ(A)^m = 1. This contradiction means that λ = 1 and there can be no other eigenvalues on the unit circle.

Absolutely the same arguments can be applied to the case of primitive matrices; we just need to mention the following simple lemma, which clarifies the properties of primitive matrices: given a non-negative A, assume there exists m such that A^m is positive; then A^{m+1}, A^{m+2}, A^{m+3}, ... are all positive. Indeed, A^{m+1} = A A^m, so it can have a zero element only if some row of A is entirely zero, but in that case the same row of A^m would be zero. Applying the same arguments as above for primitive matrices proves the main claim.

For a positive (or more generally irreducible non-negative) matrix A, the dominant eigenvector is real and strictly positive (for non-negative A, respectively, non-negative). This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix A the sequence of vectors b_{k+1} = A b_k / |A b_k| converges to the eigenvector with the maximum eigenvalue. (The initial vector b_0 can be chosen arbitrarily except for some measure-zero set.) Starting with a non-negative vector b_0 produces a sequence of non-negative vectors b_k. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for A, proving the assertion. The corresponding eigenvalue is non-negative. The proof requires two additional arguments.
First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this. Second, we need to ensure strict positivity of all of the components of the eigenvector in the case of irreducible matrices. This follows from the following fact, which is of independent interest: if A is an irreducible non-negative matrix, then any non-negative eigenvector of A with at least one strictly positive component is in fact strictly positive, and its eigenvalue is strictly positive.

Proof. One of the definitions of irreducibility for non-negative matrices is that for all indices i, j there exists m such that (A^m)_{ij} is strictly positive. Given a non-negative eigenvector v with at least one strictly positive component, say the i-th, the corresponding eigenvalue is strictly positive: indeed, given n such that (A^n)_{ii} > 0, we have r^n v_i = (A^n v)_i ≥ (A^n)_{ii} v_i > 0. Hence r is strictly positive. The eigenvector is also strictly positive: given m such that (A^m)_{ji} > 0, we have r^m v_j = (A^m v)_j ≥ (A^m)_{ji} v_i > 0, hence v_j is strictly positive, i.e., the eigenvector is strictly positive.

This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to the Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer.[12]

Given a strictly positive eigenvector v corresponding to r and another eigenvector w with the same eigenvalue (the vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − r has a basis consisting of real vectors), assume at least one of the components of w is positive (otherwise multiply w by −1). Given the maximal possible α such that u = v − αw is non-negative, one of the components of u is zero (otherwise α would not be maximal). The vector u is an eigenvector. It is non-negative, and by the lemma described in the previous section non-negativity implies strict positivity for any eigenvector. On the other hand, as above, at least one component of u is zero. The contradiction implies that w does not exist.

Case: there are no Jordan blocks corresponding to the Perron–Frobenius eigenvalue r or to the other eigenvalues with the same absolute value. If there were a Jordan block, then the infinity norm ‖(A/r)^k‖_∞ would tend to infinity for k → ∞, but that contradicts the existence of the positive eigenvector. Given r = 1 (otherwise replace A by A/r), let v be a Perron–Frobenius strictly positive eigenvector, so Av = v; then:

$$\|v\|_{\infty} = \|A^k v\|_{\infty} \geq \|A^k\|_{\infty} \min_i(v_i) \quad\Rightarrow\quad \|A^k\|_{\infty} \leq \|v\|_{\infty} / \min_i(v_i).$$

So ‖A^k‖_∞ is bounded for all k. This gives another proof that there are no eigenvalues which have greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan block for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of a Jordan block implies that ‖A^k‖_∞ is unbounded. For a two-by-two Jordan block,

$$J^k = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}^k = \begin{pmatrix} \lambda^k & k\lambda^{k-1} \\ 0 & \lambda^k \end{pmatrix},$$

hence ‖J^k‖_∞ = 1 + k (for |λ| = 1), so it tends to infinity as k does. Since J^k = C^{−1} A^k C, we have ‖A^k‖_∞ ≥ ‖J^k‖_∞ / (‖C^{−1}‖_∞ ‖C‖_∞), so it also tends to infinity. The resulting contradiction implies that there are no Jordan blocks for the corresponding eigenvalues.

Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of nonprimitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work.
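The power-method argument above is also the standard numerical recipe for the Perron root and vector. Here is a minimal sketch (function name, tolerances, and the example matrix are my choices), assuming A is primitive so that the iteration converges:

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=10_000):
    """Power iteration b_{k+1} = A b_k / |A b_k|, as in the proof above.
    For a primitive non-negative matrix this converges to the
    Perron-Frobenius eigenvector; the quotient below estimates r."""
    b = np.ones(A.shape[0]) / A.shape[0]  # non-negative start keeps iterates non-negative
    for _ in range(max_iter):
        Ab = A @ b
        b_next = Ab / np.linalg.norm(Ab)
        if np.linalg.norm(b_next - b) < tol:
            b = b_next
            break
        b = b_next
    r = (b @ A @ b) / (b @ b)             # dominant eigenvalue estimate
    return r, b

# Example: a positive 3x3 matrix; the returned eigenvector is strictly positive.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])
r, v = power_method(A)
```

Starting from a non-negative vector mirrors the proof: every iterate stays non-negative, so the limit is the non-negative (in fact strictly positive) Perron vector.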
Given a positive (or more generally, irreducible non-negative) matrix A, the Perron–Frobenius eigenvector is the only (up to multiplication by a constant) non-negative eigenvector for A. Other eigenvectors must contain negative or complex components, since eigenvectors for different eigenvalues are orthogonal in some sense, but two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue; and the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional. Assuming there exists an eigenpair (λ, y) for A such that the vector y is positive, and given (r, x), where x is the left Perron–Frobenius eigenvector for A (i.e. an eigenvector for A^T), then r x^T y = (x^T A) y = x^T (A y) = λ x^T y; also x^T y > 0, so one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one.[31]

Given a positive (or more generally, irreducible non-negative) matrix A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]_i / x_i taken over all those i such that x_i ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue r.

For the proof, we denote the maximum of f by the value R. The proof requires showing R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, we consider an arbitrary non-negative vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now we use the positive left eigenvector w of A for the Perron–Frobenius eigenvalue r (i.e. a right eigenvector of A^T); then ξ w^T x = w^T ξx ≤ w^T (Ax) = (w^T A) x = r w^T x. Hence f(x) = ξ ≤ r, which implies R ≤ r.[32]

Let A be a positive (or more generally, primitive) matrix, and let r be its Perron–Frobenius eigenvalue, and let P denote the limit of (A/r)^k as k → ∞. Then P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices. Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above.)

Given that M is diagonalizable, M is conjugate to a diagonal matrix with eigenvalues r_1, ..., r_n on the diagonal (denote r_1 = r). The matrix M^k / r^k will be conjugate to the diagonal matrix (1, (r_2/r)^k, ..., (r_n/r)^k), which tends to (1, 0, 0, ..., 0) for k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable).

The projection and commutativity properties are elementary corollaries of the definition: M M^k/r^k = M^k/r^k M; P^2 = lim M^{2k}/r^{2k} = P. The third fact is also elementary: M(Pu) = M lim M^k/r^k u = lim r M^{k+1}/r^{k+1} u, so taking the limit yields M(Pu) = r(Pu); hence the image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions.

Denoting by v the r-eigenvector for M (and by w the one for M^T): the columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of w^T. So P takes the form a v w^T for some a, and hence its trace equals a w^T v. The trace of a projector equals the dimension of its image, and it was proved before that the image is not more than one-dimensional. From the definition one sees that P acts as the identity on the r-eigenvector for M, so the image is exactly one-dimensional. So choosing w^T v = 1 implies P = v w^T.
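The function f above (the Collatz–Wielandt characterization) is one line of code, and evaluating it at any non-negative non-zero vector yields a certified lower bound on r. A small illustrative sketch (the helper name and example matrix are mine):

```python
import numpy as np

def collatz_wielandt(A, x):
    """f(x) = min over {i : x_i != 0} of [Ax]_i / x_i; max over x is r."""
    Ax = A @ x
    nz = x != 0
    return np.min(Ax[nz] / x[nz])

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(collatz_wielandt(A, np.ones(2)))  # 3.0, a lower bound on r
w, V = np.linalg.eigh(A)                # A is symmetric in this example
v = np.abs(V[:, -1])                    # Perron eigenvector
print(collatz_wielandt(A, v), w[-1])    # both ~3.618: f attains its maximum r
```

Evaluating f at the all-ones vector recovers the minimum row sum, which is exactly the lower bound on r discussed in the next paragraphs.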
For any non-negative matrix A, its Perron–Frobenius eigenvalue r satisfies the inequality

$$r \leq \max_i \sum_j a_{ij}.$$

This is not specific to non-negative matrices: for any matrix A with an eigenvalue $\lambda$ it is true that $|\lambda| \leq \max_i \sum_j |a_{ij}|$. This is an immediate corollary of the Gershgorin circle theorem. However, another proof is more direct: any matrix induced norm satisfies the inequality $\|A\| \geq |\lambda|$ for any eigenvalue $\lambda$, because, if $x$ is a corresponding eigenvector, $\|A\| \geq |Ax|/|x| = |\lambda x|/|x| = |\lambda|$. The infinity norm of a matrix is the maximum of row sums: $\|A\|_{\infty} = \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}|$. Hence the desired inequality is exactly $\|A\|_{\infty} \geq |\lambda|$ applied to the non-negative matrix A.

Another inequality is

$$r \geq \min_i \sum_j a_{ij}.$$

This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say w_i) is 1. Then r = (Aw)_i ≥ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all non-negative matrices by continuity. Another way to argue it is via the Collatz–Wielandt formula: one takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality.

The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection, and it enjoys the following property: the Perron projection of an irreducible non-negative square matrix is a positive matrix. Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also, if P is its Perron projection then AP = PA = ρ(A)P, so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px, which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1, then it can be decomposed as P ⊕ (1 − P)A so that A^n = P + (1 − P)A^n. As n increases, the second of these terms decays to zero, leaving P as the limit of A^n as n → ∞.

The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates, then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form: here they are overlaid, and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality, which is what facilitates the decomposition.

The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive, but there may now be other eigenvalues of modulus ρ(A) that negate the use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1.
So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.

Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, R^h = P, and the cyclic group P, R, R^2, ..., R^{h−1} represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula $h^{-1}\sum_{k=1}^{h} \lambda^{-k} R^{k}$. All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8), but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between A^n and R^n is A^n − R^n = (1 − P)A^n, representing the transients of A^n, which eventually decay to zero. P may be computed as the limit of A^{nh} as n → ∞.

The matrices $L = \left({\begin{smallmatrix}1&0&0\\1&0&0\\1&1&1\end{smallmatrix}}\right)$, $P = \left({\begin{smallmatrix}1&0&0\\1&0&0\\-1&1&1\end{smallmatrix}}\right)$, $T = \left({\begin{smallmatrix}0&1&1\\1&0&1\\1&1&0\end{smallmatrix}}\right)$, and $M = \left({\begin{smallmatrix}0&1&0&0&0\\1&0&0&0&0\\0&0&0&1&0\\0&0&0&0&1\\0&0&1&0&0\end{smallmatrix}}\right)$ provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P; thus, when the original matrix is reducible, the projections may lose non-negativity, and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive, but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth. If ω = e^{iπ/3} then ω^6 = 1 and the eigenvalues of M are {1, ω^2, ω^3 = −1, ω^4}, with a dimension-2 eigenspace for +1, so ω and ω^5 are both absent. More precisely, since M is block-diagonal cyclic, the eigenvalues are {1, −1} for the first block and {1, ω^2, ω^4} for the lower one.[citation needed]

A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For the avoidance of doubt, a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous.[33]

The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector. Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents.
The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.
https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem
Head/tail breaks is a clustering algorithm for data with a heavy-tailed distribution such as power laws and lognormal distributions. The heavy-tailed distribution can be simply described as the scaling pattern of far more small things than large ones, or alternatively numerous smallest, a very few largest, and some in between the smallest and largest. The classification is done by dividing things into large (called the head) and small (called the tail) things around the arithmetic mean or average, and then recursively continuing the division process for the large things (the head) until the notion of far more small things than large ones is no longer valid, or until only more or less similar things are left.[1] Head/tail breaks is not just for classification, but also for the visualization of big data by keeping the head, since the head is self-similar to the whole. Head/tail breaks can be applied not only to vector data such as points, lines and polygons, but also to raster data like digital elevation models (DEM).

Head/tail breaks is motivated by the inability of conventional classification methods such as equal intervals, quantiles, geometric progressions, standard deviation, and natural breaks (commonly known as Jenks natural breaks optimization, or k-means clustering) to reveal the underlying scaling or living structure with the inherent hierarchy (or heterogeneity) characterized by the recurring notion of far more small things than large ones.[2][3] Note that the notion of far more small things than large ones refers not only to geometric properties, but also to topological and semantic properties. In this connection, the notion should be interpreted as far more unpopular (or less-connected) things than popular (or well-connected) ones, or far more meaningless things than meaningful ones. Head/tail breaks uses the mean or average to dichotomize a dataset into small and large values, rather than characterizing classes by average values, unlike k-means clustering or natural breaks. Through head/tail breaks, a dataset is seen as a living structure with an inherent hierarchy of far more smalls than larges, or recursively perceived as the head of the head of the head and so on. It opens up new avenues of analyzing data from a holistic and organic point of view while considering different types of scales and scaling in spatial analysis.[4]

Given some variable X that demonstrates a heavy-tailed distribution, there are far more small x than large ones. Take the average of all x_i and obtain the first mean m1. Then calculate the second mean for those x_i greater than m1 and obtain m2. In the same recursive way, we can get m3, depending on whether the ending condition of no longer far more small x than large ones is met. For simplicity, assume there are three means, m1, m2, and m3. This classification leads to four classes: [minimum, m1], (m1, m2], (m2, m3], (m3, maximum]. In general, it can be represented as a recursive function, as in the sketch below. The resulting number of classes is referred to as the ht-index, an alternative index to the fractal dimension for characterizing the complexity of fractals or geographic features: the higher the ht-index, the more complex the fractals.[5]

The criterion to stop the iterative classification process using the head/tail breaks method is that the remaining data (i.e., the head part) are not heavy-tailed, or simply, that the head part is no longer a minority (i.e., the proportion of the head part is no longer less than a threshold such as 40%).
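The recursion is short enough to state directly. A minimal sketch (the function name and return convention are mine; the original authors' code differs in details), with the 40% threshold as the default:

```python
def head_tail_breaks(data, ratio=0.4):
    """Recursively split at the mean while the head stays a minority
    (share of head <= ratio).  Returns the break values (the means);
    the number of classes, i.e. the ht-index, is len(breaks) + 1."""
    breaks, values = [], list(data)
    while values:
        mean = sum(values) / len(values)
        head = [v for v in values if v > mean]
        if not head or len(head) / len(values) > ratio:
            break                      # head no longer a clear minority
        breaks.append(mean)
        values = head                  # recurse on the head only
    return breaks

# The example array discussed in this article, with the relaxed 50% threshold:
print(head_tail_breaks([19, 8, 7, 6, 2, 1, 1, 1, 0], ratio=0.5))
# -> [5.0, 10.0]: heads (19, 8, 7, 6) and (19), so the ht-index is 3.
```

Note that the example array needs the relaxed threshold mentioned in the next paragraph, since its first head holds 4 of 9 values (about 44%).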
This threshold is suggested to be 40% by Jiang et al. (2013),[6] as in the code above (i.e., length(head)/length(data) ≤ 40%). This process is called head/tail breaks 1.0. But sometimes a larger threshold, for example 50% or more, can be used, as Jiang and Yin (2014)[5] noted in another article: "this condition can be relaxed for many geographic features, such as 50 percent or even more". However, all heads' percentages on average must be smaller than 40% (or slightly more, e.g. 41% or 42%), indicating far more small things than large ones. Many real-world data cannot be fitted into a perfect long-tailed distribution, and therefore the threshold can be relaxed structurally. In head/tail breaks 2.0 the threshold only applies to the overall heads' percentage.[7] This means that the percentages of all heads relative to their tails should be around 40% on average. Individual classes can have any percentage split around the average, as long as it averages out as a whole. For example, if data is distributed in such a way that it has a clearly defined head and tail during the first and second iterations (length(head)/length(data) < 20%) but a much less well defined long-tailed distribution for the third iteration (60% in the head), head/tail breaks 2.0 allows the iteration to continue into the fourth iteration, which can be distributed 30% head and 70% tail again, and so on. As long as the overall threshold is not surpassed, the head/tail breaks classification holds.

A good tool to display the scaling pattern, or the heavy-tailed distribution, is the rank-size plot, which is a scatter plot that displays a set of values according to their ranks. With this tool, a new index[8] termed the ratio of areas (RA) in a rank-size plot was defined to characterize the scaling pattern. The RA index has been successfully used in the estimation of traffic conditions. However, the RA index can only be used as a complementary method to the ht-index, because it is ineffective at capturing the scaling structure of geographic features. In addition to the ht-index, other indices are also derived with the head/tail breaks.

Instead of more or less similar things, there are far more small things than large ones surrounding us. Given the ubiquity of the scaling pattern, head/tail breaks is found to be of use in statistical mapping, map generalization, cognitive mapping and even the perception of beauty.[6][12][13] It helps visualize big data, since big data are likely to show the scaling property of far more small things than large ones. Essentially, geographic phenomena can be scaleful or scale-free. Scaleful phenomena can be explained by conventional mathematical or geographical operations, but scale-free phenomena cannot. Head/tail breaks can be used to characterize scale-free phenomena, which are in the majority.[14] The visualization strategy is to recursively drop out the tail parts until the head parts are clear or visible enough.[15][16] In addition, it helps delineate cities, or natural cities to be more precise, from various geographic information such as street networks, social media geolocation data, and nighttime images.

As the head/tail breaks method can be used iteratively to obtain the head parts of a data set, it actually captures the underlying hierarchy of the data set. For example, if we divide the array (19, 8, 7, 6, 2, 1, 1, 1, 0) with the head/tail breaks method, we get two head parts, i.e., the first head part (19, 8, 7, 6) and the second head part (19).
These two head parts, together with the original array, form a three-level hierarchy: the whole array (19, 8, 7, 6, 2, 1, 1, 1, 0) at the bottom, the first head part (19, 8, 7, 6) in the middle, and the second head part (19) at the top. The number of levels of this hierarchy is actually a characterization of the imbalance of the example array, and this number of levels has been termed the ht-index.[5] With the ht-index, we are able to compare the degrees of imbalance of two data sets. For example, the ht-index of the example array (19, 8, 7, 6, 2, 1, 1, 1, 0) is 3, and the ht-index of another array (19, 8, 8, 8, 8, 8, 8, 8, 8) is 2. Therefore, the degree of imbalance of the former array is higher than that of the latter array.

The use of fractals in modelling human geography has long been seen as useful in measuring the spatial distribution of human settlements.[17] Head/tail breaks can be used to do just that with a concept called natural cities. The term 'natural cities' refers to human settlements, or human activities in general, on the Earth's surface that are naturally or objectively defined and delineated from massive geographic information based on the head/tail division rule, a non-recursive form of head/tail breaks.[18][19] Such geographic information could be from various sources, such as massive numbers of street junctions[19] and street ends, massive numbers of street blocks, nighttime imagery and social media users' locations. Based on these, the different urban forms and configurations detected in cities can be derived.[20] Distinct from conventional cities, the adjective 'natural' is explained not only by the sources of natural cities, but also by the approach used to derive them.[1] Natural cities are derived from a meaningful cutoff averaged from a massive number of units extracted from geographic information.[15] Those units vary according to the kind of geographic information; for example, the units could be area units for street blocks and pixel values for nighttime images.[21] A natural cities model has been created using the ArcGIS model builder;[22] it follows the same process of deriving natural cities from location-based social media,[18] namely, building up a huge triangular irregular network (TIN) based on point features (street nodes in this case) and regarding the triangles which are smaller than a mean value as the natural cities. These natural cities can also be created from other open-access information like OpenStreetMap and further be used as an alternative delineation of administrative boundaries.[23] Scaling law can at the same time be correctly identified, and administrative borders can be created to respect it by the delineation of the natural cities.[24][25] This type of methodology can help urban geographers and planners by correctly identifying the effective urban territorial scope of the areas they work in.[26]

Natural cities can vary depending on the scale at which they are delineated, which is why they optimally have to be based on data from the whole world. As that is computationally impossible, a country or county scale is suggested as an alternative.[27] Due to the scale-free nature of natural cities and the data they are based on, there are also possibilities to use the natural cities method for further measurements. One of the main advantages of natural cities is that they are derived bottom-up instead of top-down.
That means that the borders are determined by the data of something physical rather than by an administrative government or administration.[28] For example, by calculating the natural cities of a natural city recursively, the dense areas within a natural city are identified. These can be seen as city centers, for example. By using the natural cities method in this way, further border delineations can be made depending on the scale the natural cities were generated from.[29] Natural cities derived from smaller regional areas will provide less accurate but still usable results in certain analyses, such as determining urban expansion over time.[30] As mentioned before, though, natural cities should optimally be based on a massive amount of, for example, street intersections for an entire country or even the world. This is because natural cities are based on the wisdom of crowds thinking, which needs the biggest set of available data for the best results. Also note that the structure of natural cities can be considered fractal in nature.[31]

It is important, when head/tail breaks are being used to generate natural cities, that the data is not aggregated afterwards. For example, the number of generated natural cities can only be known after they are generated. It is not possible to use a pre-defined number of cities for an area or country and aggregate the results of the natural cities to administratively determined city borders. Naturally, natural cities should follow Zipf's law; if they do not, the area is most likely too small, or the data has probably been processed wrongly. An example of this is seen in a study in which head/tail breaks were used to extract natural cities, but they were aggregated to administrative borders, which then led to the conclusion that the cities do not follow Zipf's law.[32] This happens more often in science, where papers produce results which are actually false.[33]

Current color renderings for DEMs or density maps are essentially based on conventional classifications such as natural breaks or equal intervals, so they disproportionately exaggerate high elevations or high densities. As a matter of fact, there are not so many high elevations or high-density locations.[34] It has been found that coloring based on head/tail breaks is more favorable than coloring based on other classifications.[35][36][2]

The pattern of far more small things than large ones frequently recurs in geographical data. A spiral layout inspired by the golden ratio or Fibonacci sequence can help visualize this recursive notion of scaling hierarchy and the different levels of scale.[37][38] In other words, from the smallest to the largest scale, a map can be seen as a map of a map of a map, and so on.

Head/tail breaks has a number of other applications, and several implementations are available under Free/Open Source Software licenses.
https://en.wikipedia.org/wiki/Head/tail_Breaks
The Hail Mary Cloud was, or is, a password-guessing botnet, which used a statistical equivalent to brute-force password guessing. The botnet ran from possibly as early as 2005,[1] and certainly from 2007 until 2012 and possibly later. The botnet was named and documented by Peter N. M. Hansteen.[2]

The principle is that a botnet can try several thousand of the more likely passwords against thousands of hosts, rather than millions of passwords against one host. Since the attacks were widely distributed, the frequency on a given server was low and was unlikely to trigger alarms.[2] Moreover, the attempts against a given host come from different members of the botnet, thus decreasing the effectiveness of both IP-based detection and blocking.
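The arithmetic behind the evasion is easy to illustrate. The sketch below is purely illustrative; the attempt counts, botnet size, and duration are assumptions for the example, not figures from Hansteen's data.

```python
# Illustrative only: all parameters are assumptions, not measured values.
passwords = 5_000      # "more likely" passwords tried per target host
bots = 2_000           # participating botnet members
days = 60              # duration of one slow sweep

per_host_per_day = passwords / days    # what any single target sees
per_source_ip = passwords / bots       # attempts per source IP per target

print(f"~{per_host_per_day:.0f} attempts/day on one host")  # ~83/day
print(f"~{per_source_ip:.1f} attempts per source IP")       # ~2.5, under typical lockout limits
```

A rate of tens of attempts per day, split across thousands of source addresses, stays well below the per-IP failure thresholds that tools like log-based blockers commonly alarm on, which is the whole design of the attack.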
https://en.wikipedia.org/wiki/Hail_Mary_Cloud
In computer science, dancing links (DLX) is a technique for adding and deleting a node from a circular doubly linked list. It is particularly useful for efficiently implementing backtracking algorithms, such as Knuth's Algorithm X for the exact cover problem.[1] Algorithm X is a recursive, nondeterministic, depth-first, backtracking algorithm that finds all solutions to the exact cover problem. Some of the better-known exact cover problems include tiling, the n queens problem, and Sudoku.

The name dancing links, which was suggested by Donald Knuth, stems from the way the algorithm works, as iterations of the algorithm cause the links to "dance" with partner links so as to resemble an "exquisitely choreographed dance." Knuth credits Hiroshi Hitotsumatsu and Kōhei Noshita with having invented the idea in 1979,[2] but it is his paper which has popularized it. As the remainder of this article discusses the details of an implementation technique for Algorithm X, the reader is strongly encouraged to read the Algorithm X article first.

The idea of DLX is based on the observation that in a circular doubly linked list of nodes,

x.left.right ← x.right;
x.right.left ← x.left;

will remove node x from the list, while

x.left.right ← x;
x.right.left ← x;

will restore x's position in the list, assuming that x.right and x.left have been left unmodified. This works regardless of the number of elements in the list, even if that number is 1.

Knuth observed that a naive implementation of his Algorithm X would spend an inordinate amount of time searching for 1's. When selecting a column, the entire matrix had to be searched for 1's. When selecting a row, an entire column had to be searched for 1's. After selecting a row, that row and a number of columns had to be searched for 1's. To improve this search time from complexity O(n) to O(1), Knuth implemented a sparse matrix where only 1's are stored.

At all times, each node in the matrix will point to the adjacent nodes to the left and right (1's in the same row), above and below (1's in the same column), and the header for its column (described below). Each row and column in the matrix will consist of a circular doubly linked list of nodes. Each column will have a special node known as the "column header," which will be included in the column list, and will form a special row (the "control row") consisting of all the columns which still exist in the matrix. Finally, each column header may optionally track the number of nodes in its column, so that locating a column with the lowest number of nodes is of complexity O(n) rather than O(n×m), where n is the number of columns and m is the number of rows. Selecting a column with a low node count is a heuristic which improves performance in some cases, but is not essential to the algorithm.

In Algorithm X, rows and columns are regularly eliminated from and restored to the matrix. Eliminations are determined by selecting a column and a row in that column. If a selected column doesn't have any rows, the current matrix is unsolvable and must be backtracked. When an elimination occurs, all columns for which the selected row contains a 1 are removed, along with all rows (including the selected row) that contain a 1 in any of the removed columns. The columns are removed because they have been filled, and the rows are removed because they conflict with the selected row. To remove a single column, first remove the selected column's header. Next, for each row where the selected column contains a 1, traverse the row and remove it from other columns (this makes those rows inaccessible and is how conflicts are prevented).
Repeat this column removal for each column where the selected row contains a 1. This order ensures that any removed node is removed exactly once and in a predictable order, so it can be backtracked appropriately. If the resulting matrix has no columns, then they have all been filled and the selected rows form the solution.

To backtrack, the above process must be reversed using the second operation stated above. One requirement of using that operation is that backtracking must be done as an exact reversal of eliminations. Knuth's paper gives a clear picture of these relationships and how the node removal and reinsertion works, and provides a slight relaxation of this limitation.

It is also possible to solve one-cover problems, in which a particular constraint is optional but can be satisfied no more than once. Dancing Links accommodates these with primary columns, which must be filled, and secondary columns, which are optional. This alters the algorithm's solution test from a matrix having no columns to a matrix having no primary columns; and if the heuristic of minimum 1's in a column is being used, then it needs to be checked only within primary columns. Knuth discusses optional constraints as applied to the n queens problem. The chessboard diagonals represent optional constraints, as some diagonals may not be occupied. If a diagonal is occupied, it can be occupied only once.
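The two pointer assignments quoted at the start of the article are the entire mechanism. Here is a minimal one-axis sketch in Python (class and function names are mine; a full DLX node also carries up/down links, a column-header pointer, and per-column counts):

```python
class Node:
    """A node in a circular doubly linked list: one axis of a DLX grid."""
    def __init__(self):
        self.left = self.right = self

def insert_right(anchor, x):
    """Splice x into the list immediately to the right of anchor."""
    x.left, x.right = anchor, anchor.right
    anchor.right.left = x
    anchor.right = x

def remove(x):
    """First operation: unlink x; x.left and x.right stay untouched."""
    x.left.right = x.right
    x.right.left = x.left

def restore(x):
    """Second operation: x still remembers its old neighbours,
    so undoing the removal is exactly two assignments."""
    x.left.right = x
    x.right.left = x

# Works even when the list holds a single element, as noted above:
head = Node()
a = Node()
insert_right(head, a)
remove(a)
restore(a)   # the list is back to head <-> a
```

Because remove never touches x's own pointers, a backtracking solver can undo eliminations in exactly the reverse order it performed them, which is the "exact reversal" requirement described above.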
https://en.wikipedia.org/wiki/Dancing_Links
The Universal Mobile Telecommunications System (UMTS) is a 3G mobile cellular system for networks based on the GSM standard.[1] UMTS uses wideband code-division multiple access (W-CDMA) radio access technology to offer greater spectral efficiency and bandwidth to mobile network operators compared to previous 2G systems like GPRS and CSD.[2] UMTS on its own provides a peak theoretical data rate of 2 Mbit/s.[3]

Developed and maintained by the 3GPP (3rd Generation Partnership Project), UMTS is a component of the International Telecommunication Union IMT-2000 standard set and compares with the CDMA2000 standard set for networks based on the competing cdmaOne technology. The technology described in UMTS is sometimes also referred to as Freedom of Mobile Multimedia Access (FOMA)[4] or 3GSM. UMTS specifies a complete network system, which includes the radio access network (UMTS Terrestrial Radio Access Network, or UTRAN), the core network (Mobile Application Part, or MAP) and the authentication of users via SIM (subscriber identity module) cards. Unlike EDGE (IMT Single-Carrier, based on GSM) and CDMA2000 (IMT Multi-Carrier), UMTS requires new base stations and new frequency allocations. UMTS has since been enhanced as High Speed Packet Access (HSPA).[5]

UMTS supports theoretical maximum data transfer rates of 42 Mbit/s when Evolved HSPA (HSPA+) is implemented in the network.[6] Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release '99 (R99) handsets (the original UMTS release), and 7.2 Mbit/s for High-Speed Downlink Packet Access (HSDPA) handsets in the downlink connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-corrected circuit-switched data channel, multiple 9.6 kbit/s channels in High-Speed Circuit-Switched Data (HSCSD), and 14.4 kbit/s for cdmaOne channels. Since 2006, UMTS networks in many countries have been or are in the process of being upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G. Currently, HSDPA enables downlink transfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with High-Speed Uplink Packet Access (HSUPA). The 3GPP LTE standard succeeds UMTS and initially provided 4G speeds of 100 Mbit/s down and 50 Mbit/s up, with scalability up to 3 Gbit/s, using a next-generation air interface technology based upon orthogonal frequency-division multiplexing.

The first national consumer UMTS networks launched in 2002 with a heavy emphasis on telco-provided mobile applications such as mobile TV and video calling. The high data speeds of UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has shown that user demand for video calls is not high, and telco-provided audio/video content has declined in popularity in favour of high-speed access to the World Wide Web, either directly on a handset or connected to a computer via Wi-Fi, Bluetooth or USB.[citation needed]

UMTS combines three different terrestrial air interfaces, GSM's Mobile Application Part (MAP) core, and the GSM family of speech codecs. The air interfaces are called UMTS Terrestrial Radio Access (UTRA).[7] All air interface options are part of ITU's IMT-2000. In the currently most popular variant for cellular mobile telephones, W-CDMA (IMT Direct Spread) is used. It is also called the "Uu interface", as it links User Equipment to the UMTS Terrestrial Radio Access Network. Note that the terms W-CDMA, TD-CDMA and TD-SCDMA are misleading.
While they suggest covering just a channel access method (namely a variant of CDMA), they are actually the common names for whole air interface standards.[8] W-CDMA (WCDMA; Wideband Code-Division Multiple Access), also known as UMTS-FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread, is an air interface standard found in 3G mobile telecommunications networks. It supports conventional cellular voice, text and MMS services, but can also carry data at high speeds, allowing mobile operators to deliver higher-bandwidth applications including streaming and broadband Internet access.[9] W-CDMA uses the DS-CDMA channel access method with a pair of 5 MHz-wide channels. In contrast, the competing CDMA2000 system uses one or more available 1.25 MHz channels for each direction of communication. W-CDMA systems are widely criticized for their large spectrum usage, which delayed deployment in countries that acted relatively slowly in allocating new frequencies specifically for 3G services (such as the United States). The specific frequency bands originally defined by the UMTS standard are 1885–2025 MHz for the mobile-to-base (uplink) and 2110–2200 MHz for the base-to-mobile (downlink). In the US, 1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already in use.[10] While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS operators use the 850 MHz (900 MHz in Europe) and/or 1900 MHz bands (independently, meaning uplink and downlink are within the same band), notably in the US by AT&T Mobility, in New Zealand by Telecom New Zealand on the XT Mobile Network and in Australia by Telstra on the Next G network. Some carriers, such as T-Mobile, use band numbers to identify the UMTS frequencies, for example Band I (2100 MHz), Band IV (1700/2100 MHz), and Band V (850 MHz). UMTS-FDD is an acronym for Universal Mobile Telecommunications System (UMTS) frequency-division duplexing (FDD), a 3GPP standardized version of UMTS networks that makes use of frequency-division duplexing for duplexing over a UMTS Terrestrial Radio Access (UTRA) air interface.[11] W-CDMA is the basis of Japan's NTT DoCoMo's FOMA service and the most commonly used member of the Universal Mobile Telecommunications System (UMTS) family, and is sometimes used as a synonym for UMTS.[12] It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users compared to most previously used time-division multiple access (TDMA) and time-division duplex (TDD) schemes. While not an evolutionary upgrade on the air side, it uses the same core network as the 2G GSM networks deployed worldwide, allowing dual-mode mobile operation along with GSM/EDGE, a feature it shares with other members of the UMTS family. In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G network FOMA. Later NTT DoCoMo submitted the specification to the International Telecommunication Union (ITU) as a candidate for the international 3G standard known as IMT-2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards, as an alternative to CDMA2000, EDGE, and the short-range DECT system. Later, W-CDMA was selected as an air interface for UMTS. As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their network was initially incompatible with UMTS.[13] However, this has since been resolved by NTT DoCoMo updating their network.
Code-Division Multiple Access communication networks have been developed by a number of companies over the years, but development of cell-phone networks based on CDMA (prior to W-CDMA) was dominated by Qualcomm, the first company to succeed in developing a practical and cost-effective CDMA implementation for consumer cell phones; its early IS-95 air interface standard evolved into the current CDMA2000 (IS-856/IS-2000) standard. Qualcomm created an experimental wideband CDMA system called CDMA2000 3x, which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network technologies into a single design for a worldwide standard air interface. Compatibility with CDMA2000 would have enabled roaming on existing networks beyond Japan, since Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with coverage in 58 countries as of 2006[update]. However, divergent requirements resulted in the W-CDMA standard being retained and deployed globally. W-CDMA has since become the dominant technology, with 457 commercial networks in 178 countries as of April 2012.[14] Several CDMA2000 operators have even converted their networks to W-CDMA for international roaming compatibility and a smooth upgrade path to LTE. Despite incompatibility with existing air interface standards, late introduction and the high upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the dominant standard. W-CDMA transmits on a pair of 5 MHz-wide radio channels, while CDMA2000 transmits on one or several pairs of 1.25 MHz radio channels. Though W-CDMA does use a direct-sequence CDMA transmission technique like CDMA2000, W-CDMA is not simply a wideband version of CDMA2000 and differs from it in many aspects. From an engineering point of view, W-CDMA provides a different balance of trade-offs between cost, capacity, performance, and density[citation needed]; it also promises reduced cost for video-phone handsets. W-CDMA may also be better suited for deployment in the very dense cities of Europe and Asia. However, hurdles remain, and cross-licensing of patents between Qualcomm and W-CDMA vendors has not eliminated possible patent issues due to the features of W-CDMA which remain covered by Qualcomm patents.[15] W-CDMA has been developed into a complete set of specifications, a detailed protocol that defines how a mobile phone communicates with the tower, how signals are modulated, and how datagrams are structured; system interfaces are specified, allowing free competition on technology elements. The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo in Japan in 2001. Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand. W-CDMA has also been adapted for use in satellite communications on the U.S. Mobile User Objective System, using geosynchronous satellites in place of cell towers. J-Phone Japan (once Vodafone and now SoftBank Mobile) soon followed by launching their own W-CDMA-based service, originally branded "Vodafone Global Standard" and claiming UMTS compatibility. The name of the service was changed to "Vodafone 3G" (now "SoftBank 3G") in December 2004. Beginning in 2003, Hutchison Whampoa gradually launched their upstart UMTS networks. Most countries have, since the ITU approved the 3G mobile service, either "auctioned" the radio frequencies to the company willing to pay the most, or conducted a "beauty contest", asking the various companies to present what they intend to commit to if awarded the licences.
This strategy has been criticised for aiming to drain the cash of operators to the brink of bankruptcy in order to honour their bids or proposals. Most licences carry a time constraint for the rollout of the service, under which a certain "coverage" must be achieved by a given date or the licence will be revoked. Vodafone launched several UMTS networks in Europe in February 2004. MobileOne of Singapore commercially launched its 3G (W-CDMA) services in February 2005, followed by New Zealand in August 2005 and Australia in October 2005. AT&T Mobility operated a UMTS network, with HSPA+, from 2005 until its shutdown in February 2022. In Canada, Rogers launched HSDPA in March 2007 in the Toronto Golden Horseshoe district on W-CDMA at 850/1900 MHz, with plans to launch the service commercially in the top 25 cities in October 2007. TeliaSonera opened W-CDMA service in Finland on October 13, 2004, with speeds up to 384 kbit/s, available only in the main cities; pricing was approximately €2/MB.[citation needed] SK Telecom and KTF, the two largest mobile phone service providers in South Korea, each started offering W-CDMA service in December 2003. Due to poor coverage and a lack of choice in handsets, the W-CDMA service barely made a dent in the Korean market, which was dominated by CDMA2000. By October 2006 both companies were covering more than 90 cities, while SK Telecom announced that it would provide nationwide coverage for its W-CDMA network in order to offer SBSM (Single Band Single Mode) handsets by the first half of 2007. KTF would thus cut funding for its CDMA2000 network development to the minimum. In Norway, Telenor introduced W-CDMA in major cities by the end of 2004, while their competitor, NetCom, followed suit a few months later. Both operators have 98% national coverage on EDGE, but Telenor also operates parallel WLAN roaming networks alongside GSM, with which the UMTS service competes. For this reason Telenor dropped support for their WLAN service in Austria (2006). Maxis Communications and Celcom, two mobile phone service providers in Malaysia, started offering W-CDMA services in 2005. In Sweden, Telia introduced W-CDMA in March 2004. UMTS-TDD, an acronym for Universal Mobile Telecommunications System (UMTS) time-division duplexing (TDD), is a 3GPP standardized version of UMTS networks that uses UTRA-TDD.[11] UTRA-TDD is a UTRA that uses time-division duplexing for duplexing.[11] While a full implementation of UMTS, it is mainly used to provide Internet access in circumstances similar to those where WiMAX might be used.[citation needed] UMTS-TDD is not directly compatible with UMTS-FDD: a device designed to use one standard cannot, unless specifically designed to, work on the other, because of the difference in air interface technologies and frequencies used.[citation needed] It is more formally known as IMT-2000 CDMA-TDD or IMT-2000 Time-Division (IMT-TD).[16][17] The two UMTS air interfaces (UTRAs) for UMTS-TDD are TD-CDMA and TD-SCDMA. Both air interfaces use a combination of two channel access methods, code-division multiple access (CDMA) and time-division multiple access (TDMA): the frequency band is divided into time slots (TDMA), which are further divided into channels using CDMA spreading codes. These air interfaces are classified as TDD because time slots can be allocated to either uplink or downlink traffic. TD-CDMA, an acronym for Time-Division Code-Division Multiple Access, is a channel access method based on using spread-spectrum multiple access (CDMA) across multiple time slots (TDMA).
TD-CDMA is the channel access method for UTRA-TDD HCR, which is an acronym for UMTS Terrestrial Radio Access - Time Division Duplex High Chip Rate.[16] UMTS-TDD's air interfaces that use the TD-CDMA channel access technique are standardized as UTRA-TDD HCR, which uses increments of 5 MHz of spectrum, each slice divided into 10 ms frames containing fifteen time slots (1500 per second).[16] The time slots (TS) are allocated in fixed percentages for downlink and uplink. TD-CDMA is used to multiplex streams from or to multiple transceivers. Unlike W-CDMA, it does not need separate frequency bands for up- and downstream, allowing deployment in tight frequency bands.[18] TD-CDMA is a part of IMT-2000, defined as IMT-TD Time-Division (IMT CDMA TDD), and is one of the three UMTS air interfaces (UTRAs), as standardized by the 3GPP in UTRA-TDD HCR. UTRA-TDD HCR is closely related to W-CDMA and provides the same types of channels where possible. UMTS's HSDPA/HSUPA enhancements are also implemented under TD-CDMA.[19] In the United States, the technology has been used for public safety and government purposes in New York City and a few other areas.[needs update][20] In Japan, IPMobile planned to provide TD-CDMA service in 2006, but the launch was delayed, the technology was changed to TD-SCDMA, and the company went bankrupt before the service officially started. Time-Division Synchronous Code-Division Multiple Access (TD-SCDMA), or UTRA TDD 1.28 Mcps low chip rate (UTRA-TDD LCR),[17][8] is an air interface[17] found in UMTS mobile telecommunications networks in China as an alternative to W-CDMA. TD-SCDMA uses the TDMA channel access method combined with an adaptive synchronous CDMA component[17] on 1.6 MHz slices of spectrum, allowing deployment in even tighter frequency bands than TD-CDMA. It is standardized by the 3GPP and also referred to as "UTRA-TDD LCR". However, the main incentive for development of this Chinese-developed standard was avoiding or reducing the license fees that have to be paid to non-Chinese patent owners. Unlike the other air interfaces, TD-SCDMA was not part of UMTS from the beginning but was added in Release 4 of the specification. Like TD-CDMA, TD-SCDMA is known as IMT CDMA TDD within IMT-2000. The term "TD-SCDMA" is misleading: while it suggests covering only a channel access method, it is actually the common name for the whole air interface specification.[8] TD-SCDMA / UMTS-TDD (LCR) networks are incompatible with W-CDMA / UMTS-FDD and TD-CDMA / UMTS-TDD (HCR) networks. TD-SCDMA was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology (CATT), Datang Telecom and Siemens in an attempt to avoid dependence on Western technology. This is likely primarily for practical reasons, since other 3G formats require the payment of patent fees to a large number of Western patent holders. TD-SCDMA proponents also claim it is better suited for densely populated areas.[17] Further, it is supposed to cover all usage scenarios, whereas W-CDMA is optimised for symmetric traffic and macro cells, while TD-CDMA is best used in low-mobility scenarios within micro or pico cells.[17] TD-SCDMA is based on spread-spectrum technology, which makes it unlikely that it will be able to completely escape the payment of license fees to Western patent holders.
The launch of a national TD-SCDMA network was initially projected for 2005,[21] but the technology only reached large-scale commercial trials, with 60,000 users across eight cities, in 2008.[22] On January 7, 2009, China granted a TD-SCDMA 3G licence to China Mobile.[23] On September 21, 2009, China Mobile officially announced that it had 1,327,000 TD-SCDMA subscribers as of the end of August 2009. TD-SCDMA is not commonly used outside of China.[24] TD-SCDMA uses TDD, in contrast to the FDD scheme used by W-CDMA. By dynamically adjusting the number of timeslots used for downlink and uplink, the system can more easily accommodate asymmetric traffic with different data rate requirements on downlink and uplink than FDD schemes can. Since it does not require paired spectrum for downlink and uplink, spectrum allocation flexibility is also increased. Using the same carrier frequency for uplink and downlink also means that the channel condition is the same in both directions, and the base station can deduce the downlink channel information from uplink channel estimates, which is helpful for the application of beamforming techniques. TD-SCDMA also uses TDMA in addition to the CDMA used in W-CDMA. This reduces the number of users in each timeslot, which reduces the implementation complexity of multiuser detection and beamforming schemes, but the non-continuous transmission also reduces coverage (because of the higher peak power needed) and mobility (because of the lower power-control frequency), and complicates radio resource management algorithms. The "S" in TD-SCDMA stands for "synchronous", which means that uplink signals are synchronized at the base station receiver, achieved by continuous timing adjustments. This reduces the interference between users of the same timeslot using different codes by improving the orthogonality between the codes, therefore increasing system capacity, at the cost of some hardware complexity in achieving uplink synchronization. On January 20, 2006, the Ministry of Information Industry of the People's Republic of China formally announced that TD-SCDMA was the country's standard for 3G mobile telecommunication. On February 15, 2006, a timeline for deployment of the network in China was announced, stating that pre-commercial trials would take place after completion of a number of test networks in selected cities. These trials ran from March to October 2006, but the results were apparently unsatisfactory. In early 2007, the Chinese government instructed the dominant cellular carrier, China Mobile, to build commercial trial networks in eight cities, and the two fixed-line carriers, China Telecom and China Netcom, to build one each in two other cities. Construction of these trial networks was scheduled to finish during the fourth quarter of 2007, but delays meant that construction was not complete until early 2008. The standard has been adopted by 3GPP since Rel-4, known as the "UTRA TDD 1.28 Mcps Option".[17] On March 28, 2008, China Mobile Group announced TD-SCDMA "commercial trials" for 60,000 test users in eight cities from April 1, 2008. Networks using the other 3G standards (W-CDMA and CDMA2000 EV-DO) had still not been launched in China, as these were delayed until TD-SCDMA was ready for commercial launch. In January 2009, the Ministry of Industry and Information Technology (MIIT) in China took the unusual step of assigning licences for three different third-generation mobile phone standards to three carriers, in a long-awaited step that was expected to prompt $41 billion in spending on new equipment.
The Chinese-developed standard, TD-SCDMA, was assigned to China Mobile, the world's biggest phone carrier by subscribers. That appeared to be an effort to make sure the new system had the financial and technical backing to succeed. Licences for the two existing 3G standards, W-CDMA and CDMA2000 1xEV-DO, were assigned to China Unicom and China Telecom, respectively. Third-generation, or 3G, technology supports Web surfing, wireless video and other services, and the start of service was expected to spur new revenue growth. The technical split by MIIT has hampered the performance of China Mobile in the 3G market, with users and China Mobile engineers alike pointing to the lack of suitable handsets to use on the network.[25] Deployment of base stations has also been slow, resulting in a lack of improvement of service for users.[26] The network connection itself has consistently been slower than that of the other two carriers, leading to a sharp decline in market share. By 2011 China Mobile had already moved its focus onto TD-LTE.[27][28] Gradual closures of TD-SCDMA stations started in 2016.[29][30] The following is a list of mobile telecommunications networks using third-generation TD-SCDMA / UMTS-TDD (LCR) technology. In Europe, CEPT allocated the 2010–2020 MHz range for a variant of UMTS-TDD designed for unlicensed, self-provided use.[33] Some telecom groups and jurisdictions have proposed withdrawing this service in favour of licensed UMTS-TDD,[34] due to lack of demand and lack of development of a UMTS-TDD air interface technology suitable for deployment in this band. Ordinary UMTS uses UTRA-FDD as an air interface and is known as UMTS-FDD. UMTS-FDD uses W-CDMA for multiple access and frequency-division duplexing, meaning that the uplink and downlink transmit on different frequencies. UMTS is usually transmitted on frequencies assigned for 1G, 2G, or 3G mobile telephone service in the countries of operation. UMTS-TDD uses time-division duplexing, allowing the uplink and downlink to share the same spectrum. This allows the operator to divide the usage of available spectrum more flexibly according to traffic patterns. For ordinary phone service, one would expect the uplink and downlink to carry approximately equal amounts of data (because every phone call needs a voice transmission in either direction), but Internet-oriented traffic is more frequently one-way. For example, when browsing a website, the user sends commands, which are short, to the server, but the server sends whole files, generally larger than those commands, in response. UMTS-TDD tends to be allocated frequencies intended for mobile/wireless Internet services rather than being used on existing cellular frequencies. This is, in part, because TDD duplexing is not normally allowed on cellular, PCS/PCN, and 3G frequencies. TDD technologies open up the usage of left-over unpaired spectrum. Europe-wide, several bands are provided either specifically for UMTS-TDD or for similar technologies: these lie between 1900 MHz and 1920 MHz, and between 2010 MHz and 2025 MHz. In several countries the 2500–2690 MHz band (also known as MMDS in the USA) has been used for UMTS-TDD deployments. Additionally, spectrum around the 3.5 GHz range has been allocated in some countries, notably Britain, in a technology-neutral environment.
In the Czech Republic UMTS-TDD is also used in a frequency range around 872 MHz.[35] UMTS-TDD has been deployed for public and/or private networks in at least nineteen countries around the world, with live systems in, amongst other countries, Australia, the Czech Republic, France, Germany, Japan, New Zealand, Botswana, South Africa, the UK, and the USA. Deployments in the US have thus far been limited. The technology has been selected for a public safety support network used by emergency responders in New York,[36] but outside of some experimental systems, notably one from Nextel, thus far the WiMAX standard appears to have gained greater traction as a general mobile Internet access system. A variety of Internet access systems exist which provide broadband-speed access to the net, including WiMAX and HIPERMAN. UMTS-TDD has the advantages of being able to use an operator's existing UMTS/GSM infrastructure, should it have one, and of including UMTS modes optimized for circuit switching should, for example, the operator want to offer telephone service. UMTS-TDD's performance is also more consistent. However, UMTS-TDD deployers often have regulatory problems with taking advantage of some of the services UMTS compatibility provides. For example, the UMTS-TDD spectrum in the UK cannot be used to provide telephone service, though the regulator OFCOM has discussed the possibility of allowing it at some point in the future. Few operators considering UMTS-TDD have existing UMTS/GSM infrastructure. Additionally, the WiMAX and HIPERMAN systems provide significantly larger bandwidths when the mobile station is near the tower. Like most mobile Internet access systems, many users who might otherwise choose UMTS-TDD will find their needs covered by the ad hoc collection of unconnected Wi-Fi access points at many restaurants and transportation hubs, and/or by Internet access already provided by their mobile phone operator. By comparison, UMTS-TDD (and systems like WiMAX) offers mobile, and more consistent, access than the former, and generally faster access than the latter. UMTS also specifies the Universal Terrestrial Radio Access Network (UTRAN), which is composed of multiple base stations, possibly using different terrestrial air interface standards and frequency bands. UMTS and GSM/EDGE can share a Core Network (CN), making UTRAN an alternative radio access network to GERAN (GSM/EDGE RAN) and allowing (mostly) transparent switching between the RANs according to available coverage and service needs. Because of this, UMTS's and GSM/EDGE's radio access networks are sometimes collectively referred to as UTRAN/GERAN. UMTS networks are often combined with GSM/EDGE, the latter of which is also a part of IMT-2000. The UE (User Equipment) interface of the RAN (Radio Access Network) primarily consists of the RRC (Radio Resource Control), PDCP (Packet Data Convergence Protocol), RLC (Radio Link Control) and MAC (Media Access Control) protocols. The RRC protocol handles connection establishment, measurements, radio bearer services, security and handover decisions. The RLC protocol primarily divides into three modes: Transparent Mode (TM), Unacknowledged Mode (UM), and Acknowledged Mode (AM). The functionality of the AM entity resembles TCP operation, whereas UM operation resembles UDP operation. In TM mode, data is sent to lower layers without adding any header to the SDU of higher layers. MAC handles the scheduling of data on the air interface depending on higher-layer (RRC) configured parameters.
The set of properties related to data transmission is called a Radio Bearer (RB). This set of properties determines the maximum amount of data allowed in a TTI (Transmission Time Interval). An RB includes RLC information and RB mapping; RB mapping determines the mapping between RB <-> logical channel <-> transport channel. Signaling messages are sent on Signaling Radio Bearers (SRBs), and data packets (either CS or PS) are sent on data RBs. RRC and NAS messages are carried on SRBs. Security includes two procedures: integrity and ciphering. Integrity validates the source of messages and ensures that no third (unknown) party on the radio interface has modified them. Ciphering ensures that no one can eavesdrop on data sent over the air interface. Both integrity and ciphering are applied to SRBs, whereas only ciphering is applied to data RBs. With Mobile Application Part, UMTS uses the same core network standard as GSM/EDGE. This allows a simple migration for existing GSM operators. However, the migration path to UMTS is still costly: while much of the core infrastructure is shared with GSM, the cost of obtaining new spectrum licenses and overlaying UMTS at existing towers is high. The CN can be connected to various backbone networks, such as the Internet or an Integrated Services Digital Network (ISDN) telephone network. UMTS (and GERAN) include the three lowest layers of the OSI model. The network layer (OSI layer 3) includes the Radio Resource Management protocol (RRM) that manages the bearer channels between the mobile terminals and the fixed network, including the handovers. A UARFCN (abbreviation for UTRA Absolute Radio Frequency Channel Number, where UTRA stands for UMTS Terrestrial Radio Access) is used to identify a frequency in the UMTS frequency bands. Typically the channel number is derived from the frequency in MHz through the formula Channel Number = Frequency × 5. However, this can only represent channels that are centered on a multiple of 200 kHz, which do not align with licensing in North America, so 3GPP added several special values for the common North American channels (a worked example of this conversion appears at the end of this passage). Over 130 licenses had been awarded to operators worldwide as of December 2004, specifying W-CDMA radio access technology that builds on GSM. In Europe, the license process occurred at the tail end of the technology bubble, and the auction mechanisms for allocation set up in some countries resulted in some extremely high prices being paid for the original 2100 MHz licenses, notably in the UK and Germany. In Germany, bidders paid a total of €50.8 billion for six licenses, two of which were subsequently abandoned and written off by their purchasers (Mobilcom and the Sonera/Telefónica consortium). It has been suggested that these huge license fees have the character of a very large tax paid on future income expected many years down the road. In any event, the high prices paid put some European telecom operators close to bankruptcy (most notably KPN). In subsequent years, some operators wrote off some or all of these license costs. Between 2007 and 2009, all three Finnish carriers began to use 900 MHz UMTS in a shared arrangement with their surrounding 2G GSM base stations for rural area coverage, a trend expected to expand over Europe in the next 1–3 years.[needs update] The 2100 MHz band (downlink around 2100 MHz and uplink around 1900 MHz) allocated for UMTS in Europe and most of Asia is already used in North America: the 1900 MHz range is used for 2G (PCS) services, and the 2100 MHz range is used for satellite communications.
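The UARFCN arithmetic just described is easy to make concrete. The sketch below implements only the general rule (the channel number is the carrier frequency counted in 200 kHz steps); the special additive North American values mentioned above are defined per band in the 3GPP tables and are not reproduced here.

```python
def uarfcn_from_frequency(freq_mhz: float) -> int:
    """General-rule UARFCN: the carrier frequency counted in 200 kHz steps."""
    channel = round(freq_mhz * 5)
    # Carriers not centred on the 200 kHz raster (common in North America)
    # need the special per-band UARFCN values defined by 3GPP instead.
    if abs(channel - freq_mhz * 5) > 1e-6:
        raise ValueError(f"{freq_mhz} MHz is not on the 200 kHz raster")
    return channel

def frequency_from_uarfcn(uarfcn: int) -> float:
    """Inverse mapping, again ignoring the special North American values."""
    return uarfcn / 5.0

# Example: a Band I downlink carrier centred at 2112.8 MHz.
assert uarfcn_from_frequency(2112.8) == 10564
assert frequency_from_uarfcn(10564) == 2112.8
```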
In North America, regulators have since freed up some of the 2100 MHz range for 3G services, together with a different range around 1700 MHz for the uplink.[needs update] AT&T Wireless launched UMTS services in the United States by the end of 2004, strictly using the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired AT&T Wireless in 2004 and has since launched UMTS in select US cities. Cingular renamed itself AT&T Mobility and rolled out[37] a UMTS network at 850 MHz in some cities to enhance its existing UMTS network at 1900 MHz, and now offers subscribers a number of dual-band UMTS 850/1900 phones. T-Mobile's rollout of UMTS in the US was originally focused on the 1700 MHz band. However, T-Mobile has been moving users from 1700 MHz to 1900 MHz (PCS) in order to reallocate the spectrum to 4G LTE services.[38] In Canada, UMTS coverage is provided on the 850 MHz and 1900 MHz bands on the Rogers and Bell-Telus networks; Bell and Telus share the network. More recently, the new providers Wind Mobile, Mobilicity and Videotron have begun operations in the 1700 MHz band. In 2008, the Australian telco Telstra replaced its existing CDMA network with a national UMTS-based 3G network, branded as NextG, operating in the 850 MHz band. Telstra currently provides UMTS service on this network, and also on the 2100 MHz UMTS network, through co-ownership of the owning and administrating company 3GIS. This company is also co-owned by Hutchison 3G Australia, and this is the primary network used by their customers. Optus is currently rolling out a 3G network operating on the 2100 MHz band in cities and most large towns, and on the 900 MHz band in regional areas. Vodafone is also building a 3G network using the 900 MHz band. In India, BSNL started its 3G services in October 2009, beginning with the larger cities and then expanding to smaller cities. The 850 MHz and 900 MHz bands provide greater coverage compared to equivalent 1700/1900/2100 MHz networks, and are best suited to regional areas where greater distances separate base station and subscriber. Carriers in South America are now also rolling out 850 MHz networks. UMTS phones (and data cards) are highly portable: they have been designed to roam easily onto other UMTS networks (if the providers have roaming agreements in place). In addition, almost all UMTS phones are UMTS/GSM dual-mode devices, so if a UMTS phone travels outside of UMTS coverage during a call, the call may be transparently handed off to available GSM coverage. Roaming charges are usually significantly higher than regular usage charges. Most UMTS licensees consider ubiquitous, transparent global roaming an important issue. To enable a high degree of interoperability, UMTS phones usually support several different frequencies in addition to their GSM fallback. Different countries support different UMTS frequency bands: Europe initially used 2100 MHz, while most carriers in the USA use 850 MHz and 1900 MHz. T-Mobile has launched a network in the US operating at 1700 MHz (uplink) / 2100 MHz (downlink), and these bands have also been adopted elsewhere in the US and in Canada and Latin America. A UMTS phone and network must support a common frequency to work together. Because of the frequencies used, early models of UMTS phones designated for the United States will likely not be operable elsewhere, and vice versa. There are now 11 different frequency combinations used around the world, including frequencies formerly used solely for 2G services.
UMTS phones can use a Universal Subscriber Identity Module, USIM (based on GSM's SIM card), and also work (including UMTS services) with GSM SIM cards. This is a global standard of identification, and enables a network to identify and authenticate the (U)SIM in the phone. Roaming agreements between networks allow calls to a customer to be redirected to them while roaming, and determine the services (and prices) available to the user. In addition to user subscriber information and authentication information, the (U)SIM provides storage space for phone book contacts. Handsets can store their data in their own memory or on the (U)SIM card (which is usually more limited in its phone book contact information). A (U)SIM can be moved to another UMTS or GSM phone, and the phone will take on the user details of the (U)SIM, meaning it is the (U)SIM (not the phone) which determines the phone number and the billing for calls made from the phone. Japan was the first country to adopt 3G technologies, and since Japanese carriers had not used GSM previously they had no need to build GSM compatibility into their handsets, so their 3G handsets were smaller than those available elsewhere. In 2002, NTT DoCoMo's FOMA 3G network was the first commercial UMTS network. Using a pre-release specification,[39] it was initially incompatible with the UMTS standard at the radio level but used standard USIM cards, meaning USIM-card-based roaming was possible (transferring the USIM card into a UMTS or GSM phone when travelling). Both NTT DoCoMo and SoftBank Mobile (which launched 3G in December 2002) now use standard UMTS. All of the major 2G phone manufacturers (that are still in business) are now manufacturers of 3G phones. The early 3G handsets and modems were specific to the frequencies required in their country, which meant they could only roam to other countries on the same 3G frequency (though they could fall back to the older GSM standard). Canada and the USA share common frequencies, as do most European countries. The article UMTS frequency bands is an overview of UMTS network frequencies around the world. Using a cellular router, PCMCIA or USB card, customers are able to access 3G broadband services, regardless of their choice of computer (such as a tablet PC or a PDA). Some software installs itself from the modem, so that in some cases no technical knowledge is required to get online. Using a phone that supports 3G and Bluetooth 2.0, multiple Bluetooth-capable laptops can be connected to the Internet. Some smartphones can also act as a mobile WLAN access point. There are very few 3G phones or modems available supporting all 3G frequencies (UMTS 850/900/1700/1900/2100 MHz). In 2010, Nokia released a range of phones with penta-band 3G coverage, including the N8 and E7. Many other phones offer more than one band, which still enables extensive roaming. For example, Apple's iPhone 4 contains a quad-band chipset operating on 850/900/1900/2100 MHz, allowing usage in the majority of countries where UMTS-FDD is deployed. The main competitor to UMTS is CDMA2000 (IMT-MC), which is developed by the 3GPP2. Unlike UMTS, CDMA2000 is an evolutionary upgrade to an existing 2G standard, cdmaOne, and is able to operate within the same frequency allocations. This and CDMA2000's narrower bandwidth requirements make it easier to deploy in existing spectra. In some, but not all, cases, existing GSM operators only have enough spectrum to implement either UMTS or GSM, not both.
For example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum available is 5 MHz in each direction; a standard UMTS system would saturate that spectrum. Where CDMA2000 is deployed, it usually co-exists with UMTS. In many markets, however, the co-existence issue is of little relevance, as legislative hurdles exist to co-deploying two standards in the same licensed slice of spectrum. Another competitor to UMTS is EDGE (IMT-SC), which is an evolutionary upgrade to the 2G GSM system, leveraging existing GSM spectrum. It is also much easier, quicker, and considerably cheaper for wireless carriers to "bolt on" EDGE functionality by upgrading their existing GSM transmission hardware to support EDGE than to install almost entirely new equipment to deliver UMTS. However, being developed by 3GPP just as UMTS is, EDGE is not a true competitor. Instead, it is used as a temporary solution preceding UMTS roll-out, or as a complement for rural areas. This is facilitated by the fact that GSM/EDGE and UMTS specifications are jointly developed and rely on the same core network, allowing dual-mode operation including vertical handovers. China's TD-SCDMA standard is often seen as a competitor, too. TD-SCDMA has been added to UMTS Release 4 as UTRA-TDD 1.28 Mcps Low Chip Rate (UTRA-TDD LCR). Unlike TD-CDMA (UTRA-TDD 3.84 Mcps High Chip Rate, UTRA-TDD HCR), which complements W-CDMA (UTRA-FDD), it is suitable for both micro and macro cells. However, the lack of vendor support is preventing it from being a real competitor. While DECT is technically capable of competing with UMTS and other cellular networks in densely populated urban areas, it has only been deployed for domestic cordless phones and private in-house networks. All of these competitors have been accepted by the ITU as part of the IMT-2000 family of 3G standards, along with UMTS-FDD. On the Internet access side, competing systems include WiMAX and Flash-OFDM. From a GSM/GPRS network, some network elements can be reused, while others, in particular elements of the GSM/GPRS radio network, cannot. These can remain in the network and be used in dual network operation, where 2G and 3G networks co-exist while network migration proceeds and new 3G terminals become available for use in the network. The UMTS network introduces new network elements that function as specified by 3GPP. The functionality of the MSC changes when moving to UMTS: in a GSM system the MSC handles all the circuit-switched operations, such as connecting A- and B-subscribers through the network, whereas in UMTS the media gateway (MGW) takes care of data transfer in circuit-switched networks, with the MSC controlling the MGW's operations. Some countries, including the United States, have allocated spectrum differently from the ITU recommendations, so that the standard bands most commonly used for UMTS (UMTS-2100) have not been available.[citation needed] In those countries, alternative bands are used, preventing the interoperability of existing UMTS-2100 equipment and requiring the design and manufacture of different equipment for use in these markets. As is the case with GSM900 today[when?], standard UMTS 2100 MHz equipment will not work in those markets. However, it appears that UMTS is not suffering as much from handset band-compatibility issues as GSM did, as many UMTS handsets are multi-band in both UMTS and GSM modes.
Penta-band (850, 900, 1700, 2100, and 1900 MHz), quad-band GSM (850, 900, 1800, and 1900 MHz) and tri-band UMTS (850, 1900, and 2100 MHz) handsets are becoming more commonplace.[40] In its early days[when?], UMTS had problems in many countries: overweight handsets with poor battery life were first to arrive on a market highly sensitive to weight and form factor.[citation needed] The Motorola A830, a debut handset on Hutchison's 3 network, weighed more than 200 grams and even featured a detachable camera to reduce handset weight. Another significant issue involved call reliability, related to problems with handover from UMTS to GSM: customers found their connections being dropped, as handovers were possible only in one direction (UMTS → GSM), with the handset only changing back to UMTS after hanging up. In most networks around the world this is no longer an issue.[citation needed] Compared to GSM, UMTS networks initially required a higher base station density. For fully fledged UMTS incorporating video on demand features, one base station needed to be set up every 1–1.5 km (0.62–0.93 mi). This was the case when only the 2100 MHz band was being used; however, with the growing use of lower-frequency bands (such as 850 and 900 MHz) this is no longer so, which has led to increasing rollout of the lower-band networks by operators since 2006.[citation needed] Even with current technologies and low-band UMTS, telephony and data over UMTS require more power than on comparable GSM networks. Apple Inc. cited[41] UMTS power consumption as the reason that the first-generation iPhone only supported EDGE. Their release of the iPhone 3G quotes talk time on UMTS as half that available when the handset is set to use GSM. Other manufacturers likewise indicate different battery lifetimes for UMTS mode compared to GSM mode. As battery and network technology improve, this issue is diminishing. As early as 2008, it was known that carrier networks can be used to surreptitiously gather user location information.[42] In August 2014, the Washington Post reported on widespread marketing of surveillance systems using Signalling System No. 7 (SS7) protocols to locate callers anywhere in the world.[42] In December 2014, news broke that SS7's own functions can be repurposed for surveillance, because of its relaxed security, in order to listen to calls in real time, to record encrypted calls and texts for later decryption, or to defraud users and cellular carriers.[43] Deutsche Telekom and Vodafone declared the same day that they had fixed gaps in their networks, but that the problem is global and can only be fixed with a telecommunication-system-wide solution.[44] The evolution of UMTS progresses according to planned releases. Each release is designed to introduce new features and improve upon existing ones.
https://en.wikipedia.org/wiki/Universal_Mobile_Telecommunications_System
The dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński.[1][2][3] The main change compared to classical rough sets is the substitution of the indiscernibility relation with a dominance relation, which permits one to deal with the inconsistencies typical of the consideration of criteria and preference-ordered decision classes. Multicriteria classification (sorting) is one of the problems considered within MCDA and can be stated as follows: given a set of objects evaluated by a set of criteria (attributes with preference-ordered domains), assign these objects to some pre-defined and preference-ordered decision classes, such that each object is assigned to exactly one class. Due to the preference ordering, improvement of an object's evaluations on the criteria should not worsen its class assignment. The sorting problem is very similar to the problem of classification; however, in the latter, the objects are evaluated by regular attributes and the decision classes are not necessarily preference-ordered. The problem of multicriteria classification is also referred to as the ordinal classification problem with monotonicity constraints and often appears in real-life applications when ordinal and monotone properties follow from the domain knowledge about the problem. As an illustrative example, consider the problem of evaluation in a high school. The director of the school wants to assign students (objects) to three classes: bad, medium and good (notice that class good is preferred to medium and medium is preferred to bad). Each student is described by three criteria: level in Physics, Mathematics and Literature, each taking one of three possible values: bad, medium and good. The criteria are preference-ordered, and improving the level in one of the subjects should not result in a worse global evaluation (class). As a more serious example, consider the classification of bank clients, from the viewpoint of bankruptcy risk, into classes safe and risky. This may involve such characteristics as "return on equity (ROE)", "return on investment (ROI)" and "return on sales (ROS)". The domains of these attributes are not simply ordered but involve a preference order since, from the viewpoint of bank managers, greater values of ROE, ROI or ROS are better for clients being analysed for bankruptcy risk. Thus, these attributes are criteria. Neglecting this information in knowledge discovery may lead to wrong conclusions. In DRSA, data are often presented using a particular form of decision table. Formally, a DRSA decision table is a 4-tuple $S = \langle U, Q, V, f \rangle$, where $U$ is a finite set of objects, $Q$ is a finite set of criteria, $V = \bigcup_{q \in Q} V_q$, where $V_q$ is the domain of the criterion $q$, and $f \colon U \times Q \to V$ is an information function such that $f(x,q) \in V_q$ for every $(x,q) \in U \times Q$. The set $Q$ is divided into condition criteria (a set $C \neq \emptyset$) and the decision criterion (class) $d$. Notice that $f(x,q)$ is an evaluation of object $x$ on criterion $q \in C$, while $f(x,d)$ is the class assignment (decision value) of the object. An example of a decision table is shown in Table 1 below.
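Table 1 did not survive in this copy of the article; as a stand-in, the sketch below encodes a hypothetical student table of the kind just described directly in Python. The object names (x1–x8) and all grades are invented (arranged so that x4 dominates x6 yet receives a worse class, the kind of inconsistency discussed in the worked example later in this article), and grades are mapped to integers so that the outranking relation becomes the ordinary >= order.

```python
# Grades encoded so that "better" is numerically greater.
BAD, MEDIUM, GOOD = 1, 2, 3

# Hypothetical DRSA decision table: U = objects, C = condition criteria,
# "class" = the decision criterion d. All values are invented.
U = {
    "x1": {"Math": GOOD,   "Physics": MEDIUM, "Literature": BAD,    "class": MEDIUM},
    "x2": {"Math": MEDIUM, "Physics": MEDIUM, "Literature": BAD,    "class": MEDIUM},
    "x3": {"Math": BAD,    "Physics": MEDIUM, "Literature": MEDIUM, "class": MEDIUM},
    "x4": {"Math": GOOD,   "Physics": GOOD,   "Literature": MEDIUM, "class": MEDIUM},
    "x5": {"Math": GOOD,   "Physics": MEDIUM, "Literature": GOOD,   "class": GOOD},
    "x6": {"Math": MEDIUM, "Physics": MEDIUM, "Literature": MEDIUM, "class": GOOD},
    "x7": {"Math": BAD,    "Physics": BAD,    "Literature": BAD,    "class": BAD},
    "x8": {"Math": BAD,    "Physics": MEDIUM, "Literature": BAD,    "class": BAD},
}
C = ("Math", "Physics", "Literature")
```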
It is assumed that the domain of a criterion $q \in Q$ is completely preordered by an outranking relation $\succeq_q$; $x \succeq_q y$ means that $x$ is at least as good as (outranks) $y$ with respect to the criterion $q$. Without loss of generality, we assume that the domain of $q$ is a subset of the reals, $V_q \subseteq \mathbb{R}$, and that the outranking relation is the simple order $\geq$ between real numbers, so that the following relation holds: $x \succeq_q y \iff f(x,q) \geq f(y,q)$. This relation is straightforward for a gain-type ("the more, the better") criterion, e.g. company profit. For a cost-type ("the less, the better") criterion, e.g. product price, this relation can be satisfied by negating the values from $V_q$. Let $T = \{1, \ldots, n\}$. The domain of the decision criterion, $V_d$, consists of $n$ elements (without loss of generality we assume $V_d = T$) and induces a partition of $U$ into $n$ classes $\mathbf{Cl} = \{Cl_t, t \in T\}$, where $Cl_t = \{x \in U : f(x,d) = t\}$. Each object $x \in U$ is assigned to one and only one class $Cl_t$, $t \in T$. The classes are preference-ordered according to an increasing order of class indices, i.e. for all $r, s \in T$ such that $r \geq s$, the objects from $Cl_r$ are strictly preferred to the objects from $Cl_s$. For this reason, we can consider the upward and downward unions of classes, defined respectively as $Cl_t^{\geq} = \bigcup_{s \geq t} Cl_s$ and $Cl_t^{\leq} = \bigcup_{s \leq t} Cl_s$, for $t \in T$. We say that $x$ dominates $y$ with respect to $P \subseteq C$, denoted by $x D_P y$, if $x$ is at least as good as $y$ on every criterion from $P$: $x \succeq_q y$ for all $q \in P$. For each $P \subseteq C$, the dominance relation $D_P$ is reflexive and transitive, i.e. it is a partial preorder. Given $P \subseteq C$ and $x \in U$, let $D_P^{+}(x) = \{y \in U : y D_P x\}$ and $D_P^{-}(x) = \{y \in U : x D_P y\}$ represent the P-dominating set and the P-dominated set with respect to $x \in U$, respectively. The key idea of the rough set philosophy is the approximation of one knowledge by another knowledge. In DRSA, the knowledge being approximated is the collection of upward and downward unions of decision classes, and the "granules of knowledge" used for approximation are the P-dominating and P-dominated sets. The P-lower and P-upper approximations of $Cl_t^{\geq}$, $t \in T$, with respect to $P \subseteq C$, denoted $\underline{P}(Cl_t^{\geq})$ and $\overline{P}(Cl_t^{\geq})$ respectively, are defined as $\underline{P}(Cl_t^{\geq}) = \{x \in U : D_P^{+}(x) \subseteq Cl_t^{\geq}\}$ and $\overline{P}(Cl_t^{\geq}) = \{x \in U : D_P^{-}(x) \cap Cl_t^{\geq} \neq \emptyset\}$. Analogously, the P-lower and P-upper approximations of $Cl_t^{\leq}$, $t \in T$, with respect to $P \subseteq C$, denoted $\underline{P}(Cl_t^{\leq})$ and $\overline{P}(Cl_t^{\leq})$ respectively, are defined as $\underline{P}(Cl_t^{\leq}) = \{x \in U : D_P^{-}(x) \subseteq Cl_t^{\leq}\}$ and $\overline{P}(Cl_t^{\leq}) = \{x \in U : D_P^{+}(x) \cap Cl_t^{\leq} \neq \emptyset\}$. Lower approximations group the objects which certainly belong to the class union $Cl_t^{\geq}$ (respectively $Cl_t^{\leq}$).
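Continuing the invented table from the previous sketch, the dominance relation, the dominating/dominated sets and the class unions translate directly into set comprehensions (dominates, D_plus and D_minus are illustrative helper names, not from the article):

```python
def dominates(x, y, P=C):
    # x D_P y: x is at least as good as y on every criterion in P.
    return all(U[x][q] >= U[y][q] for q in P)

def D_plus(x, P=C):
    # P-dominating set of x: the objects that dominate x.
    return {y for y in U if dominates(y, x, P)}

def D_minus(x, P=C):
    # P-dominated set of x: the objects that x dominates.
    return {y for y in U if dominates(x, y, P)}

def upward_union(t):
    # Cl_t>=: objects whose class is at least t.
    return {x for x in U if U[x]["class"] >= t}

def downward_union(t):
    # Cl_t<=: objects whose class is at most t.
    return {x for x in U if U[x]["class"] <= t}

print(D_plus("x6"))        # {'x4', 'x5', 'x6'}; x4 has a worse class: an inconsistency
print(upward_union(GOOD))  # {'x5', 'x6'}
```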
The certainty of the lower approximation comes from the fact that an object $x \in U$ belongs to $\underline{P}(Cl_t^{\geq})$ (respectively $\underline{P}(Cl_t^{\leq})$) if no other object in $U$ contradicts this claim, i.e. every object $y \in U$ which P-dominates $x$ also belongs to the class union $Cl_t^{\geq}$ (respectively, every object P-dominated by $x$ belongs to $Cl_t^{\leq}$). Upper approximations group the objects which could belong to $Cl_t^{\geq}$ (respectively $Cl_t^{\leq}$), since an object $x \in U$ belongs to the upper approximation $\overline{P}(Cl_t^{\geq})$ (respectively $\overline{P}(Cl_t^{\leq})$) if there exists an object $y \in U$ P-dominated by $x$ (respectively P-dominating $x$) that belongs to the class union $Cl_t^{\geq}$ (respectively $Cl_t^{\leq}$). The P-lower and P-upper approximations defined above satisfy the following inclusion properties for all $t \in T$ and for any $P \subseteq C$: $\underline{P}(Cl_t^{\geq}) \subseteq Cl_t^{\geq} \subseteq \overline{P}(Cl_t^{\geq})$ and $\underline{P}(Cl_t^{\leq}) \subseteq Cl_t^{\leq} \subseteq \overline{P}(Cl_t^{\leq})$. The P-boundaries (P-doubtful regions) of $Cl_t^{\geq}$ and $Cl_t^{\leq}$ are defined as $Bn_P(Cl_t^{\geq}) = \overline{P}(Cl_t^{\geq}) - \underline{P}(Cl_t^{\geq})$ and $Bn_P(Cl_t^{\leq}) = \overline{P}(Cl_t^{\leq}) - \underline{P}(Cl_t^{\leq})$. The ratio $\gamma_P(\mathbf{Cl}) = \left|U \setminus \left(\bigcup_{t \in T} Bn_P(Cl_t^{\geq}) \cup \bigcup_{t \in T} Bn_P(Cl_t^{\leq})\right)\right| / |U|$ defines the quality of approximation of the partition $\mathbf{Cl}$ into classes by means of the set of criteria $P$. This ratio expresses the relation between all the P-correctly classified objects and all the objects in the table. Every minimal subset $P \subseteq C$ such that $\gamma_P(\mathbf{Cl}) = \gamma_C(\mathbf{Cl})$ is called a reduct of $C$ and is denoted by $RED_{\mathbf{Cl}}(P)$. A decision table may have more than one reduct. The intersection of all reducts is known as the core. On the basis of the approximations obtained by means of the dominance relations, it is possible to induce a generalized description of the preferential information contained in the decision table in terms of decision rules. The decision rules are expressions of the form if [condition] then [consequent], which represent a form of dependency between condition criteria and decision criteria. Procedures for generating decision rules from a decision table use an inductive learning principle. We can distinguish three types of rules: certain, possible and approximate. Certain rules are generated from lower approximations of unions of classes; possible rules are generated from upper approximations of unions of classes; and approximate rules are generated from boundary regions. Certain rules have the following form: if $f(x,q_1) \geq r_1$ and $f(x,q_2) \geq r_2$ and $\ldots$ $f(x,q_p) \geq r_p$ then $x \in Cl_t^{\geq}$; or: if $f(x,q_1) \leq r_1$ and $f(x,q_2) \leq r_2$ and $\ldots$ $f(x,q_p) \leq r_p$ then $x \in Cl_t^{\leq}$. Possible rules have a similar syntax; however, the consequent part of the rule has the form "$x$ could belong to $Cl_t^{\geq}$" or "$x$ could belong to $Cl_t^{\leq}$".
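Before moving on to the remaining rule type, the four approximations defined above translate as follows, again using the helpers from the previous sketches and the same invented table:

```python
def lower_up(t, P=C):
    # P-lower approximation of Cl_t>=: x belongs iff everything that
    # dominates x already lies in the upward union.
    return {x for x in U if D_plus(x, P) <= upward_union(t)}

def upper_up(t, P=C):
    # P-upper approximation of Cl_t>=: x belongs iff x dominates some
    # member of the upward union.
    return {x for x in U if D_minus(x, P) & upward_union(t)}

def lower_down(t, P=C):
    return {x for x in U if D_minus(x, P) <= downward_union(t)}

def upper_down(t, P=C):
    return {x for x in U if D_plus(x, P) & downward_union(t)}

print(lower_up(GOOD))  # {'x5'}: x6 is excluded because x4 dominates it
print(upper_up(GOOD))  # {'x4', 'x5', 'x6'}: the boundary region is {'x4', 'x6'}
```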
Finally, approximate rules (the third rule type) have the syntax: if $f(x,q_1) \geq r_1$ and $f(x,q_2) \geq r_2$ and $\ldots$ $f(x,q_k) \geq r_k$ and $f(x,q_{k+1}) \leq r_{k+1}$ and $f(x,q_{k+2}) \leq r_{k+2}$ and $\ldots$ $f(x,q_p) \leq r_p$ then $x \in Cl_s \cup Cl_{s+1} \cup \ldots \cup Cl_t$. The certain, possible and approximate rules represent certain, possible and ambiguous knowledge extracted from the decision table. Each decision rule should be minimal. Since a decision rule is an implication, by a minimal decision rule we understand an implication such that there is no other implication with an antecedent of at least the same weakness (in other words, a rule using a subset of the elementary conditions and/or weaker elementary conditions) and a consequent of at least the same strength (in other words, a rule assigning objects to the same union or sub-union of classes). A set of decision rules is complete if it is able to cover all objects from the decision table in such a way that consistent objects are re-classified to their original classes and inconsistent objects are classified to clusters of classes referring to this inconsistency. We call minimal each set of decision rules that is complete and non-redundant, i.e. exclusion of any rule from the set makes it incomplete. One of three induction strategies can be adopted to obtain a set of decision rules.[4] The most popular rule induction algorithm for the dominance-based rough set approach is DOMLEM,[5] which generates a minimal set of rules. Consider the following problem of high school students' evaluations: each object (student) is described by three criteria $q_1, q_2, q_3$, related to the levels in Mathematics, Physics and Literature, respectively. According to the decision attribute, the students are divided into three preference-ordered classes: $Cl_1 = \{bad\}$, $Cl_2 = \{medium\}$ and $Cl_3 = \{good\}$. Thus, the upward and downward unions of these classes were approximated. Notice that the evaluations of objects $x_4$ and $x_6$ are inconsistent, because $x_4$ has better evaluations on all three criteria than $x_6$ but a worse global score. Therefore, the lower approximations of the class unions consist of the remaining, consistent objects. Only the classes $Cl_1^{\leq}$ and $Cl_2^{\geq}$ cannot be approximated precisely; for these, the upper approximations are strictly larger than the lower ones and the boundary regions are non-empty. Of course, since $Cl_2^{\leq}$ and $Cl_3^{\geq}$ are approximated precisely, we have $\overline{P}(Cl_2^{\leq}) = Cl_2^{\leq}$, $\overline{P}(Cl_3^{\geq}) = Cl_3^{\geq}$ and $Bn_P(Cl_2^{\leq}) = Bn_P(Cl_3^{\geq}) = \emptyset$. A minimal set of 10 rules can be induced from the decision table; the last rule is approximate, while the rest are certain. The other two problems considered within multi-criteria decision analysis, multicriteria choice and ranking problems, can also be solved using the dominance-based rough set approach.
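The rule formats above can be checked mechanically against a table. Continuing the invented student table and the helper functions from the previous sketches, the snippet below tests one illustrative certain rule of the upward type; the rule and its thresholds are made up here and are not among the ten rules referred to in the article.

```python
def satisfies_up_rule(x, conditions):
    # conditions: {criterion: threshold}; an upward (>=) antecedent.
    return all(U[x][q] >= r for q, r in conditions.items())

# Antecedent: f(x, Math) >= good and f(x, Physics) >= medium.
rule = {"Math": GOOD, "Physics": MEDIUM}
covered = {x for x in U if satisfies_up_rule(x, rule)}

# A certain rule for Cl_2>= must cover only objects of the lower
# approximation of that upward union.
print(covered)                          # {'x1', 'x4', 'x5'}
print(covered <= lower_up(MEDIUM))      # True: the rule is certain
```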
Solving choice and ranking problems with DRSA is done by converting the decision table into a pairwise comparison table (PCT).[1] The definitions of rough approximations are based on a strict application of the dominance principle. However, when defining non-ambiguous objects, it is reasonable to accept a limited proportion of negative examples, particularly for large decision tables. The extended version of DRSA built on this idea is called the Variable-Consistency DRSA model (VC-DRSA).[6] In real-life data, particularly for large datasets, the notions of rough approximations were found to be excessively restrictive. Therefore an extension of DRSA based on a stochastic model (Stochastic DRSA), which allows inconsistencies to some degree, has been introduced.[7] Having stated the probabilistic model for ordinal classification problems with monotonicity constraints, the concepts of lower approximations are extended to the stochastic case. The method is based on estimating the conditional probabilities using the nonparametric maximum likelihood method, which leads to the problem of isotonic regression. Stochastic dominance-based rough sets can also be regarded as a sort of variable-consistency model. 4eMka2 is a decision support system for multiple criteria classification problems based on dominance-based rough sets (DRSA). JAMM is a much more advanced successor of 4eMka2. Both systems are freely available for non-profit purposes on the Laboratory of Intelligent Decision Support Systems (IDSS) website.
https://en.wikipedia.org/wiki/Dominance-based_rough_set_approach
SPARQL(pronounced "sparkle", arecursive acronym[2]forSPARQL Protocol and RDF Query Language) is anRDF query language—that is, asemanticquery languagefordatabases—able to retrieve and manipulate data stored inResource Description Framework (RDF)format.[3][4]It was made a standard by theRDF Data Access Working Group(DAWG) of theWorld Wide Web Consortium, and is recognized as one of the key technologies of thesemantic web. On 15 January 2008, SPARQL 1.0 was acknowledged byW3Cas an official recommendation,[5][6]and SPARQL 1.1 in March, 2013.[7] SPARQL allows for a query to consist oftriple patterns,conjunctions,disjunctions, and optionalpatterns.[10] Implementations for multipleprogramming languagesexist.[11]There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer.[12]In addition, tools exist to translate SPARQL queries to other query languages, for example toSQL[13]and toXQuery.[14] SPARQL allows users to write queries that follow theRDFspecification of theW3C. Thus, the entire dataset is "subject-predicate-object" triples. Subjects and predicates are always URI identifiers, but objects can be URIs or literal values. This single physical schema of 3 "columns" is hypernormalized in that what would be 1 relational record with (for example) 4 columns is now 4 triples with the subject being repeated over and over, the predicate essentially being the column name, and the object being the column value. Although this seems unwieldy, the SPARQL syntax offers these features: 1. Subjects and Objects can be used to find the other including transitively. Below is a set of triples. It should be clear thatex:sw001andex:sw002link toex:sw003, which itself has links: In SPARQL, the first time a variable is encountered in the expression pipeline, it is populated with result. The second and subsequent times it is seen, it is used as an input. If we assign ("bind") the URIex:sw003to the?targetsvariable, then it drives a result into?src; this tells us all the things that linktoex:sw003(upstream dependency): But with a simple switch of the binding variable, the behavior is reversed. This will produce all the things upon whichex:sw003depends (downstream dependency): Even more attractive is that we can easily instruct SPARQL to transitively follow the path: Bound variables can therefore also be lists and will be operated upon without complicated syntax. The effect of this is similar to the followingpseudocode: 2. SPARQL expressions are a pipeline Unlike SQL which has subqueries and CTEs, SPARQL is much more like MongoDB or SPARK. Expressions are evaluated exactly in the order they are declared including filtering and joining of data. The programming model becomes what a SQL statement would be like with multiple WHERE clauses. The combination of list-aware subjects and objects plus a pipeline approach can yield extremely expressive queries spanning many different domains of data. JOIN as used in RDBMS and understanding the dynamics of the JOIN (e.g. what column in what table is suitable to join to another, inner vs. outer, etc.) is not relevant in SPARQL (and in some ways simpler) because objects, if an URI and not a literal, implicity can be usedonlyto find a subject. Here is a more comprehensive example that illustrates the pipeline using some syntax shortcuts. Unlike relational databases, the object column is heterogeneous: the object data type, if not an URI, is usually implied (or specified in theontology) by thepredicatevalue. 
Literal nodes carry type information consistent with the underlying XSD namespace, including signed and unsigned short and long integers, single- and double-precision floats, datetime, penny-precise decimal, Boolean, and string. Triple-store implementations on traditional relational databases will typically store the value as a string, with a fourth column identifying the real type. Polymorphic databases such as MongoDB and SQLite can store the native value directly in the object field. Thus, SPARQL provides a full set of analytic query operations such as JOIN, SORT, and AGGREGATE for data whose schema is intrinsically part of the data rather than requiring a separate schema definition. However, schema information (the ontology) is often provided externally, to allow joining of different datasets unambiguously. In addition, SPARQL provides specific graph traversal syntax for data that can be thought of as a graph.

The example below demonstrates a simple query that leverages the ontology definition foaf ("friend of a friend"). Specifically, the query returns names and emails of every person in the dataset (a reconstruction is given in the sketch below). This query joins all of the triples with a matching subject, where the type predicate, "a", is a person (foaf:Person), and the person has one or more names (foaf:name) and mailboxes (foaf:mbox). For the sake of readability, the author of this query chose to reference the subject using the variable name "?person". Since the first element of the triple is always the subject, the author could just as easily have used any variable name, such as "?subj" or "?x". Whatever name is chosen, it must be the same on each line of the query to signify that the query engine is to join triples with the same subject. The result of the join is a set of rows: ?person, ?name, ?email. The query returns the ?name and ?email because ?person is often a complex URI rather than a human-friendly string. Note that any ?person may have multiple mailboxes, so in the returned set, a ?name row may appear multiple times, once for each mailbox.

An important consideration in SPARQL is that when lookup conditions are not met in the pipeline for terminal entities like ?email, the whole row is excluded, unlike SQL, where typically a null column would be returned. The query above will return only those ?person for whom both at least one ?name and at least one ?email can be found; a ?person with no email would be excluded. To align the output with that expected from an equivalent SQL query, the OPTIONAL keyword is required (also shown in the sketch below).

This query can be distributed to multiple SPARQL endpoints (services that accept SPARQL queries and return results), computed, and the results gathered, a procedure known as federated query. Whether in a federated manner or locally, additional triple definitions in the query could allow joins to different subject types, such as automobiles, to allow simple queries, for example, to return a list of names and emails for people who drive automobiles with high fuel efficiency.

In the case of queries that read data from the database, the SPARQL language specifies four different query variations for different purposes. Each of these query forms takes a WHERE block to restrict the query, although, in the case of the DESCRIBE query, the WHERE is optional. SPARQL 1.1 specifies a language for updating the database with several new query forms.[15]

Another SPARQL query example models the question "What are all the country capitals in Africa?" Variables are indicated by a "?" or "$" prefix.
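A sketch of the name/email query and its OPTIONAL variant, again assuming rdflib; the two-person sample data is invented so the difference between the two forms is visible:

from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
_:a a foaf:Person ; foaf:name "Alice" ; foaf:mbox <mailto:alice@example.org> .
_:b a foaf:Person ; foaf:name "Bob" .
""")

# Inner-join behavior: Bob is dropped entirely because he has no foaf:mbox.
strict = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?email WHERE {
    ?person a foaf:Person .
    ?person foaf:name ?name .
    ?person foaf:mbox ?email .
}
"""

# OPTIONAL keeps Bob, with an unbound ?email (the SQL outer-join analogue).
relaxed = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?email WHERE {
    ?person a foaf:Person .
    ?person foaf:name ?name .
    OPTIONAL { ?person foaf:mbox ?email }
}
"""

for q in (strict, relaxed):
    # result order is not guaranteed; None marks an unbound ?email
    print([(str(r.name), r.email and str(r.email)) for r in g.query(q)])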
Bindings for ?capital and ?country will be returned. When a triple ends with a semicolon, the subject from this triple implicitly completes the following pair to form an entire triple; for example, ex:isCapitalOf ?y is short for ?x ex:isCapitalOf ?y. The SPARQL query processor will search for sets of triples that match these four triple patterns, binding the variables in the query to the corresponding parts of each triple. Important to note here is the "property orientation" (class matches can be conducted solely through class attributes or properties; see duck typing). To make queries concise, SPARQL allows the definition of prefixes and base URIs in a fashion similar to Turtle. In this query, the prefix "ex" stands for "http://example.com/exampleOntology#". (A runnable reconstruction appears in the sketch below.)

SPARQL also has native dateTime operations; one can, for example, write a query that returns all pieces of software where the end-of-life date is at least 1000 days after the release date and the release year is 2020 or later.

GeoSPARQL defines filter functions for geographic information system (GIS) queries using well-understood OGC standards (GML, WKT, etc.).

SPARUL is another extension to SPARQL. It enables the RDF store to be updated with this declarative query language, by adding INSERT and DELETE methods.

XSPARQL is an integrated query language combining XQuery with SPARQL to query both XML and RDF data sources at once.[16]

Open-source, reference SPARQL implementations exist; see List of SPARQL implementations for more comprehensive coverage, including triplestores, APIs, and other storage systems that have implemented the SPARQL standard.
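A sketch of the capitals-in-Africa query, using the article's ex: prefix; the ontology terms (ex:cityname, ex:isCapitalOf, ex:countryname, ex:isInContinent) and the sample data are assumptions chosen so the query can actually run under rdflib:

from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
@prefix ex: <http://example.com/exampleOntology#> .
ex:nairobi ex:cityname "Nairobi" ; ex:isCapitalOf ex:kenya .
ex:kenya   ex:countryname "Kenya" ; ex:isInContinent ex:africa .
""")

# The semicolon shorthand reuses the subject of the preceding triple.
q = """
PREFIX ex: <http://example.com/exampleOntology#>
SELECT ?capital ?country WHERE {
    ?x ex:cityname ?capital ;
       ex:isCapitalOf ?y .
    ?y ex:countryname ?country ;
       ex:isInContinent ex:africa .
}
"""
for row in g.query(q):
    print(row.capital, row.country)   # Nairobi Kenya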
https://en.wikipedia.org/wiki/SPARQL
Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical (i.e. suggesting a theory to explain observed facts) rather than mathematical (i.e. proving a property for all members of a well-ordered set) induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples. Inductive logic programming is particularly useful in bioinformatics and natural language processing.

Building on earlier work on inductive inference, Gordon Plotkin was the first to formalise induction in a clausal setting around 1970, adopting an approach of generalising from examples.[1][2] In 1981, Ehud Shapiro introduced several ideas that would shape the field in his new approach of model inference, an algorithm employing refinement and backtracing to search for a complete axiomatisation of given examples.[1][3] His first implementation was the Model Inference System in 1981:[4][5] a Prolog program that inductively inferred Horn clause logic programs from positive and negative examples.[1] The term Inductive Logic Programming was first introduced in a paper by Stephen Muggleton in 1990, defined as the intersection of machine learning and logic programming.[1] Muggleton and Wray Buntine introduced predicate invention and inverse resolution in 1988.[1][6]

Several inductive logic programming systems that proved influential appeared in the early 1990s. FOIL, introduced by Ross Quinlan in 1990,[7] was based on upgrading the propositional learning algorithms AQ and ID3.[8] Golem, introduced by Muggleton and Feng in 1990, went back to a restricted form of Plotkin's least generalisation algorithm.[8][9] The Progol system, introduced by Muggleton in 1995, first implemented inverse entailment, and inspired many later systems.[8][10][11] Aleph, a descendant of Progol introduced by Ashwin Srinivasan in 2001, is still one of the most widely used systems as of 2022.[10]

At around the same time, the first practical applications emerged, particularly in bioinformatics, where by 2000 inductive logic programming had been successfully applied to drug design, carcinogenicity and mutagenicity prediction, and elucidation of the structure and function of proteins.[12] Unlike the focus on automatic programming inherent in the early work, these fields used inductive logic programming techniques from a viewpoint of relational data mining. The success of those initial applications and the lack of progress in recovering larger traditional logic programs shaped the focus of the field.[13]

Recently, classical tasks from automated programming have moved back into focus, as the introduction of meta-interpretative learning makes predicate invention and learning recursive programs more feasible.
This technique was pioneered with the Metagol system introduced by Muggleton, Dianhuan Lin, Niels Pahlavi and Alireza Tamaddoni-Nezhad in 2014.[14] This allows ILP systems to work with fewer examples, and brought successes in learning string transformation programs, answer set grammars and general algorithms.[15]

Inductive logic programming has adopted several different learning settings, the most common of which are learning from entailment and learning from interpretations.[16] In both cases, the input is provided in the form of background knowledge $B$, a logical theory (commonly in the form of clauses used in logic programming), as well as positive and negative examples, denoted $E^{+}$ and $E^{-}$ respectively. The output is given as a hypothesis $H$, itself a logical theory that typically consists of one or more clauses. The two settings differ in the format of examples presented.

As of 2022, learning from entailment is by far the most popular setting for inductive logic programming.[16] In this setting, the positive and negative examples are given as finite sets $E^{+}$ and $E^{-}$ of positive and negated ground literals, respectively. A correct hypothesis $H$ is a set of clauses satisfying the following requirements, where the turnstile symbol $\models$ stands for logical entailment:[16][17][18]

$$\begin{array}{llll}\text{Completeness:} & B \cup H & \models & E^{+} \\ \text{Consistency:} & B \cup H \cup E^{-} & \not\models & \textit{false}\end{array}$$

Completeness requires any generated hypothesis $H$ to explain all positive examples $E^{+}$, and consistency forbids generation of any hypothesis $H$ that is inconsistent with the negative examples $E^{-}$, both given the background knowledge $B$.

In Muggleton's setting of concept learning,[19] "completeness" is referred to as "sufficiency", and "consistency" as "strong consistency". Two further conditions are added. "Necessity", which postulates that $B$ does not entail $E^{+}$, does not impose a restriction on $H$, but forbids the generation of any hypothesis as long as the positive facts are explainable without it. "Weak consistency", which states that no contradiction can be derived from $B \land H$, forbids the generation of any hypothesis $H$ that contradicts the background knowledge $B$. Weak consistency is implied by strong consistency; if no negative examples are given, both requirements coincide. Weak consistency is particularly important in the case of noisy data, where completeness and strong consistency cannot be guaranteed.[19]

In learning from interpretations, the positive and negative examples are given as a set of complete or partial Herbrand structures, each of which is itself a finite set of ground literals. Such a structure $e$ is said to be a model of the set of clauses $B \cup H$ if for any substitution $\theta$ and any clause $\mathrm{head} \leftarrow \mathrm{body}$ in $B \cup H$ such that $\mathrm{body}\,\theta \subseteq e$, $\mathrm{head}\,\theta \subseteq e$ also holds.
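A toy, ground-level sketch of the learning-from-entailment check: given background knowledge B, a candidate hypothesis H, and example sets E+ and E-, completeness and consistency are verified by forward chaining to a least fixpoint. The family-relations vocabulary and the (ground) hypothesis clause are invented for illustration:

def chain(facts, rules):
    """Least fixpoint of ground rules, given as (head, frozenset(body)) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

B = {("parent", "ann", "bob"), ("parent", "bob", "carl")}
# Hypothesis H: a single ground instance of a grandparent rule.
H = [(("grand", "ann", "carl"),
      frozenset({("parent", "ann", "bob"), ("parent", "bob", "carl")}))]
E_pos = {("grand", "ann", "carl")}
E_neg = {("grand", "bob", "ann")}

closure = chain(B, H)
complete = E_pos <= closure            # B union H entails E+
consistent = not (E_neg & closure)     # B union H union E- is consistent
print(complete, consistent)            # True True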
The goal is then to output a hypothesis that is complete, meaning every positive example is a model of $B \cup H$, and consistent, meaning that no negative example is a model of $B \cup H$.[16]

An inductive logic programming system is a program that takes as input logic theories $B, E^{+}, E^{-}$ and outputs a correct hypothesis $H$ with respect to those theories. A system is complete if and only if, for any input logic theories $B, E^{+}, E^{-}$, any correct hypothesis $H$ with respect to these input theories can be found with its hypothesis search procedure. Inductive logic programming systems can be roughly divided into two classes: search-based and meta-interpretative systems.

Search-based systems exploit the fact that the space of possible clauses forms a complete lattice under the subsumption relation, where one clause $C_1$ subsumes another clause $C_2$ if there is a substitution $\theta$ such that $C_1\theta$, the result of applying $\theta$ to $C_1$, is a subset of $C_2$. This lattice can be traversed either bottom-up or top-down.

Bottom-up methods to search the subsumption lattice have been investigated since Plotkin's first work on formalising induction in clausal logic in 1970.[1][20] Techniques used include least general generalisation, based on anti-unification, and inverse resolution, based on inverting the resolution inference rule.

A least general generalisation algorithm takes as input two clauses $C_1$ and $C_2$ and outputs their least general generalisation, that is, a clause $C$ that subsumes $C_1$ and $C_2$, and that is subsumed by every other clause that subsumes $C_1$ and $C_2$. The least general generalisation can be computed by first computing all selections from $C_1$ and $C_2$, that is, pairs of literals $(L, M) \in C_1 \times C_2$ sharing the same predicate symbol and negated/unnegated status. Then, the least general generalisation is obtained as the disjunction of the least general generalisations of the individual selections, which can be obtained by first-order syntactical anti-unification.[21]

To account for background knowledge, inductive logic programming systems employ relative least general generalisations, which are defined in terms of subsumption relative to a background theory. In general, such relative least general generalisations are not guaranteed to exist; however, if the background theory $B$ is a finite set of ground literals, then the negation of $B$ is itself a clause. In this case, a relative least general generalisation can be computed by disjoining the negation of $B$ with both $C_1$ and $C_2$ and then computing their least general generalisation as before.[22]

Relative least general generalisations are the foundation of the bottom-up system Golem.[8][9]

Inverse resolution is an inductive reasoning technique that involves inverting the resolution operator. Inverse resolution takes information about the resolvent of a resolution step to compute possible resolving clauses. Two types of inverse resolution operator are in use in inductive logic programming: V-operators and W-operators.
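A minimal sketch of first-order anti-unification on atoms, the building block of the least general generalisation described above. The term representation (nested tuples for compound terms, strings for constants) is an invented convention; the key property is that the same pair of disagreeing subterms is always generalised to the same variable:

def anti_unify(s, t, table):
    # identical subterms generalise to themselves
    if s == t:
        return s
    # compound terms with the same functor and arity: recurse argument-wise
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(anti_unify(a, b, table)
                               for a, b in zip(s[1:], t[1:]))
    # disagreement: reuse one variable per distinct pair of subterms
    if (s, t) not in table:
        table[(s, t)] = f"X{len(table) + 1}"
    return table[(s, t)]

# lgg of p(f(a), a) and p(f(b), b) is p(f(X1), X1), not p(f(X1), X2)
table = {}
print(anti_unify(("p", ("f", "a"), "a"),
                 ("p", ("f", "b"), "b"), table))
# ('p', ('f', 'X1'), 'X1')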
A V-operator takes clauses $R$ and $C_1$ as input and returns a clause $C_2$ such that $R$ is the resolvent of $C_1$ and $C_2$. A W-operator takes two clauses $R_1$ and $R_2$ and returns three clauses $C_1$, $C_2$ and $C_3$ such that $R_1$ is the resolvent of $C_1$ and $C_2$, and $R_2$ is the resolvent of $C_2$ and $C_3$.[23]

Inverse resolution was first introduced by Stephen Muggleton and Wray Buntine in 1988 for use in the inductive logic programming system Cigol.[6] By 1993, this had spawned a surge of research into inverse resolution operators and their properties.[23]

The ILP systems Progol,[11] Hail[24] and Imparo[25] find a hypothesis $H$ using the principle of inverse entailment[11] for theories $B$, $E$, $H$: $B \land H \models E \iff B \land \neg E \models \neg H$. First they construct an intermediate theory $F$, called a bridge theory, satisfying the conditions $B \land \neg E \models F$ and $F \models \neg H$. Then, as $H \models \neg F$, they generalize the negation of the bridge theory $F$ with anti-entailment.[26] However, the operation of anti-entailment is computationally more expensive, since it is highly nondeterministic. Therefore, an alternative hypothesis search can be conducted using the inverse subsumption (anti-subsumption) operation instead, which is less nondeterministic than anti-entailment.

Questions arise about the completeness of the hypothesis search procedure of a specific inductive logic programming system. For example, the Progol hypothesis search procedure based on the inverse entailment inference rule is not complete, as shown by Yamamoto's example.[27] On the other hand, Imparo is complete both by its anti-entailment procedure[28] and by its extended inverse subsumption[29] procedure.

Rather than explicitly searching the hypothesis graph, meta-interpretative or meta-level systems encode the inductive logic programming problem as a meta-level logic program which is then solved to obtain an optimal hypothesis. Formalisms used to express the problem specification include Prolog and answer set programming, with existing Prolog systems and answer set solvers used for solving the constraints.[30] An example of a Prolog-based system is Metagol, which is based on a meta-interpreter in Prolog, while ASPAL and ILASP are based on an encoding of the inductive logic programming problem in answer set programming.[30]

Evolutionary algorithms in ILP use a population-based approach to evolve hypotheses, refining them through selection, crossover, and mutation. Methods like EvoLearner have been shown to outperform traditional approaches on structured machine learning benchmarks.[31]

Probabilistic inductive logic programming adapts the setting of inductive logic programming to learning probabilistic logic programs. It can be considered a form of statistical relational learning within the formalism of probabilistic logic programming.[34][35]

The goal of probabilistic inductive logic programming is to find a probabilistic logic program $H$ such that the probability of the positive examples according to $H \cup B$ is maximized and the probability of the negative examples is minimized.[35] This problem has two variants: parameter learning and structure learning.
In the former, one is given the structure (the clauses) of $H$, and the goal is to infer the probability annotations of the given clauses, while in the latter the goal is to infer both the structure and the probability parameters of $H$. Just as in classical inductive logic programming, the examples can be given either as ground facts or as (partial) interpretations.[35]

Parameter learning for languages following the distribution semantics has been performed using an expectation-maximisation algorithm or by gradient descent. An expectation-maximisation algorithm consists of a cycle in which the steps of expectation and maximisation are repeatedly performed. In the expectation step, the distribution of the hidden variables is computed according to the current values of the probability parameters, while in the maximisation step, the new values of the parameters are computed. Gradient descent methods compute the gradient of the target function and iteratively modify the parameters by moving in the direction of the gradient.[35]

Structure learning was pioneered by Daphne Koller and Avi Pfeffer in 1997,[36] where the authors learn the structure of first-order rules with associated probabilistic uncertainty parameters. Their approach involves generating the underlying graphical model in a preliminary step and then applying expectation-maximisation.[35]

In 2008, De Raedt et al. presented an algorithm for performing theory compression on ProbLog programs, where theory compression refers to a process of removing as many clauses as possible from the theory in order to maximize the probability of a given set of positive and negative examples. No new clause can be added to the theory.[35][37]

In the same year, Meert et al. introduced a method for learning the parameters and structure of ground probabilistic logic programs by considering the Bayesian networks equivalent to them and applying techniques for learning Bayesian networks.[38][35]

ProbFOIL, introduced by De Raedt and Ingo Thon in 2010, combined the inductive logic programming system FOIL with ProbLog. Logical rules are learned from probabilistic data, in the sense that both the examples themselves and their classifications can be probabilistic. The set of rules has to allow one to predict the probability of the examples from their description. In this setting, the parameters (the probability values) are fixed and the structure has to be learned.[39][35]

In 2011, Elena Bellodi and Fabrizio Riguzzi introduced SLIPCASE, which performs a beam search among probabilistic logic programs by iteratively refining probabilistic theories and optimizing the parameters of each theory using expectation-maximisation.[40] Its extension SLIPCOVER, proposed in 2014, uses bottom clauses generated as in Progol to guide the refinement process, thus reducing the number of revisions and exploring the search space more effectively. Moreover, SLIPCOVER separates the search for promising clauses from the search for theories: the space of clauses is explored with a beam search, while the space of theories is searched greedily.[41][35]

This article incorporates text from a free content work, licensed under CC-BY 4.0 (license statement/permission). Text taken from A History of Probabilistic Inductive Logic Programming, Fabrizio Riguzzi, Elena Bellodi and Riccardo Zese, Frontiers Media.
https://en.wikipedia.org/wiki/Inductive_logic_programming
In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors.[1]

Specifically, the modal matrix $M$ for the matrix $A$ is the $n \times n$ matrix formed with the eigenvectors of $A$ as columns in $M$. It is utilized in the similarity transformation

$$D = M^{-1}AM,$$

where $D$ is an $n \times n$ diagonal matrix with the eigenvalues of $A$ on the main diagonal of $D$ and zeros elsewhere. The matrix $D$ is called the spectral matrix for $A$. The eigenvalues must appear left to right, top to bottom, in the same order as their corresponding eigenvectors are arranged left to right in $M$.[2]

For a matrix $A$ with known eigenvalues and corresponding eigenvectors, a diagonal matrix $D$ similar to $A$, and one possible choice of an invertible matrix $M$ such that $D = M^{-1}AM$, can be written down directly from the eigendecomposition (a numerical illustration is given in the sketch below). Note that since eigenvectors themselves are not unique, and since the columns of both $M$ and $D$ may be interchanged, it follows that both $M$ and $D$ are not unique.[4]

Let $A$ be an $n \times n$ matrix. A generalized modal matrix $M$ for $A$ is an $n \times n$ matrix whose columns, considered as vectors, form a canonical basis for $A$ and appear in $M$ with the vectors of each Jordan chain kept together in adjacent columns, in order of increasing rank. One can show that

$$AM = MJ, \qquad (1)$$

where $J$ is a matrix in Jordan normal form. By premultiplying by $M^{-1}$, we obtain

$$J = M^{-1}AM. \qquad (2)$$

Note that when computing these matrices, equation (1) is the easier of the two to verify, since it does not require inverting a matrix.[6]

As an example with four Jordan chains (it is, unfortunately, a little difficult to construct an interesting example of low order[7]), consider a matrix $A$ with a single eigenvalue $\lambda_1 = 1$ of algebraic multiplicity $\mu_1 = 7$. A canonical basis for $A$ will consist of one linearly independent generalized eigenvector of rank 3 (see generalized eigenvector), two of rank 2 and four of rank 1; or, equivalently, one chain of three vectors $\{x_3, x_2, x_1\}$, one chain of two vectors $\{y_2, y_1\}$, and two chains of one vector, $\{z_1\}$ and $\{w_1\}$.

An "almost diagonal" matrix $J$ in Jordan normal form, similar to $A$, is obtained from a generalized modal matrix $M$ for $A$: the columns of $M$ are a canonical basis for $A$, and $AM = MJ$.[8] Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both $M$ and $J$ may be interchanged, it follows that both $M$ and $J$ are not unique.[9]
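A hedged numerical illustration of the modal matrix using numpy; the matrix A below is an invented example, not the one from the original article. numpy's eig returns the eigenvectors as columns, which is exactly the modal matrix M:

import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, M = np.linalg.eig(A)   # columns of M are eigenvectors of A
D = np.diag(eigvals)            # spectral matrix, same ordering as M

# Verify the similarity transformation D = M^{-1} A M, and the
# inversion-free check A M = M D (cf. equation (1) for the Jordan case).
assert np.allclose(np.linalg.inv(M) @ A @ M, D)
assert np.allclose(A @ M, M @ D)
print(D)   # diagonal matrix with eigenvalues 3 and 2 (order not guaranteed)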
https://en.wikipedia.org/wiki/Modal_matrix
A list of web service frameworks:
https://en.wikipedia.org/wiki/List_of_web_service_frameworks
Neogeography (literally "new geography") is the use of geographical techniques and tools for personal and community activities, or by a non-expert group of users.[1] Application domains of neogeography are typically not formal or analytical.[2]

From the point of view of human geography, neogeography could also be defined as the use of new information-society tools, especially the Internet, for the aims and purposes of geography as an academic discipline, in all branches of geographical thought, incorporating contributions from outside geography performed by users who are not specialists in the discipline, through the use of specific geographic ICT tools. This new definition, complementing previous ones, restores to academic geography the leading role proponents claim it should play when considering a renewal of the discipline, with the rigor and right granted by its centuries of existence, but it also includes the interesting social phenomenon of citizen participation in geographical knowledge in its dual role: as an undoubted possibility of enrichment for geography, and as a social phenomenon of geographic interest.[citation needed]

The term neogeography has been used since at least 1922. In the early 1950s in the U.S., it was a term used in the sociology of production and work. The French philosopher François Dagognet used it in the title of his 1977 book Une Epistemologie de l'espace concret: Neo-geographie. The word was first used in relation to the study of online communities in the 1990s by Kenneth Dowling, the Librarian of the City and County of San Francisco.[3] Immediate precursor terms in the industry press were "the geospatial Web" and "the geoaware Web" (both 2005); "Where 2.0" (2005); and "a dissident cartographic aesthetic" and "mapping and counter-mapping" (2006).[3] These terms arose with the concept of Web 2.0, around the increased public appeal of mapping and geospatial technologies that occurred with the release of "slippy map" tools such as Google Maps and Google Earth, and with the decreased cost of geolocated mobile devices such as GPS units. Subsequently, the use of geospatial technologies began to see increased integration with non-geographically focused applications.

The term neogeography was first defined in its contemporary sense by Randall Szott in 2006. He argued for a broad scope, to include artists, psychogeography, and more. The technically oriented aspects of the field, far more tightly defined than in Szott's definition, were outlined by Andrew Turner in his Introduction to Neogeography (O'Reilly, 2006). The contemporary use of the term, and the field in general, owes much of its inspiration to the locative media movement that sought to expand the use of location-based technologies to encompass personal expression and society.[3]

Traditional geographic information systems have historically developed tools and techniques targeted towards formal applications that require precision and accuracy. By contrast, neogeography tends to apply to the realm of approachable, colloquial applications. The two realms can overlap, as the same problems are presented to different sets of users: experts and non-experts.[citation needed]

Neogeography has also been connected[4] with the increase in user-generated geographic content, closely related to volunteered geographic information.[5] This can be an active collection of data, such as OpenStreetMap, or a passive collection of user data, such as Flickr tags for folksonomic toponyms.
Although the data are created by untrained volunteers, research indicates that users perceive volunteered geographic information as highly valuable and trustworthy.[6][7][8]

There is currently much debate about the scope and application of neogeography in the web mapping, geography, and GIS fields. Some of this discussion considers neogeography to be about the ease of use of geographic tools and interfaces, while other points focus on the domains of application. Neogeography is not limited to a specific technology and is not strictly web-based, so it is not synonymous with web mapping, though it is commonly conceived as such.

A number of geographers and geoinformatics scientists (such as Mike Goodchild[9]) have expressed strong reservations about the term "neogeography". They say that geography is an established scientific discipline; uses such as mashups and tags in Google Earth are not scientific works, but are better described as volunteered geographic information. There are also a great many artists and interdisciplinary practitioners involved in an engagement with new forms of mapping and locative art.[10] The field is thus far wider than simply web mapping.
https://en.wikipedia.org/wiki/Neogeography
In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized.[1]

The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs, where the vertices correspond to intersections and the edges correspond to road segments, each weighted by the length or distance of the segment.

The shortest path problem can be defined for graphs whether undirected, directed, or mixed. The definition for undirected graphs states that every edge can be traversed in either direction. Directed graphs require that consecutive vertices be connected by an appropriate directed edge.[2]

Two vertices are adjacent when they are both incident to a common edge. A path in an undirected graph is a sequence of vertices $P = (v_1, v_2, \ldots, v_n) \in V \times V \times \cdots \times V$ such that $v_i$ is adjacent to $v_{i+1}$ for $1 \leq i < n$. Such a path $P$ is called a path of length $n-1$ from $v_1$ to $v_n$. (The $v_i$ are variables; their numbering relates to their position in the sequence and need not relate to a canonical labeling.)

Let $E = \{e_{i,j}\}$, where $e_{i,j}$ is the edge incident to both $v_i$ and $v_j$. Given a real-valued weight function $f : E \rightarrow \mathbb{R}$ and an undirected (simple) graph $G$, the shortest path from $v$ to $v'$ is the path $P = (v_1, v_2, \ldots, v_n)$ (where $v_1 = v$ and $v_n = v'$) that over all possible $n$ minimizes the sum $\sum_{i=1}^{n-1} f(e_{i,i+1})$. When each edge in the graph has unit weight, i.e. $f : E \rightarrow \{1\}$, this is equivalent to finding the path with the fewest edges.

The problem is also sometimes called the single-pair shortest path problem, to distinguish it from variations such as the single-source, single-destination, and all-pairs shortest path problems. These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices.

Several well-known algorithms exist for solving this problem and its variants; a sketch of one of them, Dijkstra's algorithm, is given below. Additional algorithms and associated evaluations may be found in Cherkassky, Goldberg & Radzik (1996). An algorithm using topological sorting can solve the single-source shortest path problem in time Θ(E + V) in arbitrarily-weighted directed acyclic graphs.[3] A comparison table of algorithm running times is given by Schrijver (2004), with some corrections and additions; in it, L is the maximum length (or weight) among all edges, assuming integer edge weights.

Network flows[6] are a fundamental concept in graph theory and operations research, often used to model problems involving the transportation of goods, liquids, or information through a network. A network flow problem typically involves a directed graph where each edge represents a pipe, wire, or road, and each edge has a capacity, which is the maximum amount that can flow through it.
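A standard sketch of Dijkstra's algorithm for the single-source problem on a non-negatively weighted graph, matching the edge-weight formulation above; the adjacency-list representation is an illustrative choice:

import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]} adjacency lists with weights w >= 0."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # skip stale queue entries
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"a": [("b", 7), ("c", 3)], "c": [("b", 2)], "b": []}
print(dijkstra(g, "a"))   # {'a': 0, 'b': 5, 'c': 3}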
The goal is to find a feasible flow that maximizes the flow from a source node to a sink node. Shortest path problems can be used to solve certain network flow problems, particularly when dealing with single-source, single-sink networks. In these scenarios, the network flow problem can be transformed into a series of shortest path problems.[7]

The all-pairs shortest path problem finds the shortest paths between every pair of vertices v, v' in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced by Shimbel (1953), who observed that it could be solved by a linear number of matrix multiplications, taking a total time of O(V^4).

Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps. For this application, fast specialized algorithms are available.[10]

If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.

In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem, and it is usually tied to a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or the widest shortest (min-delay) path.[11]

A more lighthearted application is the game of "six degrees of separation", which tries to find the shortest path in graphs like that of movie stars appearing in the same film. Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design.[12]

A road network can be considered as a graph with positive weights. The nodes represent road junctions, and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges, it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long-distance travel (e.g. highways). This property has been formalized using the notion of highway dimension.[13] There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path a lot quicker than would be possible on general graphs. All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase, in which the source and target node are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network.

The algorithm with the fastest known query time is called hub labeling and is able to compute a shortest path on the road networks of Europe or the US in a fraction of a microsecond.[14] A number of other preprocessing-based techniques have also been used. For shortest path problems in computational geometry, see Euclidean shortest path.
The shortest multiple disconnected path[15] is a representation of the primitive path network within the framework of reptation theory. The widest path problem seeks a path in which the minimum label of any edge is as large as possible.

Other related problems may be classified into the following categories.

Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, shortest path problems which include additional constraints on the desired solution path are called constrained shortest path first problems and are harder to solve. One example is the constrained shortest path problem,[16] which attempts to minimize the total cost of the path while at the same time maintaining another metric below a given threshold. This makes the problem NP-complete (such problems are not believed to be efficiently solvable for large sets of data; see the P = NP problem). Another NP-complete example requires a specific set of vertices to be included in the path,[17] which makes the problem similar to the traveling salesman problem (TSP). The TSP is the problem of finding the shortest path that goes through every vertex exactly once and returns to the start. The problem of finding the longest path in a graph is also NP-complete.

The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph is not completely known to the mover, changes over time, or where actions (traversals) are probabilistic.[18][19]

Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network in which each edge is a computer that possibly belongs to a different person. Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission time. But the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to use a variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights.

In some cases, the main goal is not to find the shortest path, but only to detect whether the graph contains a negative cycle. Some shortest-paths algorithms can be used for this purpose.

Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. The general approach to these is to consider the two operations to be those of a semiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as the algebraic path problem.[21][22][23] Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures.[24] More recently, an even more general framework for solving these (and much less obviously related) problems has been developed under the banner of valuation algebras.[25]
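A sketch of the algebraic path idea just described: the same Bellman-Ford-style relaxation loop computes shortest paths under the (min, +) semiring and widest (maximum-bottleneck) paths under the (max, min) semiring, just by swapping the two operations. The graph and the specific relaxation scheme are illustrative choices:

def solve(edges, n, source, add, mul, zero, one):
    # zero is the semiring's "unreachable" value, one is the identity for mul
    dist = [zero] * n
    dist[source] = one
    for _ in range(n - 1):                  # n-1 relaxation passes suffice
        for u, v, w in edges:
            dist[v] = add(dist[v], mul(dist[u], w))
    return dist

edges = [(0, 1, 4), (0, 2, 2), (2, 1, 1), (1, 3, 3)]

# (min, +): classical shortest path distances from vertex 0.
print(solve(edges, 4, 0, min, lambda a, b: a + b, float("inf"), 0))
# [0, 3, 2, 6]

# (max, min): widest path; the source's "width" to itself is unbounded.
print(solve(edges, 4, 0, max, min, 0, float("inf")))
# [inf, 4, 2, 3]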
In real life, a transportation network is usually stochastic and time-dependent. The travel duration on a road segment depends on many factors, such as the amount of traffic (origin-destination matrix), road work, weather, accidents and vehicle breakdowns. A more realistic model of such a road network is a stochastic time-dependent (STD) network.[26][27]

There is no accepted definition of an optimal path under uncertainty (that is, in stochastic road networks). It is a controversial subject, despite considerable progress during the past decade. One common definition is a path with the minimum expected travel time. The main advantage of this approach is that it can make use of efficient shortest path algorithms for deterministic networks. However, the resulting optimal path may not be reliable, because this approach fails to address travel time variability.

To tackle this issue, some researchers use the distribution of travel duration instead of its expected value. They find the probability distribution of the total travel duration using different optimization methods such as dynamic programming and Dijkstra's algorithm.[28] These methods use stochastic optimization, specifically stochastic dynamic programming, to find the shortest path in networks with probabilistic arc lengths.[29] The terms travel time reliability and travel time variability are used as opposites in the transportation research literature: the higher the variability, the lower the reliability of predictions.

To account for variability, researchers have suggested two alternative definitions for an optimal path under uncertainty. The most reliable path is one that maximizes the probability of arriving on time given a travel time budget. An α-reliable path is one that minimizes the travel time budget required to arrive on time with a given probability.
https://en.wikipedia.org/wiki/Shortest_path_problem
The MD4 Message-Digest Algorithm is a cryptographic hash function developed by Ronald Rivest in 1990.[3] The digest length is 128 bits. The algorithm has influenced later designs, such as the MD5, SHA-1 and RIPEMD algorithms. The initialism "MD" stands for "Message Digest".

The security of MD4 has been severely compromised. The first full collision attack against MD4 was published in 1995, and several newer attacks have been published since then. As of 2007, an attack can generate collisions in less than two MD4 hash operations.[2] A theoretical preimage attack also exists.

A variant of MD4 is used in the ed2k URI scheme to provide a unique identifier for a file in the popular eDonkey2000 / eMule P2P networks. MD4 was also used by the rsync protocol (prior to version 3.0.0). MD4 is used to compute NTLM password-derived key digests on Microsoft Windows NT, XP, Vista, 7, 8, 10 and 11.[4]

Weaknesses in MD4 were demonstrated by Den Boer and Bosselaers in a paper published in 1991.[5] The first full-round MD4 collision attack was found by Hans Dobbertin in 1995, and took only seconds to carry out at that time.[6] In August 2004, Wang et al. found a very efficient collision attack, alongside attacks on later hash function designs in the MD4/MD5/SHA-1/RIPEMD family. This result was improved later by Sasaki et al., and generating a collision is now as cheap as verifying it (a few microseconds).[2]

In 2008, the preimage resistance of MD4 was also broken by Gaëtan Leurent, with a 2^102 attack.[7] In 2010, Guo et al. published a 2^99.7 attack.[8]

In 2011, RFC 6150 stated that RFC 1320 (MD4) is historic (obsolete).

The 128-bit (16-byte) MD4 hashes (also termed message digests) are typically represented as 32-digit hexadecimal numbers. The following demonstrates a 43-byte ASCII input and the corresponding MD4 hash:

MD4("The quick brown fox jumps over the lazy dog") = 1bee69a46ba811185c194762abaeae90

Even a small change in the message will (with overwhelming probability) result in a completely different hash, e.g. changing d to c:

MD4("The quick brown fox jumps over the lazy cog") = b86e130ce7028da59e672d56ad0113df

The hash of the zero-length string is:

MD4("") = 31d6cfe0d16ae931b73c59d7e0c089c0

Further test vectors are defined in RFC 1320 (The MD4 Message-Digest Algorithm). In the published collision example, two hex digits of the colliding messages k1 and k2 define one byte of the input string, whose length is 64 bytes.
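A hedged check of two of the test vectors quoted above. Python's hashlib only exposes MD4 when the underlying OpenSSL build still provides it (MD4 is a legacy algorithm on OpenSSL 3.x), hence the guard; the "abc" vector is from the RFC 1320 test suite:

import hashlib

def md4_hex(data: bytes) -> str:
    # hashlib.new raises ValueError if this OpenSSL build lacks MD4
    return hashlib.new("md4", data).hexdigest()

try:
    assert md4_hex(b"") == "31d6cfe0d16ae931b73c59d7e0c089c0"
    assert md4_hex(b"abc") == "a448017aaf21d8525fc10ae87aa6729d"
    print("RFC 1320 vectors verified")
except ValueError:
    print("this OpenSSL build does not provide MD4")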
https://en.wikipedia.org/wiki/MD4
In general usage, a financial plan is a comprehensive evaluation of an individual's current pay and future financial state that uses currently known variables to predict future income, asset values and withdrawal plans.[1] This often includes a budget which organizes an individual's finances, and sometimes includes a series of steps or specific goals for spending and saving in the future. This plan allocates future income to various types of expenses, such as rent or utilities, and also reserves some income for short-term and long-term savings. A financial plan is sometimes referred to as an investment plan, but in personal finance, a financial plan can focus on other specific areas such as risk management, estates, college, or retirement.

In business, "financial forecast" or "financial plan" can also refer to a projection across a time horizon, typically an annual one, of income and expenses for a company, division, or department;[2] see Budget § Corporate budget. More specifically, a financial plan can also refer to the three primary financial statements (balance sheet, income statement, and cash flow statement) created within a business plan. A financial plan can also be an estimation of cash needs and a decision on how to raise the cash, such as through borrowing or issuing additional shares in a company.[3]

Note that the financial plan may then contain prospective financial statements, which are similar to, but different from, those of a budget. Financial plans are the entire financial accounting overview of a company. Complete financial plans contain all periods and transaction types. They are a combination of the financial statements, which independently reflect only a past, present, or future state of the company. Financial plans are the collection of the historical, present, and future financial statements; for example, a (historical and present) costly expense arising from an operational issue is normally presented prior to the issuance of the prospective financial statements which propose a solution to that operational issue.

The confusion surrounding the term "financial plans" might stem from the fact that there are many types of financial statement reports. Individually, financial statements show either the past, present, or future financial results. More specifically, financial statements also reflect only the categories which are relevant. For instance, investing activities are not adequately displayed in a balance sheet. A financial plan is a combination of the individual financial statements, and it reflects all categories of transactions (operations, expenses, and investing) over time.[4]

Some period-specific financial statement examples include pro forma statements (historical period) and prospective statements (current and future period). Compilations are a type of service which involves "presenting, in the form of financial statements, information that is the representation of management".[5] There are two types of prospective financial statements: financial forecasts and financial projections, and both relate to the current/future time period. Prospective financial statements are a time-period type of financial statement which may reflect the current/future financial status of a company using three main reports/financial statements: the cash flow statement, the income statement, and the balance sheet. "Prospective financial statements are of two types: forecasts and projections.
Forecasts are based on management's expected financial position, results of operations, and cash flows."[6] Pro forma statements take previously recorded results, the historical financial data, and present a "what-if": what if a transaction had happened sooner.[7]

While the common usage of the term "financial plan" often refers to a formal and defined series of steps or goals, there is some technical confusion about what the term actually means in the industry. For example, one of the industry's leading professional organizations, the Certified Financial Planner Board of Standards, lacks any definition for the term "financial plan" in its Standards of Professional Conduct publication. This publication outlines the professional financial planner's job and explains the process of financial planning, but the term "financial plan" never appears in the publication's text.[8]

The accounting and finance industries have distinct responsibilities and roles. When the products of their work are combined, they produce a complete picture: a financial plan. A financial analyst studies the data and facts (regulations/standards), which are processed, recorded, and presented by accountants. Normally, finance personnel study the data results (what has happened or what might happen) and propose a solution to an inefficiency. Investors and financial institutions must see both the issue and the solution to make an informed decision. Accountants and financial planners are both involved with presenting issues and resolving inefficiencies, so together, the results and explanation are provided in a financial plan.

Textbooks used in universities offering financial planning-related courses also generally do not define the term "financial plan". For example, Sid Mittra, Anandi P. Sahu, and Robert A. Crane, authors of Practicing Financial Planning for Professionals,[9] do not define what a financial plan is, but merely defer to the Certified Financial Planner Board of Standards' definition of "financial planning".

When drafting a financial plan, the company should establish the planning horizon,[10] which is the time period of the plan, whether it be on a short-term (usually 12 months) or long-term (two to five years) basis. Also, the individual projects and investment proposals of each operational unit within the company should be totaled and treated as one large project. This process is called aggregation.[11]
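A toy sketch of the aggregation step just described: the projected cash needs of individual operating units are rolled up and treated as one large project over the chosen planning horizon. All figures and unit names are invented:

units = {
    "manufacturing": [120, 135, 150],   # projected outflows per year
    "sales":         [40, 45, 50],
    "r_and_d":       [60, 60, 70],
}

horizon = 3   # planning horizon in years
aggregate = [sum(unit[year] for unit in units.values())
             for year in range(horizon)]
print(aggregate)   # total projected need per year: [220, 240, 270]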
https://en.wikipedia.org/wiki/Financial_planning
In probability theory and statistics, Wallenius' noncentral hypergeometric distribution (named after Kenneth Ted Wallenius) is a generalization of the hypergeometric distribution where items are sampled with bias.

This distribution can be illustrated as an urn model with bias. Assume, for example, that an urn contains m1 red balls and m2 white balls, totalling N = m1 + m2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2, and we will say that the odds ratio is ω = ω1 / ω2. Now we take n balls, one by one, in such a way that the probability of taking a particular ball at a particular draw is equal to its proportion of the total weight of all balls that lie in the urn at that moment. The number of red balls x1 that we get in this experiment is a random variable with Wallenius' noncentral hypergeometric distribution.

The matter is complicated by the fact that there is more than one noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution is obtained if balls are sampled one by one in such a way that there is competition between the balls. Fisher's noncentral hypergeometric distribution is obtained if the balls are sampled simultaneously or independently of each other. Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution, so it is important to be specific about which distribution is meant when using this name. The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1. The difference between these two probability distributions is subtle; see the Wikipedia entry on noncentral hypergeometric distributions for a more detailed explanation.

Wallenius' distribution is particularly complicated because each ball has a probability of being taken that depends not only on its weight, but also on the total weight of its competitors, and the weight of the competing balls depends on the outcomes of all preceding draws. This recursive dependency gives rise to a difference equation whose solution is given in open form by an integral in the expression for the probability mass function. Closed-form expressions for the probability mass function exist (Lyons, 1980), but they are not very useful for practical calculations because of extreme numerical instability, except in degenerate cases.

Several other calculation methods are used, including recursion, Taylor expansion and numerical integration (Fog, 2007, 2008). The most reliable calculation method is recursive calculation of f(x, n) from f(x, n−1) and f(x−1, n−1), using the recursion formula given below under properties. The probabilities of all (x, n) combinations on all possible trajectories leading to the desired point are calculated, starting with f(0, 0) = 1. The total number of probabilities to calculate is n(x+1) − x². Other calculation methods must be used when n and x are so big that this method is too inefficient. The probability that all balls have the same color is easier to calculate; see the formula below under the multivariate distribution.

No exact formula for the mean is known (short of complete enumeration of all probabilities). The equation given above is reasonably accurate, and it can be solved for μ by Newton-Raphson iteration. The same equation can be used for estimating the odds from an experimentally obtained value of the mean.

Wallenius' distribution has fewer symmetry relations than Fisher's noncentral hypergeometric distribution has.
The only symmetry relates to the swapping of colors. Unlike Fisher's distribution, Wallenius' distribution has no symmetry relating to the number of balls not taken.

A recursion formula expresses each probability f(x, n) in terms of f(x−1, n−1) and f(x, n−1) (a sketch is given below); another recursion formula is also known. The probability is bounded above and below by expressions involving the falling factorial $a^{\underline{b}} = a(a-1)\ldots(a-b+1)$.

The distribution can be expanded to any number of colors c of balls in the urn. The multivariate distribution is used when there are more than two colors. The probability mass function can be calculated by various Taylor expansion methods or by numerical integration (Fog, 2008). The probability that all balls have the same color, j, has a simple closed-form expression, valid for $x_j = n \leq m_j$, involving the falling factorial.

A reasonably good approximation to the mean can be calculated using the equation given above; it can be solved by introducing an auxiliary variable θ and solving for θ by Newton-Raphson iteration. The equation for the mean is also useful for estimating the odds from experimentally obtained values of the mean.

No good way of calculating the variance is known. The best known method is to approximate the multivariate Wallenius distribution by a multivariate Fisher's noncentral hypergeometric distribution with the same mean, and insert the mean as calculated above into the approximate formula for the variance of the latter distribution.

The order of the colors is arbitrary, so any colors can be swapped. The weights can be arbitrarily scaled. Colors with zero number (mi = 0) or zero weight (ωi = 0) can be omitted from the equations. Colors with the same weight can be joined; the probability of the individual counts is then obtained by multiplying the joined Wallenius probability by the (univariate, central) hypergeometric probability hypg(x; n, m, N) of splitting the combined count.

The balls that are not taken in the urn experiment have a distribution that is different from Wallenius' noncentral hypergeometric distribution, due to a lack of symmetry. The distribution of the balls not taken can be called the complementary Wallenius' noncentral hypergeometric distribution. Probabilities in the complementary distribution are calculated from Wallenius' distribution by replacing n with N−n, xi with mi−xi, and ωi with 1/ωi.
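A sketch of the recursive scheme described above: f(x, n) is built from f(x−1, n−1) and f(x, n−1), starting from f(0, 0) = 1. Red balls are given weight omega and white balls weight 1, so omega plays the role of the odds ratio; the per-draw probabilities follow directly from the urn description:

from functools import lru_cache

def wallenius_pmf(x, n, m1, m2, omega):
    @lru_cache(maxsize=None)
    def f(x, n):
        if n == 0:
            return 1.0 if x == 0 else 0.0
        if x < 0 or x > min(n, m1) or (n - x) > m2:
            return 0.0
        total = 0.0
        # previous state (x-1, n-1): the n-th draw took a red ball
        wr = omega * (m1 - (x - 1))
        ww = m2 - ((n - 1) - (x - 1))
        if wr > 0:
            total += f(x - 1, n - 1) * wr / (wr + ww)
        # previous state (x, n-1): the n-th draw took a white ball
        wr = omega * (m1 - x)
        ww = m2 - ((n - 1) - x)
        if ww > 0:
            total += f(x, n - 1) * ww / (wr + ww)
        return total
    return f(x, n)

# With omega = 1 this reduces to the central hypergeometric distribution:
print(wallenius_pmf(1, 2, 2, 2, 1.0))   # 0.666..., i.e. C(2,1)C(2,1)/C(4,2)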
https://en.wikipedia.org/wiki/Wallenius%27_noncentral_hypergeometric_distribution
Bcachefs is a copy-on-write (COW) file system for Linux-based operating systems.[3] Its primary developer, Kent Overstreet, first announced it in 2015, and it was added to the Linux kernel beginning with 6.7.[1][2] It is intended to compete with the modern features of ZFS or Btrfs, and the speed and performance of ext4 or XFS.

Features include caching,[4] full file-system encryption using the ChaCha20 and Poly1305 algorithms,[5] native compression[4] via LZ4, gzip[6] and Zstandard,[7] snapshots,[4] and CRC-32C and 64-bit checksumming.[3] It can span block devices, including in RAID configurations.[5]

Earlier versions of Bcachefs provided all the functionality of Bcache, a block-layer cache system for Linux, with which Bcachefs shares about 80% of its code.[8] As of December 2021, the block-layer cache functionality has been removed.[7]

On a data-structure level, Bcachefs uses B-trees like many other modern file systems, but with an unusually large node size, defaulting to 256 KiB. These nodes are internally log-structured, forming a hybrid data structure that reduces the need to rewrite nodes on update.[9] Snapshots are implemented not by cloning a COW tree, but by adding a version number to filesystem objects.[10] The COW feature and the bucket allocator enable a RAID implementation which is claimed to suffer from neither the write hole nor IO fragmentation.[7]

Bcachefs describes itself as "working and stable, with a small community of users".[11] When discussing Linux 6.9-rc3 on April 7, 2024, Linus Torvalds touched on the stability of Bcachefs, saying "if you thought bcachefs was stable already, I have a bridge to sell you",[12] and in August 2024 that "nobody sane uses bcachefs and expects it to be stable".[13]

In August 2024, the Debian maintainer of bcachefs-tools, a package providing "userspace tools and docs", orphaned the package, questioning its long-term supportability.[14] The maintainer further commented in a blog post: "I'd advise that if you consider using bcachefs for any kind of production use in the near future, you first consider how supportable it is long-term, and whether there's really anyone at all that is succeeding in providing stable support for it."[15]

Primary development has been by Kent Overstreet, the developer of Bcache, which he describes as a "prototype" for the ideas that became Bcachefs. Overstreet intends Bcachefs to replace Bcache.[8] Overstreet has stated that development of Bcachefs began as Bcache's developers realized that its codebase had "been evolving ... into a full blown, general-purpose POSIX filesystem", and that "there was a really clean and elegant design" within it if they took it in that direction. Some time after Bcache was merged into the mainline Linux kernel in 2013, Overstreet left his job at Google to work full-time on Bcachefs.[3]

After a few years' unfunded development, Overstreet announced Bcachefs in 2015, at which point he called the code "more or less feature complete" and called for testers and contributors.
He intended it to be an advanced file system with modern features[16] like those of ZFS or Btrfs, with the speed and performance of file systems such as ext4 and XFS.[3] As of 2017, Overstreet was receiving financial support for the development of Bcachefs via Patreon.[5] As of mid-2018, the on-disk format had settled.[8] Patches had been submitted for review to have Bcachefs included in the mainline Linux kernel, but had not yet been accepted.[4] By mid-2019, the desired features of Bcachefs were complete, and the associated patches were submitted to LKML for peer review.[17][18] In October 2023, Bcachefs was merged into the Linux 6.7 kernel,[19] which was released in January 2024.[2] In November 2024, Kent Overstreet was restricted by Linux's Code of Conduct Committee from sending in contributions during the Linux 6.13 kernel development cycle due to "written abuse of another community member" and taking "insufficient action to restore the community's faith in having otherwise productive technical discussions without the fear of personal attacks".[20][21] Patches were later accepted without issue during the Linux 6.14 kernel development cycle.[22]
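The snapshot scheme described earlier (version numbers on filesystem objects rather than cloned COW trees) can be illustrated with a deliberately tiny toy model. Everything here, from the class name to the linear version counter, is invented for illustration; bcachefs's real design keys items by a tree of snapshot IDs and involves many more details:

```python
class ToyVersionedStore:
    """Toy sketch: snapshots as version numbers on keys, not tree clones."""
    def __init__(self):
        self.cells = {}        # (key, version) -> value
        self.current = 1       # version that new writes are tagged with

    def write(self, key, value):
        self.cells[(key, self.current)] = value

    def snapshot(self):
        snap = self.current    # freeze the current version number...
        self.current += 1      # ...new writes get a fresh one; nothing is copied
        return snap

    def read(self, key, as_of=None):
        v = self.current if as_of is None else as_of
        versions = [ver for (k, ver) in self.cells if k == key and ver <= v]
        return self.cells[(key, max(versions))] if versions else None

s = ToyVersionedStore()
s.write("a", 1)
snap = s.snapshot()            # O(1): no data is duplicated
s.write("a", 2)
print(s.read("a"), s.read("a", as_of=snap))   # 2 1
```

The point of the sketch is that taking a snapshot costs nothing up front: only subsequent writes create new versioned cells, while older versions remain readable through the frozen version number.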
https://en.wikipedia.org/wiki/Bcachefs
The Texas sharpshooter fallacy is an informal fallacy which is committed when differences in data are ignored, but similarities are overemphasized. From this reasoning, a false conclusion is inferred.[1] This fallacy is the philosophical or rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which is the tendency in human cognition to interpret patterns where none actually exist. The name comes from a metaphor about a person from Texas who fires a gun at the side of a barn, then paints a shooting target centered on the tightest cluster of shots and claims to be a sharpshooter.[2][3][4] The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal but only focuses on a small subset of that data. Some factor other than the one attributed may give all the elements in that subset some kind of common property (or pair of common properties, when arguing for correlation). If the person attempts to account for the likelihood of finding some subset in the large data with some common property by a factor other than its actual cause, then that person is likely committing a Texas sharpshooter fallacy. The fallacy is characterized by a lack of a specific hypothesis prior to the gathering of data, or the formulation of a hypothesis only after data have already been gathered and examined.[5] Thus, it typically does not apply if one had an ex ante, or prior, expectation of the particular relationship in question before examining the data. For example, one might, prior to examining the information, have in mind a specific physical mechanism implying the particular relationship. One could then use the information to give support or cast doubt on the presence of that mechanism. Alternatively, if a second set of additional information can be generated using the same process as the original information, one can use the first (original) set of information to construct a hypothesis, and then test the hypothesis on the second (new) set of information. (See hypothesis testing.) However, after constructing a hypothesis on a set of data, one would be committing the Texas sharpshooter fallacy if they then tested that hypothesis on the same data (see hypotheses suggested by the data). A Swedish study in 1992 tried to determine whether power lines caused some kind of poor health effects.[6] The researchers surveyed people living within 300 metres of high-voltage power lines over 25 years and looked for statistically significant increases in rates of over 800 ailments. The study found that the incidence of childhood leukemia was four times higher among those who lived closest to the power lines, which spurred calls to action by the Swedish government.[7] The problem with the conclusion, however, was that the number of potential ailments, i.e., over 800, was so large that it created a high probability that at least one ailment would have a statistically significant correlation with living distance from power lines by chance alone, a situation known as the multiple comparisons problem. Subsequent studies failed to show any association between power lines and childhood leukemia.[8] The fallacy is often found in modern-day interpretations of the quatrains of Nostradamus.
Nostradamus's quatrains are often liberally translated from the original (archaic) French, stripped of their historical context, and then applied to support the erroneous conclusion that Nostradamus predicted a given modern-day event, after the event actually occurred.[9]
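The arithmetic behind the multiple-comparisons problem in the Swedish study is worth making explicit. Assuming, purely for illustration, 800 independent tests at a 5% significance level (real ailment rates are correlated, so this is an idealization):

```python
alpha, k = 0.05, 800
p_none = (1 - alpha) ** k     # chance that not a single test is spuriously significant
print(f"P(no false positives)  = {p_none:.3g}")   # ~1.5e-18
print(f"P(at least one)        = {1 - p_none}")   # effectively 1.0
```

Even with no real effect anywhere, at least one "statistically significant" ailment is all but guaranteed; singling it out afterwards is the painted target around the bullet holes.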
https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
Btrfs(pronounced as "better F S",[9]"butter F S",[13][14]"b-tree F S",[14]or "B.T.R.F.S.") is a computer storage format that combines afile systembased on thecopy-on-write(COW) principle with alogical volume manager(distinct from Linux'sLVM), developed together. It was created by Chris Mason in 2007[15]for use inLinux, and since November 2013, the file system's on-disk format has been declared stable in the Linuxkernel.[16] Btrfs is intended to address the lack of pooling,snapshots,integrity checking,data scrubbing, and integral multi-device spanning inLinux file systems.[9]Mason, the principal Btrfs author, stated that its goal was "to let [Linux] scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable".[17] The core data structure of Btrfs‍—‌the copy-on-writeB-tree‍—‌was originally proposed byIBMresearcher Ohad Rodeh at aUSENIXconference in 2007.[18]Mason, an engineer working onReiserFSforSUSEat the time, joined Oracle later that year and began work on a new file system based on these B-trees.[19] In 2008, the principal developer of theext3andext4file systems,Theodore Ts'o, stated that although ext4 has improved features, it is not a major advance; it uses old technology and is a stop-gap. Ts'o said that Btrfs is the better direction because "it offers improvements in scalability, reliability, and ease of management".[20]Btrfs also has "a number of the same design ideas thatreiser3/4had".[21] Btrfs 1.0, with finalized on-disk format, was originally slated for a late-2008 release,[22]and was finally accepted into theLinux kernel mainlinein 2009.[23]SeveralLinux distributionsbegan offering Btrfs as an experimental choice ofroot file systemduring installation.[24][25][26] In July 2011, Btrfs automaticdefragmentationand scrubbing features were merged into version 3.0 of theLinux kernel mainline.[27]Besides Mason at Oracle, Miao Xie at Fujitsu contributed performance improvements.[28]In June 2012, Mason left Oracle forFusion-io, which he left a year later with Josef Bacik to joinFacebook. While at both companies, Mason continued his work on Btrfs.[29][19] In 2012, two Linux distributions moved Btrfs from experimental to production or supported status:Oracle Linuxin March,[30]followed bySUSE Linux Enterprisein August.[31] In 2015, Btrfs was adopted as the default filesystem forSUSE Linux Enterprise Server(SLE) 12.[32] In August 2017, Red Hat announced in the release notes forRed Hat Enterprise Linux(RHEL) 7.4 that it no longer planned to move Btrfs to a fully supported feature (it's been included as a "technology preview" since RHEL 6 beta) noting that it would remain available in the RHEL 7 release series.[33]Btrfs was removed from RHEL 8 in May 2019.[34]RHEL moved from ext4 in RHEL 6 toXFSin RHEL 7.[35] In 2020, Btrfs was selected as the default file system forFedora33 for desktop variants.[36] As of version 6.0 of the Linux kernel, Btrfs implements the following features:[37][38][39] Btrfs provides acloneoperation thatatomicallycreates a copy-on-write snapshot of afile. Such cloned files are sometimes referred to asreflinks, in light of the proposed associated Linux kernelsystem call.[56] By cloning, the file system does not create a new link pointing to an existinginode; instead, it creates a new inode that initially shares the same disk blocks with the original file. 
As a result of this shared-inode design, cloning works only within the boundaries of the same Btrfs file system, although since version 3.6 of the Linux kernel it may cross the boundaries of subvolumes under certain circumstances.[57][58] The actual data blocks are not duplicated; at the same time, due to the copy-on-write (CoW) nature of Btrfs, modifications to any of the cloned files are not visible in the original file, and vice versa.[59] Cloning should not be confused with hard links, which are directory entries that associate multiple file names with a single file. While hard links can be taken as different names for the same file, cloning in Btrfs provides independent files that initially share all their disk blocks.[59][60] Support for this Btrfs feature was added in version 7.5 of the GNU coreutils, via the --reflink option to the cp command.[61][62] In addition to data cloning (FICLONE), Btrfs also supports out-of-band deduplication via FIDEDUPERANGE. This functionality allows two files with (even partially) identical data to share storage.[63][10] A Btrfs subvolume can be thought of as a separate POSIX file namespace, mountable separately by passing the subvol or subvolid options to the mount(8) utility. It can also be accessed by mounting the top-level subvolume, in which case subvolumes are visible and accessible as its subdirectories.[64] Subvolumes can be created at any place within the file system hierarchy, and they can also be nested. Nested subvolumes appear as subdirectories within their parent subvolumes, similarly to the way a top-level subvolume presents its subvolumes as subdirectories. Deleting a subvolume is not possible until all subvolumes below it in the nesting hierarchy are deleted; as a result, top-level subvolumes cannot be deleted.[65] Any Btrfs file system always has a default subvolume, which is initially set to be the top-level subvolume, and is mounted by default if no subvolume selection option is passed to mount. The default subvolume can be changed as required.[65] A Btrfs snapshot is a subvolume that shares its data (and metadata) with some other subvolume, using Btrfs' copy-on-write capabilities, and modifications to a snapshot are not visible in the original subvolume. Once a writable snapshot is made, it can be treated as an alternate version of the original file system. For example, to roll back to a snapshot, a modified original subvolume needs to be unmounted and the snapshot needs to be mounted in its place. At that point, the original subvolume may also be deleted.[64] The copy-on-write (CoW) nature of Btrfs means that snapshots are quickly created, while initially consuming very little disk space. Since a snapshot is a subvolume, creating nested snapshots is also possible. Taking snapshots of a subvolume is not a recursive process; thus, if a snapshot of a subvolume is created, every subvolume or snapshot that the subvolume already contains is mapped to an empty directory of the same name inside the snapshot.[64][65] Taking snapshots of a directory is not possible, as only subvolumes can have snapshots. However, there is a workaround that involves reflinks spread across subvolumes: a new subvolume is created, containing cross-subvolume reflinks to the content of the targeted directory. Having that available, a snapshot of this new volume can be created.[57] A subvolume in Btrfs is quite different from a traditional Logical Volume Manager (LVM) logical volume.
With LVM, a logical volume is a separate block device, while a Btrfs subvolume is not, and it cannot be treated or used that way.[64] Making dd or LVM snapshots of Btrfs leads to data loss if either the original or the copy is mounted while both are on the same computer.[66] Given any pair of subvolumes (or snapshots), Btrfs can generate a binary diff between them (by using the btrfs send command) that can be replayed later (by using btrfs receive), possibly on a different Btrfs file system. The send–receive feature effectively creates (and applies) the set of data modifications required for converting one subvolume into another.[50][67] The send/receive feature can be used with regularly scheduled snapshots for implementing a simple form of file system replication, or for the purpose of performing incremental backups.[50][67] A quota group (or qgroup) imposes an upper limit on the space a subvolume or snapshot may consume. A new snapshot initially consumes no quota because its data is shared with its parent, but thereafter incurs a charge for new files and copy-on-write operations on existing files. When quotas are active, a quota group is automatically created with each new subvolume or snapshot. These initial quota groups are building blocks which can be grouped (with the btrfs qgroup command) into hierarchies to implement quota pools.[52] Quota groups only apply to subvolumes and snapshots; enforcing quotas on individual subdirectories, users, or user groups is not possible. However, workarounds are possible by using different subvolumes for all users or user groups that require a quota to be enforced. As a result of having very little metadata anchored in fixed locations, Btrfs can warp to fit unusual spatial layouts of the backend storage devices. The btrfs-convert tool exploits this ability to do an in-place conversion of an ext2/3/4 or ReiserFS file system, by nesting the equivalent Btrfs metadata in its unallocated space — while preserving an unmodified copy of the original file system.[68] The conversion involves creating a copy of the whole ext2/3/4 metadata, while the Btrfs files simply point to the same blocks used by the ext2/3/4 files. This makes the bulk of the blocks shared between the two filesystems before the conversion becomes permanent. Thanks to the copy-on-write nature of Btrfs, the original versions of the file data blocks are preserved during all file modifications. Until the conversion becomes permanent, only the blocks that were marked as free in ext2/3/4 are used to hold new Btrfs modifications, meaning that the conversion can be undone at any time (although doing so will erase any changes made after the conversion to Btrfs).[68] All converted files are available and writable in the default subvolume of the Btrfs. A sparse file holding all of the references to the original ext2/3/4 filesystem is created in a separate subvolume, which is mountable on its own as a read-only disk image, allowing both original and converted file systems to be accessed at the same time. Deleting this sparse file frees up the space and makes the conversion permanent.[68] In 4.x versions of the mainline Linux kernel, the in-place ext3/4 conversion was considered untested and rarely used.[68] However, the feature was rewritten from scratch in 2016 for btrfs-progs 4.6,[48] and has been considered stable since then.
In-place conversion from ReiserFS was introduced in September 2017 with kernel 4.13.[69] When creating a new Btrfs, an existing Btrfs can be used as a read-only "seed" file system.[70] The new file system will then act as a copy-on-write overlay on the seed, as a form of union mounting. The seed can later be detached from the Btrfs, at which point the rebalancer will simply copy over any seed data still referenced by the new file system before detaching. Mason has suggested this may be useful for a Live CD installer, which might boot from a read-only Btrfs seed on an optical disc, rebalance itself to the target partition on the install disk in the background while the user continues to work, then eject the disc to complete the installation without rebooting.[71] In his 2009 interview, Mason stated that support for encryption was planned for Btrfs.[9] In the meantime, a workaround for combining encryption with Btrfs is to use a full-disk encryption mechanism such as dm-crypt/LUKS on the underlying devices and to create the Btrfs filesystem on top of that layer. As of 2020, the developers were working to add a keyed hash like HMAC (SHA-256).[72] Unix systems traditionally rely on "fsck" programs to check and repair filesystems. This functionality is implemented via the btrfs check program. Since version 4.0 this functionality is deemed relatively stable. However, as of December 2022, the Btrfs documentation suggests that its --repair option be used only if you have been advised by "a developer or an experienced user".[73] As of August 2022, the SLE documentation recommends using a Live CD, performing a backup and only using the repair option as a last resort.[74] There is another tool, named btrfs-restore, that can be used to recover files from an unmountable filesystem, without modifying the broken filesystem itself (i.e., non-destructively).[75][76] In normal use, Btrfs is mostly self-healing and can recover from broken root trees at mount time, thanks to making periodic data flushes to permanent storage, by default every 30 seconds. Thus, isolated errors will cause a maximum of 30 seconds of filesystem changes to be lost at the next mount.[77] This period can be changed by specifying a desired value (in seconds) with the commit mount option.[78][79] Ohad Rodeh's original proposal at USENIX 2007 noted that B+ trees, which are widely used as on-disk data structures for databases, could not efficiently allow copy-on-write-based snapshots because their leaf nodes were linked together: if a leaf was copied on write, its siblings and parents would have to be as well, as would their siblings and parents, and so on until the entire tree was copied. He suggested instead a modified B-tree (which has no leaf linkage), with a refcount associated with each tree node but stored in an ad hoc free map structure, and certain relaxations of the tree's balancing algorithms to make them copy-on-write friendly. The result would be a data structure suitable for a high-performance object store that could perform copy-on-write snapshots, while maintaining good concurrency.[18] At Oracle later that year, Mason began work on a snapshot-capable file system that would use this data structure almost exclusively — not just for metadata and file data, but also recursively to track space allocation of the trees themselves.
This allowed all traversal and modifications to be funneled through a single code path, against which features such as copy-on-write, checksumming and mirroring needed to be implemented only once to benefit the entire file system.[80] Btrfs is structured as several layers of such trees, all using the same B-tree implementation. The trees store generic items sorted by a 136-bit key. The most significant 64 bits of the key are a unique object id. The middle eight bits are an item type field: its use is hardwired into code as an item filter in tree lookups. Objects can have multiple items of multiple types. The remaining (least significant) 64 bits are used in type-specific ways. Therefore, items for the same object end up adjacent to each other in the tree, grouped by type. By choosing certain key values, objects can further put items of the same type in a particular order.[80][4] Interior tree nodes are simply flat lists of key–pointer pairs, where the pointer is the logical block number of a child node. Leaf nodes contain item keys packed into the front of the node and item data packed into the end, with the two growing toward each other as the leaf fills up.[80] Within each directory, directory entries appear as directory items, whose least significant key bits are a CRC32C hash of their filename. Their data is a location key, or the key of the inode item it points to. Directory items together can thus act as an index for path-to-inode lookups, but are not used for iteration because they are sorted by their hash, effectively randomly permuting them. This means user applications iterating over and opening files in a large directory would generate many more disk seeks between non-adjacent files — a notable performance drain in other file systems with hash-ordered directories such as ReiserFS,[81] ext3 (with Htree indexes enabled[82]) and ext4, all of which have TEA-hashed filenames. To avoid this, each directory entry has a directory index item, whose key value is set to a per-directory counter that increments with each new directory entry. Iteration over these index items thus returns entries in roughly the same order as stored on disk. Files with hard links in multiple directories have multiple reference items, one for each parent directory. Files with multiple hard links in the same directory pack all of the links' filenames into the same reference item. This was a design flaw that limited the number of same-directory hard links to however many could fit in a single tree block. (On the default block size of 4 KiB, with an average filename length of 8 bytes and a per-filename header of 4 bytes, this would be less than 350.) Applications which made heavy use of multiple same-directory hard links, such as git, GNUS, GMame and BackupPC, were observed to fail at this limit.[83] The limit was eventually removed[84] (and as of October 2012 had been merged,[85] pending release in Linux 3.7) by introducing spillover extended reference items to hold hard-link filenames which do not otherwise fit. File data is kept outside the tree in extents, which are contiguous runs of disk data blocks. Extent blocks default to 4 KiB in size, do not have headers and contain only (possibly compressed) file data. In compressed extents, individual blocks are not compressed separately; rather, the compression stream spans the entire extent. Files have extent data items to track the extents which hold their contents. The item's key value is the starting byte offset of the extent.
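The 136-bit key layout described above can be mimicked with a toy packing function. The type codes below are arbitrary example values, and real Btrfs stores keys in a fixed on-disk struct rather than as Python big integers; the sketch only shows why sorting such keys groups items by object, then by type:

```python
def make_key(objectid: int, item_type: int, offset: int) -> int:
    """Pack (objectid, type, offset) into one 136-bit integer; sorting these
    integers orders items by object id first, then type, then offset."""
    assert 0 <= objectid < 2**64 and 0 <= item_type < 2**8 and 0 <= offset < 2**64
    return (objectid << 72) | (item_type << 64) | offset

items = [
    make_key(258, 0x01, 0),   # an item of object 258
    make_key(257, 0x60, 1),   # a second item type of object 257
    make_key(257, 0x01, 0),   # an item of object 257 (type codes made up)
]
print([hex(k) for k in sorted(items)])  # object 257's items sort adjacently, by type
```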
Keying extents by starting byte offset makes for efficient seeks in large files with many extents, because the correct extent for any given file offset can be computed with just one tree lookup. Snapshots and cloned files share extents. When a small part of a large such extent is overwritten, the resulting copy-on-write may create three new extents: a small one containing the overwritten data, and two large ones with unmodified data on either side of the overwrite. To avoid having to re-write unmodified data, the copy-on-write may instead create bookend extents, or extents which are simply slices of existing extents. Extent data items allow for this by including an offset into the extent they are tracking: items for bookends are those with non-zero offsets.[4] The extent allocation tree acts as an allocation map for the file system. Unlike other trees, items in this tree do not have object ids. They represent regions of space: their key values hold the starting offsets and lengths of the regions they represent. The file system divides its allocated space into block groups, which are variable-sized allocation regions that alternate between preferring metadata extents (tree nodes) and data extents (file contents). The default ratio of data to metadata block groups is 1:2. They are intended to use concepts of the Orlov block allocator to allocate related files together and resist fragmentation by leaving free space between groups. (Ext3 block groups, however, have fixed locations computed from the size of the file system, whereas those in Btrfs are dynamic and created as needed.) Each block group is associated with a block group item. Inode items in the file system tree include a reference to their current block group.[4] Extent items contain a back-reference to the tree node or file occupying that extent. There may be multiple back-references if the extent is shared between snapshots. If there are too many back-references to fit in the item, they spill out into individual extent data reference items. Tree nodes, in turn, have back-references to their containing trees. This makes it possible to find which extents or tree nodes are in any region of space by doing a B-tree range lookup on a pair of offsets bracketing that region, then following the back-references. For relocating data, this allows an efficient upwards traversal from the relocated blocks to quickly find and fix all downwards references to those blocks, without having to scan the entire file system. This, in turn, allows the file system to efficiently shrink, migrate, and defragment its storage online. The extent allocation tree, as with all other trees in the file system, is copy-on-write. Writes to the file system may thus cause a cascade whereby changed tree nodes and file data result in new extents being allocated, causing the extent tree itself to change. To avoid creating a feedback loop, extent tree nodes which are still in memory but not yet committed to disk may be updated in place to reflect new copied-on-write extents. In theory, the extent allocation tree makes a conventional free-space bitmap unnecessary, because the extent allocation tree acts as a B-tree version of a BSP tree. In practice, however, an in-memory red–black tree of page-sized bitmaps is used to speed up allocations. These bitmaps are persisted to disk (starting in Linux 2.6.37, via the space_cache mount option[86]) as special extents that are exempt from checksumming and copy-on-write. CRC-32C checksums are computed for both data and metadata and stored as checksum items in a checksum tree.
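CRC-32C differs from the more familiar CRC-32 of zlib only in its generator polynomial. A slow but self-contained bitwise implementation of the checksum follows; the assertion uses the standard published check value for CRC-32C:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
    initial value and final XOR of 0xFFFFFFFF."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value
```

Real implementations use table-driven or hardware-accelerated variants (the SSE4.2 crc32 instruction computes exactly this polynomial), but the result is bit-identical.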
There is room for 256 bits of metadata checksums and up to a full node (roughly 4 KB or more) for data checksums. Btrfs has provisions for additional checksum algorithms to be added in future versions of the file system.[37][87] There is one checksum item per contiguous run of allocated blocks, with per-block checksums packed end-to-end into the item data. If there are more checksums than can fit, they spill into another checksum item in a new leaf. If the file system detects a checksum mismatch while reading a block, it first tries to obtain (or create) a good copy of this block from another device, if internal mirroring or RAID techniques are in use.[88][89] Btrfs can initiate an online check of the entire file system by triggering a file system scrub job that is performed in the background. The scrub job scans the entire file system for integrity and automatically attempts to report and repair any bad blocks it finds along the way.[88][90] An fsync request commits modified data immediately to stable storage. fsync-heavy workloads (like a database or a virtual machine whose running OS fsyncs frequently) could potentially generate a great deal of redundant write I/O by forcing the file system to repeatedly copy-on-write and flush frequently modified parts of trees to storage. To avoid this, a temporary per-subvolume log tree is created to journal fsync-triggered copies on write. Log trees are self-contained, tracking their own extents and keeping their own checksum items. Their items are replayed and deleted at the next full tree commit or (if there was a system crash) at the next remount. Block devices are divided into physical chunks of 1 GiB for data and 256 MiB for metadata.[91] Physical chunks across multiple devices can be mirrored or striped together into a single logical chunk. These logical chunks are combined into a single logical address space that the rest of the filesystem uses. The chunk tree tracks this by storing each device therein as a device item and logical chunks as chunk map items, which provide a forward mapping from logical to physical addresses by storing their offsets in the least significant 64 bits of their key. Chunk map items can be one of several different types, reflecting the chosen mirroring or striping layout; in each case, N is the number of block devices still having free space when the chunk is allocated. If N is not large enough for the chosen mirroring/mapping, then the filesystem is effectively out of space. Defragmentation, shrinking, and rebalancing operations require extents to be relocated. However, doing a simple copy-on-write of the relocating extent will break sharing between snapshots and consume disk space. To preserve sharing, an update-and-swap algorithm is used, with a special relocation tree serving as scratch space for affected metadata. The extent to be relocated is first copied to its destination. Then, by following backreferences upward through the affected subvolume's file system tree, metadata pointing to the old extent is progressively updated to point at the new one; any newly updated items are stored in the relocation tree. Once the update is complete, items in the relocation tree are swapped with their counterparts in the affected subvolume, and the relocation tree is discarded.[93] All the file system's trees — including the chunk tree itself — are stored in chunks, creating a potential bootstrapping problem when mounting the file system.
To bootstrap into a mount, a list of physical addresses of chunks belonging to the chunk and root trees is stored in the superblock.[94] Superblock mirrors are kept at fixed locations:[95] 64 KiB into every block device, with additional copies at 64 MiB, 256 GiB and 1 PiB. When a superblock mirror is updated, its generation number is incremented. At mount time, the copy with the highest generation number is used. All superblock mirrors are updated in tandem, except in SSD mode, which alternates updates among mirrors to provide some wear levelling.
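The fixed mirror offsets and the generation rule lend themselves to a short schematic. Parsing of the real on-disk superblock structure is omitted here, so the dictionaries below are stand-ins for decoded superblock copies:

```python
# 64 KiB, 64 MiB, 256 GiB and 1 PiB: the fixed superblock mirror locations.
SUPERBLOCK_OFFSETS = (64 * 2**10, 64 * 2**20, 256 * 2**30, 2**50)

def pick_superblock(mirrors):
    """Mount-time rule: of the readable superblock copies, use the one
    with the highest generation number.  `mirrors` maps offset -> copy."""
    return max(mirrors.values(), key=lambda sb: sb["generation"])

copies = {off: {"generation": gen}
          for off, gen in zip(SUPERBLOCK_OFFSETS, (41, 42, 40, 42))}
print(pick_superblock(copies))  # {'generation': 42}
```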
https://en.wikipedia.org/wiki/Btrfs
In computing, the exit status (also exit code or exit value) of a terminated process is an integer number that is made available to its parent process (or caller). In DOS, this may be referred to as an errorlevel. When computer programs are executed, the operating system creates an abstract entity called a process in which the book-keeping for that program is maintained. In multitasking operating systems such as Unix or Linux, new processes can be created by active processes. The process that spawns another is called a parent process, while those created are child processes. Child processes run concurrently with the parent process. The technique of spawning child processes is used to delegate some work to a child process when there is no reason to stop the execution of the parent. When the child finishes executing, it exits by calling the exit system call. This system call facilitates passing the exit status code back to the parent, which can retrieve this value using the wait system call. The parent and the child can have an understanding about the meaning of the exit statuses. For example, it is common programming practice for a child process to return (exit with) zero to the parent to signify success. Apart from this return value, other information, such as whether the process exited normally or was terminated by a signal, may also be available to the parent process. The specific set of codes returned is unique to the program that sets it. Typically it indicates success or failure. The value of the code returned by the function or program may indicate a specific cause of failure. On many systems, the higher the value, the more severe the cause of the error.[1] Alternatively, each bit may indicate a different condition, with the bits then combined using the or operator to give the final value; fsck, for example, does this. Sometimes, if the codes are designed with this purpose in mind, they can be used directly as a branch index upon return to the initiating program to avoid additional tests. In AmigaOS, MorphOS and AROS, four levels are defined. Shell scripts typically execute commands and capture their exit statuses. For the shell's purposes, a command which exits with a zero exit status has succeeded. A nonzero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command is terminated by a signal whose number is N, a shell sets the variable $? to a value greater than 128. Most shells use 128 + N, while ksh93 uses 256 + N. If a command is not found, the shell should return a status of 127. If a command is found but is not executable, the return status should be 126.[2] Note that this is not the case for all shells. If a command fails because of an error during expansion or redirection, the exit status is greater than zero. The C programming language allows programs exiting or returning from the main function to signal success or failure by returning an integer, or returning the macros EXIT_SUCCESS and EXIT_FAILURE. On Unix-like systems these are equal to 0 and 1 respectively.[3] A C program may also use the exit() function, specifying the integer status or exit macro as the first parameter.
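Both the shell conventions above (127, 126, 128 + N) and ordinary integer statuses are easy to observe in practice. A small sketch using Python's subprocess module (note that Python encodes death-by-signal as a negative number, while a POSIX shell's $? would show 128 + N):

```python
import subprocess

print(subprocess.run(["sh", "-c", "exit 3"]).returncode)               # 3
print(subprocess.run(["sh", "-c", "no-such-command-xyz"]).returncode)  # 127: not found
print(subprocess.run(["sh", "-c", "kill -TERM $$"]).returncode)        # -15: killed by
# SIGTERM; a shell inspecting $? for the same child would typically show 128 + 15 = 143
```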
In C, the return value from main is passed to the exit function, which for the values zero, EXIT_SUCCESS or EXIT_FAILURE may translate it to "an implementation defined form" of successful termination or unsuccessful termination.[citation needed] Apart from zero and the macros EXIT_SUCCESS and EXIT_FAILURE, the C standard does not define the meaning of return codes. Rules for the use of return codes vary on different platforms (see the platform-specific sections). In DOS terminology, an errorlevel is an integer exit code returned by an executable program or subroutine. Errorlevels typically range from 0 to 255.[4][5][6][7] In DOS there are only 256 error codes available, but DR DOS 6.0 and higher support 16-bit error codes, at least in CONFIG.SYS.[6] With 4DOS and DR-DOS COMMAND.COM, exit codes (in batch jobs) can be set by EXIT n[6] and (in CONFIG.SYS) through ERROR=n.[6] Exit statuses are often captured by batch programs through IF ERRORLEVEL commands.[4][6] Multiuser DOS supports a reserved environment variable %ERRORLVL% which gets automatically updated on return from applications. COMMAND.COM under DR-DOS 7.02 and higher supports a similar pseudo-environment variable %ERRORLVL%, as well as %ERRORLEVEL%. In CONFIG.SYS, DR DOS 6.0 and higher support ONERROR to test the load status and return code of device drivers and the exit code of programs.[6] In Java, any method can call System.exit(int status), unless a security manager does not permit it. This will terminate the currently running Java Virtual Machine. "The argument serves as a status code; by convention, a nonzero status code indicates abnormal termination."[8] In OpenVMS, success is indicated by odd values and failure by even values. The value is a 32-bit integer with sub-fields: control bits, facility number, message number and severity. Severity values are divided between success (Success, Informational) and failure (Warning, Error, Fatal).[9] In Plan 9's C, exit status is indicated by a string passed to the exits function, and function main has type void. In Unix and other POSIX-compatible systems, the parent process can retrieve the exit status of a child process using the wait() family of system calls defined in wait.h.[10] Of these, the waitid()[11] call retrieves the full exit status, but the older wait() and waitpid()[12] calls retrieve only the least significant 8 bits of the exit status. The wait() and waitpid() interfaces set a status value of type int, packed as a bitfield with various types of child termination information. If the child terminated by exiting (as determined by the WIFEXITED() macro; the usual alternative being that it died from an uncaught signal), SUS specifies that the low-order 8 bits of the exit status can be retrieved from the status value using the WEXITSTATUS() macro.
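Python's os module wraps these POSIX macros almost one-to-one, which makes the packed status bitfield easy to inspect. A POSIX-only sketch:

```python
import os, signal

pid = os.fork()
if pid == 0:
    os._exit(42)                          # child: exit immediately with status 42
_, status = os.waitpid(pid, 0)            # status is the packed bitfield
assert os.WIFEXITED(status) and os.WEXITSTATUS(status) == 42

pid = os.fork()
if pid == 0:
    os.kill(os.getpid(), signal.SIGKILL)  # child: die from an uncaught signal
_, status = os.waitpid(pid, 0)
assert os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
print("both children reaped as expected")
```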
In the waitid() system call (added with SUSv1), the child exit status and other information are no longer in a bitfield but in a structure of type siginfo_t.[13] POSIX-compatible systems typically use a convention of zero for success and nonzero for error.[14] Some conventions have developed as to the relative meanings of various error codes; for example, GNU recommends that codes with the high bit set be reserved for serious errors.[3] BSD-derived OSes have defined an extensive set of preferred interpretations: meanings for 15 status codes, 64 through 78, are defined in sysexits.h.[15] These historically derive from sendmail and other message transfer agents, but they have since found use in many other programs.[16] The header has been deprecated and its use is discouraged.[15] The Advanced Bash-Scripting Guide has some information on the meaning of non-zero exit status codes.[17] Microsoft Windows uses 32-bit unsigned integers as exit codes,[18][19] although the command interpreter treats them as signed.[20] Exit codes are directly referenced, for example, by the command-line interpreter CMD.exe in the errorlevel terminology inherited from DOS. .NET Framework processes and Windows PowerShell refer to it as the ExitCode property of the Process object.
https://en.wikipedia.org/wiki/Exit_status
A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the performance of a binary classifier model (it can also be used for multi-class classification) at varying threshold values. ROC analysis is commonly applied in the assessment of diagnostic test performance in clinical epidemiology. The ROC curve is the plot of the true positive rate (TPR) against the false positive rate (FPR) at each threshold setting. The ROC can also be thought of as a plot of the statistical power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as estimators of these quantities). The ROC curve is thus the sensitivity as a function of the false positive rate. Given that the probability distributions for both true positive and false positive are known, the ROC curve is obtained as the cumulative distribution function (CDF, the area under the probability distribution from −∞ to the discrimination threshold) of the detection probability on the y-axis versus the CDF of the false positive probability on the x-axis. ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to the cost/benefit analysis of diagnostic decision making. The true-positive rate is also known as sensitivity or probability of detection.[1] The false-positive rate is also known as the probability of false alarm[1] and equals (1 − specificity). The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.[2] The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields, starting in 1941, which led to its name ("receiver operating characteristic").[3] It was soon introduced to psychology to account for the perceptual detection of stimuli. ROC analysis has been used in medicine, radiology, biometrics, forecasting of natural hazards,[4] meteorology,[5] model performance assessment,[6] and other areas for many decades, and is increasingly used in machine learning and data mining research. A classification model (classifier or diagnosis[7]) is a mapping of instances between certain classes/groups. Because the classifier or diagnosis result can be an arbitrary real value (continuous output), the classifier boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measure). Or it can be a discrete class label, indicating one of the classes. Consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however, if the actual value is n then it is said to be a false positive (FP). Conversely, a true negative (TN) has occurred when both the prediction outcome and the actual value are n, and a false negative (FN) is when the prediction outcome is n while the actual value is p. To get an appropriate example from a real-world problem, consider a diagnostic test that seeks to determine whether a person has a certain disease.
A false positive in this case occurs when the person tests positive but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease. Consider an experiment with P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, from which several evaluation "metrics" can be derived (see infobox). To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR defines how many correct positive results occur among all positive samples available during the test. The FPR, on the other hand, defines how many incorrect positive results occur among all negative samples available during the test. A ROC space is defined by FPR and TPR as x and y axes, respectively, which depicts relative trade-offs between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each prediction result or instance of a confusion matrix represents one point in the ROC space. The best possible prediction method would yield a point in the upper left corner, or coordinate (0,1), of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a perfect classification. A random guess would give a point along a diagonal line (the so-called line of no-discrimination) from the bottom left to the top right corners (regardless of the positive and negative base rates).[16] An intuitive example of random guessing is a decision by flipping coins. As the size of the sample increases, a random classifier's ROC point tends towards the diagonal line. In the case of a balanced coin, it will tend to the point (0.5, 0.5). The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than random); points below the line represent bad results (worse than random). Note that the output of a consistently bad predictor could simply be inverted to obtain a good predictor. Consider four prediction results from 100 positive and 100 negative instances; plots of the four results in the ROC space are given in the figure. The result of method A clearly shows the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point (0.5, 0.5), the resulting method C′ is even better than A. This mirrored method simply reverses the predictions of whatever method or test produced the C contingency table. Although the original C method has negative predictive power, simply reversing its decisions leads to a new predictive method C′ which has positive predictive power. When the C method predicts p or n, the C′ method would predict n or p, respectively. In this manner, the C′ test would perform the best. The closer a result from a contingency table is to the upper left corner, the better it predicts, but the distance from the random guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e.
the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random guess line. In binary classification, the class prediction for each instance is often made based on a continuous random variable X, which is a "score" computed for the instance (e.g. the estimated probability in logistic regression). Given a threshold parameter T, the instance is classified as "positive" if X > T, and "negative" otherwise. X follows a probability density f₁(x) if the instance actually belongs to class "positive", and f₀(x) otherwise. Therefore, the true positive rate is given by TPR(T) = ∫_T^∞ f₁(x) dx and the false positive rate is given by FPR(T) = ∫_T^∞ f₀(x) dx. The ROC curve plots TPR(T) versus FPR(T) parametrically, with T as the varying parameter. For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the threshold (green vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve. The actual shape of the curve is determined by how much overlap the two distributions have. Several studies criticize certain applications of the ROC curve and its area under the curve as measurements for assessing binary classifications when they do not capture the information relevant to the application.[18][17][19][20][21] The main criticism of the ROC curve described in these studies regards the incorporation of areas with low sensitivity and low specificity (both lower than 0.5) in the calculation of the total area under the curve (AUC),[19] as described in the plot on the right. According to the authors of these studies, that portion of the area under the curve (with low sensitivity and low specificity) regards confusion matrices where binary predictions obtain bad results, and therefore should not be included in the assessment of overall performance. Moreover, that portion of the AUC indicates a space with a high or low confusion-matrix threshold which is rarely of interest for scientists performing binary classification in any field.[19] Another criticism of the ROC and its area under the curve is that they say nothing about precision and negative predictive value.[17] A high ROC AUC, such as 0.9, might correspond to low values of precision and negative predictive value, such as 0.2 and 0.1 in the [0, 1] range. If one performed a binary classification, obtained an ROC AUC of 0.9 and decided to focus only on this metric, they might overoptimistically believe their binary test was excellent. However, if this person took a look at the values of precision and negative predictive value, they might discover their values are low.
The ROC AUC summarizes sensitivity and specificity, but does not inform regarding precision and negative predictive value.[17] Sometimes, the ROC is used to generate a summary statistic; several common versions exist. However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm. The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').[26] In other words, when given one randomly selected positive instance and one randomly selected negative instance, the AUC is the probability that the classifier will be able to tell which one is which. This can be seen as follows: the area under the curve is given by A = ∫ TPR(T) dFPR(T) = P(X₁ > X₀) (the integral boundaries are reversed, as a large threshold T has a lower value on the x-axis), where X₁ is the score for a positive instance, X₀ is the score for a negative instance, and f₀ and f₁ are the probability densities defined in the previous section. If X₀ and X₁ follow two Gaussian distributions, then A = Φ((μ₁ − μ₀)/√(σ₁² + σ₀²)). It can be shown that the AUC is closely related to the Mann–Whitney U,[27][28] which tests whether positives are ranked higher than negatives. For a predictor f, an unbiased estimator of its AUC can be expressed by the following Wilcoxon–Mann–Whitney statistic: AUC(f) = (1/(|D⁰||D¹|)) Σ_{t₀∈D⁰} Σ_{t₁∈D¹} 1[f(t₀) < f(t₁)],[29] where 1[f(t₀) < f(t₁)] denotes an indicator function which returns 1 if f(t₀) < f(t₁) and otherwise returns 0; D⁰ is the set of negative examples, and D¹ is the set of positive examples. In the context of credit scoring, a rescaled version of the AUC is often used: G₁ = 2·AUC − 1. G₁ is referred to as the Gini index or Gini coefficient,[30] but it should not be confused with the measure of statistical dispersion that is also called the Gini coefficient. G₁ is a special case of Somers' D.
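The Wilcoxon–Mann–Whitney form translates directly into code. The sketch below counts ties as 1/2 (a common convention the formula above omits) and checks the empirical estimate against the two-Gaussian expression A = Φ((μ₁ − μ₀)/√(σ₁² + σ₀²)):

```python
import numpy as np
from scipy.stats import norm

def auc_mww(neg, pos):
    """AUC as the fraction of (negative, positive) score pairs ranked correctly."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    pairs = (neg[:, None] < pos[None, :]) + 0.5 * (neg[:, None] == pos[None, :])
    return float(pairs.mean())

rng = np.random.default_rng(0)
neg = rng.normal(1.0, 1.0, 2000)      # scores of negative instances
pos = rng.normal(2.0, 1.0, 2000)      # scores of positive instances

auc = auc_mww(neg, pos)
print(auc, norm.cdf(1 / np.sqrt(2)))  # empirical vs. analytic, both ~0.76
print("Gini:", 2 * auc - 1)           # the credit-scoring rescaling G1 = 2*AUC - 1
```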
It is also common to calculate the Area Under the ROC Convex Hull (ROC AUCH = ROCH AUC), as any point on the line segment between two prediction results can be achieved by randomly using one or the other system with probabilities proportional to the relative length of the opposite component of the segment.[31] It is also possible to invert concavities – just as in the figure the worse solution can be reflected to become a better solution; concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit the data.[32] The machine learning community most often uses the ROC AUC statistic for model comparison.[33] This practice has been questioned, because AUC estimates are quite noisy and suffer from other problems.[34][35][36] Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution,[37] and AUC has been linked to a number of other performance metrics such as the Brier score.[38] Another problem with ROC AUC is that reducing the ROC curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted, and not the performance of an individual system, as well as ignoring the possibility of concavity repair, so that related alternative measures such as Informedness[citation needed] or DeltaP are recommended.[23][39] These measures are essentially equivalent to the Gini for a single prediction point with DeltaP′ = Informedness = 2·AUC − 1, whilst DeltaP = Markedness represents the dual (viz. predicting the prediction from the real class), and their geometric mean is the Matthews correlation coefficient.[citation needed] Whereas ROC AUC varies between 0 and 1 — with an uninformative classifier yielding 0.5 — the alternative measures known as Informedness,[citation needed] Certainty[23] and Gini coefficient (in the single-parameterization or single-system case)[citation needed] all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and −1 represents the "perverse" case of full informedness always giving the wrong response.[40] Bringing chance performance to 0 allows these alternative scales to be interpreted as Kappa statistics. Informedness has been shown to have desirable characteristics for machine learning versus other common definitions of Kappa such as Cohen Kappa and Fleiss Kappa.[citation needed][41] Sometimes it can be more useful to look at a specific region of the ROC curve rather than at the whole curve. It is possible to compute the partial AUC.[42] For example, one could focus on the region of the curve with a low false positive rate, which is often of prime interest for population screening tests.[43] Another common approach for classification problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.[44] The ROC area under the curve is also called the c-statistic or c statistic.[45] The Total Operating Characteristic (TOC) also characterizes diagnostic ability while revealing more information than the ROC. For each threshold, ROC reveals two ratios, TP/(TP + FN) and FP/(FP + TN). In other words, ROC reveals hits/(hits + misses) and false alarms/(false alarms + correct rejections).
On the other hand, TOC shows the total information in the contingency table for each threshold.[46] The TOC method reveals all of the information that the ROC method provides, plus additional important information that ROC does not reveal, i.e. the size of every entry in the contingency table for each threshold. TOC also provides the popular AUC of the ROC.[47] These figures are the TOC and ROC curves using the same data and thresholds. Consider the point that corresponds to a threshold of 74. The TOC curve shows the number of hits, which is 3, and hence the number of misses, which is 7. Additionally, the TOC curve shows that the number of false alarms is 4 and the number of correct rejections is 16. At any given point in the ROC curve, it is possible to glean values for the ratios false alarms/(false alarms + correct rejections) and hits/(hits + misses). For example, at threshold 74, it is evident that the x coordinate is 0.2 and the y coordinate is 0.3. However, these two values are insufficient to construct all entries of the underlying two-by-two contingency table. An alternative to the ROC curve is the detection error tradeoff (DET) graph, which plots the false negative rate (missed detections) vs. the false positive rate (false alarms) on non-linearly transformed x- and y-axes. The transformation function is the quantile function of the normal distribution, i.e., the inverse of the cumulative normal distribution. It is, in fact, the same transformation as zROC, below, except that the complement of the hit rate, the miss rate or false negative rate, is used. This alternative spends more graph area on the region of interest. Most of the ROC area is of little interest; one primarily cares about the region tight against the y-axis and the top left corner — which, because of using the miss rate instead of its complement, the hit rate, is the lower left corner in a DET plot. Furthermore, DET graphs have the useful property of linearity and a linear threshold behavior for normal distributions.[48] The DET plot is used extensively in the automatic speaker recognition community, where the name DET was first used. The analysis of ROC performance in graphs with this warping of the axes was used by psychologists in perception studies halfway through the 20th century,[citation needed] where this was dubbed "double probability paper".[49] If a standard score is applied to the ROC curve, the curve will be transformed into a straight line.[50] This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0. The normal distributions of targets (studied objects that the subjects need to recall) and lures (non-studied objects that the subjects attempt to recall) are the factor causing the zROC to be linear. The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0. In most studies, it has been found that zROC curve slopes constantly fall below 1, usually between 0.5 and 0.9.[51] Many experiments yielded a zROC slope of 0.8.
A slope of 0.8 implies that the variability of the target strength distribution is 25% larger than the variability of the lure strength distribution.[52] Another variable used is d′ (d prime) (discussed above in "Other measures"), which can easily be expressed in terms of z-values. Although d′ is a commonly used parameter, it must be recognized that it is only relevant when strictly adhering to the very strong assumptions of strength theory made above.[53] The z-score of an ROC curve is always linear, as assumed, except in special situations. The Yonelinas familiarity–recollection model is a two-dimensional account of recognition memory. Instead of the subject simply answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the original ROC curve. What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, the zROC would have a predicted slope of 1. However, when adding the recollection component, the zROC curve will be concave up, with a decreased slope. This difference in shape and slope results from an added element of variability due to some items being recollected. Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close to 1.0.[54] The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal detection theory.[55] Following the attack on Pearl Harbor in 1941, the United States military began new research to increase the prediction of correctly detected Japanese aircraft from their radar signals. For these purposes they measured the ability of a radar receiver operator to make these important distinctions, which was called the Receiver Operating Characteristic.[56] In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals.[55] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic tests.[57][58] ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to evaluate new radiology techniques.[59] In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models. ROC curves are widely used in laboratory medicine to assess the diagnostic accuracy of a test, to choose the optimal cut-off of a test and to compare the diagnostic accuracy of several tests. ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman, who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.[60] ROC curves are also used in the verification of forecasts in meteorology.[61] As mentioned, ROC curves are critical to radar operation and theory. The signals received at a receiver station, as reflected by a target, are often of very low energy in comparison to the noise floor. The ratio of signal to noise is an important metric when determining if a target will be detected. This signal-to-noise ratio is directly correlated to the receiver operating characteristics of the whole radar system, which is used to quantify the ability of a radar system. Consider the development of a radar system.
A specification for the abilities of the system may be provided in terms of the probability of detection, {\displaystyle P_{D}}, with a certain tolerance for false alarms, {\displaystyle P_{FA}}. A simplified approximation of the required signal-to-noise ratio at the receiver station can be calculated by solving the detection equation[62] for the signal-to-noise ratio {\displaystyle {\mathcal {X}}}. Here, {\displaystyle {\mathcal {X}}} is not in decibels, as is common in many radar applications. Conversion to decibels is through {\displaystyle {\mathcal {X}}_{dB}=10\log _{10}{\mathcal {X}}}. From this figure, the common entries in the radar range equation (with noise factors) may be solved, to estimate the required effective radiated power.
The extension of ROC curves for classification problems with more than two classes is cumbersome. Two common approaches for when there are multiple classes are (1) to average over all pairwise AUC values[63] and (2) to compute the volume under surface (VUS).[64][65] To average over all pairwise classes, one computes the AUC for each pair of classes, using only the examples from those two classes as if there were no other classes, and then averages these AUC values over all possible pairs. When there are c classes there will be c(c − 1)/2 possible pairs of classes.
The volume under surface approach has one plot a hypersurface rather than a curve and then measure the hypervolume under that hypersurface. Every possible decision rule that one might use for a classifier for c classes can be described in terms of its true positive rates (TPR1, ..., TPRc). It is this set of rates that defines a point, and the set of all possible decision rules yields a cloud of points that define the hypersurface. With this definition, the VUS is the probability that the classifier will be able to correctly label all c examples when it is given a set that has one randomly selected example from each class. The implementation of a classifier that knows that its input set consists of one example from each class might first compute a goodness-of-fit score for each of the c² possible pairings of an example to a class, and then employ the Hungarian algorithm to maximize the sum of the c selected scores over all c! possible ways to assign exactly one example to each class.
Given the success of ROC curves for the assessment of classification models, the extension of ROC curves for other supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression error characteristic (REC) curves[66] and the Regression ROC (RROC) curves.[67] In the latter, RROC curves become extremely similar to ROC curves for classification, with the notions of asymmetry, dominance and convex hull. Also, the area under RROC curves is proportional to the error variance of the regression model.
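The bookkeeping behind an ROC curve is mechanical, so a short sketch may help. The following Java fragment is a minimal illustration, with invented scores and labels (not the threshold-74 example above): it sweeps a decision threshold down through sorted scores, records one (false positive rate, true positive rate) point per threshold, and integrates the AUC with the trapezoidal rule.

```java
import java.util.Arrays;
import java.util.Comparator;

/** Minimal sketch: ROC points from scores and binary labels, AUC by trapezoids. */
public class RocSketch {
    // Returns {fpr, tpr} points, one per threshold placed just below each score.
    static double[][] rocPoints(double[] scores, boolean[] positive) {
        Integer[] idx = new Integer[scores.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort indices by descending score so lowering the threshold admits one item at a time.
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> scores[i]).reversed());
        int p = 0, n = 0;
        for (boolean b : positive) { if (b) p++; else n++; }
        double[][] pts = new double[scores.length + 1][2];
        pts[0] = new double[]{0.0, 0.0};                // threshold above every score
        int tp = 0, fp = 0;
        for (int k = 0; k < idx.length; k++) {
            if (positive[idx[k]]) tp++; else fp++;
            pts[k + 1] = new double[]{(double) fp / n, (double) tp / p};
        }
        return pts;                                      // ends at (1, 1)
    }

    static double auc(double[][] pts) {
        double area = 0.0;
        for (int k = 1; k < pts.length; k++)             // trapezoidal rule over FPR
            area += (pts[k][0] - pts[k - 1][0]) * (pts[k][1] + pts[k - 1][1]) / 2.0;
        return area;
    }

    public static void main(String[] args) {
        double[] scores  = {0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1};
        boolean[] labels = {true, true, false, true, false, true, false, false, true, false};
        System.out.printf("AUC = %.3f%n", auc(rocPoints(scores, labels)));
    }
}
```

Note that, exactly as the text observes, each point stores only the two ratios; the underlying counts (hits, misses, false alarms, correct rejections) are discarded, which is the information a TOC curve would retain.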
https://en.wikipedia.org/wiki/Receiver_operating_characteristic
Classificationis the activity of assigning objects to some pre-existing classes or categories. This is distinct from the task of establishing the classes themselves (for example throughcluster analysis).[1]Examples include diagnostic tests, identifying spam emails and deciding whether to give someone a driving license. As well as 'category', synonyms or near-synonyms for 'class' include 'type', 'species', 'order', 'concept', 'taxon', 'group', 'identification' and 'division'. The meaning of the word 'classification' (and its synonyms) may take on one of several related meanings. It may encompass both classification and the creation of classes, as for example in 'the task of categorizing pages in Wikipedia'; this overall activity is listed undertaxonomy. It may refer exclusively to the underlying scheme of classes (which otherwise may be called a taxonomy). Or it may refer to the label given to an object by the classifier. Classification is a part of many different kinds of activities and is studied from many different points of view includingmedicine,philosophy,[2]law,anthropology,biology,taxonomy,cognition,communications,knowledge organization,psychology,statistics,machine learning,economicsandmathematics. Methodological work aimed at improving the accuracy of a classifier is commonly divided between cases where there are exactly two classes (binary classification) and cases where there are three or more classes (multiclass classification). Unlike indecision theory, it is assumed that a classifier repeats the classification task over and over. And unlike alottery, it is assumed that each classification can be either right or wrong; in the theory of measurement, classification is understood as measurement against anominalscale. Thus it is possible to try to measure the accuracy of a classifier. Measuring the accuracy of a classifier allows a choice to be made between two alternative classifiers. This is important both when developing a classifier and in choosing which classifier to deploy. There are however many different methods for evaluating the accuracy of a classifier and no general method for determining which method should be used in which circumstances. Different fields have taken different approaches, even in binary classification. Inpattern recognition, error rate is popular. TheGini coefficientand KS statistic are widely used in the credit scoring industry.Sensitivity and specificityare widely used in epidemiology and medicine.Precision and recallare widely used in information retrieval.[3] Classifier accuracy depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems (a phenomenon that may be explained by theno-free-lunch theorem).
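The passage above names several accuracy measures used in different fields. As a minimal sketch, with invented counts, the following Java fragment computes the most common binary-classification measures from a 2×2 confusion matrix.

```java
/** Minimal sketch: binary-classification accuracy measures from a 2x2
    confusion matrix. The counts are invented for illustration. */
public class BinaryMetrics {
    public static void main(String[] args) {
        int tp = 30, fn = 10;   // actual positives: correctly and incorrectly classified
        int fp = 5,  tn = 55;   // actual negatives: incorrectly and correctly classified

        double accuracy    = (double) (tp + tn) / (tp + tn + fp + fn);
        double errorRate   = 1.0 - accuracy;               // popular in pattern recognition
        double sensitivity = (double) tp / (tp + fn);      // recall; true positive rate
        double specificity = (double) tn / (tn + fp);      // true negative rate
        double precision   = (double) tp / (tp + fp);      // positive predictive value

        System.out.printf(
            "accuracy=%.2f error=%.2f sensitivity=%.2f specificity=%.2f precision=%.2f%n",
            accuracy, errorRate, sensitivity, specificity, precision);
    }
}
```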
https://en.wikipedia.org/wiki/Classification
Folksonomy is a classification system in which end users apply public tags to online items, typically to make those items easier for themselves or others to find later. Over time, this can give rise to a classification system based on those tags and how often they are applied or searched for, in contrast to a taxonomic classification designed by the owners of the content and specified when it is published.[1][2] This practice is also known as collaborative tagging,[3][4] social classification, social indexing, and social tagging.
Folksonomy was originally "the result of personal free tagging of information [...] for one's own retrieval",[5] but online sharing and interaction expanded it into collaborative forms. Social tagging is the application of tags in an open online environment where the tags of other users are available to others. Collaborative tagging (also known as group tagging) is tagging performed by a group of users. This type of folksonomy is commonly used in cooperative and collaborative projects such as research, content repositories, and social bookmarking.
The term was coined by Thomas Vander Wal in 2004[5][6][7] as a portmanteau of folk and taxonomy. Folksonomies became popular as part of social software applications such as social bookmarking and photograph annotation that enable users to collectively classify and find information via shared tags. Some websites include tag clouds as a way to visualize tags in a folksonomy.[8]
Folksonomies can be used for K–12 education, business, and higher education. More specifically, folksonomies may be implemented for social bookmarking, teacher resource repositories, e-learning systems, collaborative learning, collaborative research, professional development and teaching. Wikipedia is a prime example of folksonomy.[9][better source needed][clarification needed]
Folksonomies are a trade-off between traditional centralized classification and no classification at all,[10] and have several advantages.[11][12][13] There are several disadvantages with the use of tags and folksonomies as well,[14] and some of the advantages can lead to problems. For example, the simplicity in tagging can result in poorly applied tags.[15] Further, while controlled vocabularies are exclusionary by nature,[16] tags are often ambiguous and overly personalized.[17] Users apply tags to documents in many different ways, and tagging systems often lack mechanisms for handling synonyms, acronyms and homonyms; they also often lack mechanisms for handling spelling variations such as misspellings, singular/plural form, conjugated and compound words. Some tagging systems do not support tags consisting of multiple words, resulting in tags like "viewfrommywindow". Sometimes users choose specialized tags or tags without meaning to others.
A folksonomy emerges when users tag content or information, such as web pages, photos, videos, podcasts, tweets, scientific papers and others. Strohmaier et al.[18] elaborate the concept: the term "tagging" refers to a "voluntary activity of users who are annotating resources with terms – so-called 'tags' – freely chosen from an unbounded and uncontrolled vocabulary". Others explain tags as unstructured textual labels[19] or keywords,[17] and note that they appear as a simple form of metadata.[20]
Folksonomies consist of three basic entities: users, tags, and resources. Users create tags to mark resources such as web pages, photos, videos, and podcasts. These tags are used to manage, categorize and summarize online content.
This collaborative tagging system also uses these tags as a way to index information, facilitate searches and navigate resources. Folksonomy also includes a set of URLs that are used to identify resources that have been referred to by users of different websites. These systems also include category schemes that have the ability to organize tags at different levels of granularity.[21] Vander Wal identifies two types of folksonomy: broad and narrow.[22]A broad folksonomy arises when multiple users can apply the same tag to an item, providing information about which tags are the most popular. A narrow folksonomy occurs when users, typically fewer in number and often including the item's creator, tag an item with tags that can each be applied only once. While both broad and narrow folksonomies enable the searchability of content by adding an associated word or phrase to an object, a broad folksonomy allows for sorting based on the popularity of each tag, as well as the tracking of emerging trends in tag usage and developing vocabularies.[22] An example of a broad folksonomy isdel.icio.us, a website where users can tag any online resource they find relevant with their own personal tags. The photo-sharing websiteFlickris an oft-cited example of a narrow folksonomy. 'Taxonomy' refers to a hierarchicalcategorizationin which relatively well-defined classes are nested under broader categories. Afolksonomyestablishes categories (each tag is a category) without stipulating or necessarily deriving a hierarchical structure of parent-child relations among different tags. (Work has been done on techniques for deriving at least loose hierarchies from clusters of tags.[23]) Supporters of folksonomies claim that they are often preferable to taxonomies because folksonomies democratize the way information is organized, they are more useful to users because they reflect current ways of thinking about domains, and they express more information about domains.[24]Critics claim that folksonomies are messy and thus harder to use, and can reflect transient trends that may misrepresent what is known about a field. An empirical analysis of the complex dynamics of tagging systems, published in 2007,[25]has shown that consensus around stable distributions and shared vocabularies does emerge, even in the absence of a centralcontrolled vocabulary. For content to be searchable, it should be categorized and grouped. While this was believed to require commonly agreed on sets of content describing tags (much like keywords of a journal article), some research has found that in large folksonomies common structures also emerge on the level of categorizations.[26]Accordingly, it is possible to devise mathematicalmodels of collaborative taggingthat allow for translating from personal tag vocabularies (personomies) to the vocabulary shared by most users.[27] Folksonomy is unrelated tofolk taxonomy, a cultural practice that has been widely documented in anthropological andfolkloristicwork. Folk taxonomies are culturally supplied, intergenerationally transmitted, and relatively stable classification systems that people in a given culture use to make sense of the entire world around them (not just theInternet).[21] The study of the structuring or classification of folksonomy is termedfolksontology.[28]This branch ofontologydeals with the intersection between highly structured taxonomies or hierarchies and loosely structured folksonomy, asking what best features can be taken by both for a system of classification. 
The strength of flat-tagging schemes is their ability to relate one item to others like it. Folksonomy allows large disparate groups of users to collaboratively label massive, dynamic information systems. The strength of taxonomies is their browsability: users can easily start from more generalized knowledge and target their queries towards more specific and detailed knowledge.[29] Folksonomy looks to categorize tags and thus create browsable spaces of information that are easy to maintain and expand.
Social tagging for knowledge acquisition is the specific use of tagging for finding and re-finding specific content for an individual or group. Social tagging systems differ from traditional taxonomies in that they are community-based systems lacking the traditional hierarchy of taxonomies. Rather than a top-down approach, social tagging relies on users to create the folksonomy from the bottom up.[30]
Common uses of social tagging for knowledge acquisition include personal development for individual use and collaborative projects. Social tagging is used for knowledge acquisition in secondary, post-secondary, and graduate education as well as personal and business research. The benefits of finding/re-finding source information are applicable to a wide spectrum of users. Tagged resources are located through search queries rather than by searching through a more traditional file folder system.[31] The social aspect of tagging also allows users to take advantage of metadata from thousands of other users.[30]
Users choose individual tags for stored resources. These tags reflect personal associations, categories, and concepts, all of which are individual representations based on meaning and relevance to that individual. The tags, or keywords, are designated by users. Consequently, tags represent a user's associations corresponding to the resource. Commonly tagged resources include videos, photos, articles, websites, and email.[32] Tags are beneficial for a couple of reasons. First, they help to structure and organize large amounts of digital resources in a manner that makes them easily accessible when users attempt to locate the resource at a later time. The second aspect is social in nature, that is to say that users may search for new resources and content based on the tags of other users. Even the act of browsing through common tags may lead to further resources for knowledge acquisition.[30]
Tags that occur more frequently with specific resources are said to be more strongly connected. Furthermore, tags may be connected to each other. This may be seen in the frequency with which they co-occur. The more often they co-occur, the stronger the connection. Tag clouds are often utilized to visualize connectivity between resources and tags. Font size increases as the strength of association increases.[32]
Tags show interconnections of concepts that were formerly unknown to a user. Therefore, a user's current cognitive constructs may be modified or augmented by the metadata information found in aggregated social tags. This process promotes knowledge acquisition through cognitive irritation and equilibration.
This theoretical framework is known as the co-evolution model of individual and collective knowledge.[32] The co-evolution model focuses on cognitive conflict, in which a learner's prior knowledge and the information received from the environment are dissimilar to some degree.[30][32] When this incongruence occurs, the learner must work through a process of cognitive equilibration in order to make personal cognitive constructs and outside information congruent. According to the co-evolution model, this may require the learner to modify existing constructs or simply add to them.[30] The additional cognitive effort promotes information processing, which in turn allows individual learning to occur.[32]
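As a concrete illustration of the tag-frequency and co-occurrence ideas described above, the following hypothetical Java sketch counts how often each tag is used and how often pairs of tags are applied to the same resource, then scales a tag-cloud font size with frequency. All tags and numbers here are invented for illustration.

```java
import java.util.*;

/** Hypothetical sketch: tag frequency, pairwise co-occurrence counts,
    and frequency-scaled tag-cloud font sizes. Data is invented. */
public class TagCloudSketch {
    public static void main(String[] args) {
        // Each resource carries the set of tags users applied to it.
        List<Set<String>> resources = List.of(
                Set.of("travel", "photo", "beach"),
                Set.of("travel", "photo"),
                Set.of("photo", "family"));

        Map<String, Integer> freq = new HashMap<>();   // tag -> uses
        Map<String, Integer> cooc = new HashMap<>();   // "tagA+tagB" -> co-occurrences
        for (Set<String> tags : resources) {
            List<String> sorted = new ArrayList<>(tags);
            Collections.sort(sorted);                  // canonical pair ordering
            for (int i = 0; i < sorted.size(); i++) {
                freq.merge(sorted.get(i), 1, Integer::sum);
                for (int j = i + 1; j < sorted.size(); j++)
                    cooc.merge(sorted.get(i) + "+" + sorted.get(j), 1, Integer::sum);
            }
        }

        // Font size grows with frequency, as in a simple tag cloud.
        int min = Collections.min(freq.values());
        int max = Collections.max(freq.values());
        freq.forEach((tag, f) -> {
            int px = (max == min) ? 14 : 10 + 14 * (f - min) / (max - min);
            System.out.println(tag + ": " + f + " uses, font " + px + "px");
        });
        System.out.println("co-occurrence strengths: " + cooc);
    }
}
```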
https://en.wikipedia.org/wiki/Social_tagging
Volunteered geographic information(VGI) is the harnessing of tools to create, assemble, and disseminate geographic data provided voluntarily by individuals.[1][2]VGI is a special case of the larger phenomenon known asuser-generated content,[3]and allows people to have a more active role in activities such asurban planningand mapping.[4] VGI can be seen as an extension of critical and participatory approaches togeographic information systems.[5]Some examples of this phenomenon areWikiMapia,OpenStreetMap, andYandex.Map editor. These sites provide general base map information and allow users to create their own content by marking locations where various events occurred or certain features exist, but aren't already shown on the base map. Other examples include 311-style request systems[6]and 3D spatial technology.[7]Additionally, VGI commonly populates the content offered throughlocation-based servicessuch as the restaurant review siteYelp.[8] One of the most important elements of VGI in contrast to standard user-generated content is the geographic element, and its relationship withcollaborative mapping. The information volunteered by the individual is linked to a specific geographic region. While this is often taken to relate to elements of traditional cartography, VGI offers the possibility of including subjective, emotional, or other non-cartographic information.[9] Geo-referenced data produced within services such asTrip Advisor,Flickr,Twitter,[10]Instagram[11]andPanoramiocan be considered as VGI. VGI has attracted concerns aboutdata quality, and specifically about itscredibility[12]and the possibility ofvandalism.[13] The term VGI has been criticized for poorly representing common variations in the data ofOpenStreetMapand other sites: that some of the data is paid, in the case of CloudMade's ambassadors, or generated by another entity, as in US Census data.[14] Because it is gathered by individuals with no formal training, the quality and reliability of VGI is a topic of much debate.[15]Some methods of quality assurance have been tested, namely, the use of control data to verify VGI accuracy.[16] While there is concern over the authority of the data,[17]VGI may provide benefits beyond that of professional geographic information (PGI),[18][19]partly due to its ability to collect and present data not collected or curated by traditional/professional sources.[20][21][22]Additionally, VGI provides positive emotional value to users in functionality, satisfaction, social connection and ethics.[23][24]
https://en.wikipedia.org/wiki/Volunteered_geographic_information
In computing, the working directory of a process is a directory of a hierarchical file system, if any,[nb 1] dynamically associated with the process. It is sometimes called the current working directory (CWD), e.g. in the BSD getcwd[1] function, or just the current directory.[2] When a process refers to a file using a path that is a relative path, such as a path on a Unix-like system that does not begin with a / (forward slash) or a path on Windows that does not begin with a \ (backward slash), the path is interpreted as relative to the process's working directory. So, for example, a process on a Unix-like system with working directory /rabbit-shoes that attempts to create the file foo.txt will end up creating the file /rabbit-shoes/foo.txt.
In most computer file systems, every directory has an entry (usually named ".") which points to the directory itself.
In most DOS and UNIX command shells, as well as in the Microsoft Windows command line interpreters cmd.exe and Windows PowerShell, the working directory can be changed by using the CD or CHDIR commands. In Unix shells, the pwd command outputs a full pathname of the working directory; the equivalent command in DOS and Windows is CD or CHDIR without arguments (whereas in Unix, cd used without arguments takes the user back to their home directory).
The environment variable PWD (in Unix/Linux shells), or the pseudo-environment variables CD (in Windows COMMAND.COM and cmd.exe, but not in OS/2 and DOS), or _CWD, _CWDS, _CWP and _CWPS (under 4DOS, 4OS2, 4NT etc.)[3] can be used in scripts, so that one need not start an external program. Microsoft Windows file shortcuts have the ability to store the working directory.
COMMAND.COM in DR-DOS 7.02 and higher provides ECHOS, a variant of the ECHO command omitting the terminating linefeed.[4][3] This can be used to create a temporary batch job storing the working directory in an environment variable like CD for later use.
Alternatively, under Multiuser DOS and DR-DOS 7.02 and higher, various internal and external commands support a parameter /B (for "Batch").[5] This modifies the output of commands to become suitable for direct command line input (when redirecting it into a batch file) or usage as a parameter for other commands (using it as input for another command). Where CHDIR would issue a directory path like C:\DOS, a command like CHDIR /B would issue CHDIR C:\DOS instead, so that CHDIR /B > RETDIR.BAT would create a temporary batch job allowing the user to return to this directory later on.
The working directory is also displayed by the $P[nb 2] token of the PROMPT command.[6] To keep the prompt short even inside of deep subdirectory structures, the DR-DOS 7.07 COMMAND.COM supports a $W[nb 2] token to display only the deepest subdirectory level. So, where a default PROMPT $P$G would result in, e.g., C:\DOS> or C:\DOS\DRDOS>, a PROMPT $N:$W$G would instead yield C:DOS> and C:DRDOS>, respectively.
A similar facility (using$Wand$w) was added to4DOSas well.[3] Under DOS, the absolute paths of the working directories of all logical volumes are internally stored in an array-like data structure called the Current Directory Structure (CDS), which gets dynamically allocated at boot time to hold the necessary number of slots for all logical drives (or as defined byLASTDRIVE).[7][8][9]This structure imposes a length-limit of 66 characters on the full path of each working directory, and thus implicitly also limits the maximum possible depth of subdirectories.[7]DOS Plusand older issues of DR DOS (up toDR DOS 6.0, withBDOS6.7 in 1991) had no such limitation[8][10][3]due to their implementation using aDOS emulationon top of aConcurrent DOS- (and thusCP/M-86-)derived kernel, which internally organized subdirectories asrelativelinks to parent directories instead of asabsolutepaths.[8][10]SincePalmDOS(with BDOS 7.0) and DR DOS 6.0 (1992 update with BDOS 7.1) and higher switched to use a CDS formaximum compatibilitywith DOS programs as well, they faced the same limitations as present in other DOSes.[8][10] Mostprogramming languagesprovide aninterfaceto thefile systemfunctions of the operating system, including the ability to set (change) the working directory of the program. In theC language, thePOSIXfunctionchdir()effects thesystem callwhich changes the working directory.[11]Its argument is atext stringwith a path to the new directory, either absolute or relative to the old one. Where available, it can be called by a process to set its working directory. There are similar functions in other languages. For example, inVisual Basicit is usually spelledCHDIR(). InJava, the working directory can be obtained through thejava.nio.file.Pathinterface, or through thejava.io.Fileclass. The working directory cannot be changed.[12]
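To illustrate the Java behaviour described above, here is a minimal sketch: the JVM's working directory can be read (via the user.dir system property, java.io.File, or an empty relative java.nio.file.Path) but not changed, so relative paths are typically resolved explicitly against a base directory of the program's choosing. The "data" subdirectory below is hypothetical.

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Minimal sketch: reading (not changing) the working directory in Java. */
public class Cwd {
    public static void main(String[] args) {
        // The classic way: the user.dir system property.
        System.out.println("user.dir      = " + System.getProperty("user.dir"));

        // Via java.io.File: an empty relative path made absolute.
        System.out.println("java.io.File  = " + new File("").getAbsolutePath());

        // Via java.nio.file.Path, as mentioned above.
        Path cwd = Paths.get("").toAbsolutePath();
        System.out.println("java.nio.Path = " + cwd);

        // There is no chdir(); instead, resolve relative paths against a chosen base.
        Path base = cwd.resolve("data");                 // hypothetical subdirectory
        System.out.println("resolved      = " + base.resolve("foo.txt"));
    }
}
```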
https://en.wikipedia.org/wiki/Working_directory
The World Association for Waterborne Transport Infrastructure (PIANC) is an international professional organisation founded in 1885.[1] PIANC's mission today is to provide expert guidance and advice on technical, economic and environmental issues pertaining to waterborne transport infrastructure, including the fields of navigable bodies of water (waterways), such as canals and rivers, as well as ports and marinas. It is headquartered in Brussels in offices provided by the Flemish government of Belgium.
Its earlier names were the Association Internationale Permanente des Congres de Navigation (AIPCN) until 1921, and then the Permanent International Association of Navigation Congresses (PIANC). It is additionally known as the International Navigation Association (French: Association Internationale de Navigation).[citation needed]
On 25 May 1885, the first Inland Navigation Congress was held in Brussels, providing a forum for an international debate on these questions. After some years, the Inland Navigation Congress merged with the Ocean Navigation Congress and the International Navigation Congress was born. During the Congress in Paris in 1900, a Permanent International Commission for the Navigation Congresses was set up. Two years later statutes were adopted and PIANC was born. Congresses are held every four years and are spread all over the world. In 1926 it was decided to release a Bulletin every half year to promote contact between members. The Bulletin still exists, but now in the form of an electronic magazine. In 1979 it was decided to establish permanent Technical Commissions, under whose umbrella Working Groups would be established to study specific topics. The first Working Group report was issued in 1983 and over 100 others have followed.[2] PIANC changed considerably over the years, from an association organising a congress every four years to an association setting technical guidelines and publishing high-ranking reports.
The association is composed of qualifying members,[5] subscribing members and Honorary Members, who can be both legal and natural persons. The qualifying members of the association have the right to vote in the general assembly.
PIANC operates with a President, a General Secretary and Vice-Presidents representing the main areas of the world. The main decision-making body is the General Assembly. Membership is managed by National Sections and the PIANC Headquarters.
PIANC organizes the World Congress every four years in different areas of the world. Other periodic main events are: SMART Rivers, focused on inland navigation infrastructure; the COPEDEC congresses, which are held in countries in transition; and the Mediterranean Days of Coastal and Port Engineering, focused on the Mediterranean area.
The core business of PIANC is the Working Group reports. Working Groups are composed of international experts who are tasked to prepare technical reports and guidelines in the fields of waterborne navigation infrastructure design, construction and management.[6] The Working Groups are created and managed by four technical commissions; there are also several other PIANC commissions. Overall supervision of all Commissions is provided by the Executive Committee (ExCom). The Commissions and ExCom are also involved in the organization of congresses and events at an international level, and in the recognition of awards for excellence in the fields of interest.
The Inland Navigation Commission (InCom) is in charge of the activities of PIANC in the field of inland waterways, in co-operation with MarCom, ReCom and EnviCom. InCom networks with other international organisations and commissions to reach PIANC's strategic goals. Chairperson: (...-current) Mr Philippe Rigo (BE)
The Environmental Commission (EnviCom) is responsible for dealing with broad and very specific environmental issues of interest to PIANC and for representing PIANC in the international organisations dealing with these issues, such as the London Convention, OSPAR, WODA and the EU. It is further recognised that the site-specific environmental impacts of inland navigation or maritime activities are partnered and/or dealt with respectively by InCom and MarCom. Furthermore, EnviCom is responsible for broad and generic environmental issues that cut across all PIANC areas. EnviCom networks with other navigation-related interests and communicates with non-traditional groups dealing with environmental affairs and training needs. The Commission also initiates efforts to enhance PIANC membership with environmental specialists. Chairperson: (2017–present) Mr Todd Bridges (US)
The Maritime Navigation Commission (MarCom) is responsible for dealing with maritime ports and seaways issues of interest to PIANC. MarCom co-operates with other Commissions when issues can be seen to have a broader perspective, for example when they also have an environmental or inland impact. MarCom also co-operates and communicates with other international organisations such as IMO, IAPH, WODA, etc. Chairperson: (...-2019) Mr Francisco Esteban Lefler (ES)
The Recreational Navigation Commission (RecCom) has been established to deal with aspects directly related to sport and recreational navigation, to develop aids for this kind of navigation and to facilitate its integration among the other types of navigation (commercial and fishing). RecCom works closely together with other organisations such as the International Council of Marine Industry Associations (ICOMIA). RecCom is composed of experts coming from all the continents and manages working groups that develop the technical reports and guidelines. The Commission is responsible for the PIANC Marina Excellence Design "Jack Nichol" Award. The Commission also started the PIANC Marina Designer Training Program (MDTP), with the aim of organizing specific courses for marina design. Chairpersons: (2017–present) Mr Esteban L. Biondi (AR); (2009–2017) Mr Elio Ciralli (IT); (2005–2009) Mr Marcello Conti (IT); (1994–2005) Mr Caes Van der Wildt (NL); (1982–1994) Mr P. Hofmann Frisch (DE).
PIANC has established and manages, through the Technical Commissions, several international awards. The different PIANC awards have specific rules and regulations.
https://en.wikipedia.org/wiki/World_Association_for_Waterborne_Transport_Infrastructure
In logic and mathematics, the logical biconditional, also known as the material biconditional or equivalence or bidirectional implication or biimplication or bientailment, is the logical connective used to conjoin two statements {\displaystyle P} and {\displaystyle Q} to form the statement "{\displaystyle P} if and only if {\displaystyle Q}" (often abbreviated as "{\displaystyle P} iff {\displaystyle Q}"[1]), where {\displaystyle P} is known as the antecedent, and {\displaystyle Q} the consequent.[2][3] Nowadays, notations to represent equivalence include {\displaystyle \leftrightarrow ,\Leftrightarrow ,\equiv }.
{\displaystyle P\leftrightarrow Q} is logically equivalent to both {\displaystyle (P\rightarrow Q)\land (Q\rightarrow P)} and {\displaystyle (P\land Q)\lor (\neg P\land \neg Q)}, and to the XNOR (exclusive NOR) Boolean operator, which means "both or neither". Semantically, the only case where a logical biconditional is different from a material conditional is the case where the hypothesis (antecedent) is false but the conclusion (consequent) is true. In this case, the result is true for the conditional, but false for the biconditional.[2]
In the conceptual interpretation, P = Q means "All P's are Q's and all Q's are P's". In other words, the sets P and Q coincide: they are identical. However, this does not mean that P and Q need to have the same meaning (e.g., P could be "equiangular trilateral" and Q could be "equilateral triangle"). When phrased as a sentence, the antecedent is the subject and the consequent is the predicate of a universal affirmative proposition (e.g., in the phrase "all men are mortal", "men" is the subject and "mortal" is the predicate).
In the propositional interpretation, {\displaystyle P\leftrightarrow Q} means that P implies Q and Q implies P; in other words, the propositions are logically equivalent, in the sense that both are either jointly true or jointly false. Again, this does not mean that they need to have the same meaning, as P could be "the triangle ABC has two equal sides" and Q could be "the triangle ABC has two equal angles". In general, the antecedent is the premise, or the cause, and the consequent is the consequence. When an implication is translated by a hypothetical (or conditional) judgment, the antecedent is called the hypothesis (or the condition) and the consequent is called the thesis.
A common way of demonstrating a biconditional of the form {\displaystyle P\leftrightarrow Q} is to demonstrate that {\displaystyle P\rightarrow Q} and {\displaystyle Q\rightarrow P} separately (due to its equivalence to the conjunction of the two converse conditionals[2]). Yet another way of demonstrating the same biconditional is by demonstrating that {\displaystyle P\rightarrow Q} and {\displaystyle \neg P\rightarrow \neg Q}.
When both members of the biconditional are propositions, it can be separated into two conditionals, of which one is called a theorem and the other its reciprocal.[citation needed] Thus whenever a theorem and its reciprocal are true, we have a biconditional. A simple theorem gives rise to an implication, whose antecedent is the hypothesis and whose consequent is the thesis of the theorem. It is often said that the hypothesis is the sufficient condition of the thesis, and that the thesis is the necessary condition of the hypothesis. That is, it is sufficient that the hypothesis be true for the thesis to be true, while it is necessary that the thesis be true if the hypothesis is true.
When a theorem and its reciprocal are true, its hypothesis is said to be the necessary and sufficient condition of the thesis. That is, the hypothesis is both the cause and the consequence of the thesis at the same time.
A number of other notations to represent equivalence have been used historically. Some authors also use {\displaystyle \operatorname {EQ} } or {\displaystyle \operatorname {EQV} } occasionally.[citation needed][vague][clarification needed]
Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false or both operands are true.[2]
The following is a truth table for {\displaystyle A\leftrightarrow B}:
A	B	A ↔ B
T	T	T
T	F	F
F	T	F
F	F	T
When more than two statements are involved, combining them with {\displaystyle \leftrightarrow } might be ambiguous. For example, the statement {\displaystyle x_{1}\leftrightarrow x_{2}\leftrightarrow \cdots \leftrightarrow x_{n}} may be interpreted as the left-associative chain {\displaystyle (\cdots ((x_{1}\leftrightarrow x_{2})\leftrightarrow x_{3})\cdots )\leftrightarrow x_{n}}, or may be interpreted as saying that all {\displaystyle x_{i}} are jointly true or jointly false. As it turns out, these two statements are only the same when zero or two arguments are involved; the corresponding truth tables only show the same bit pattern in the line with no argument and in the lines with two arguments. (The truth tables and Venn diagrams are not reproduced here; one identity they illustrate is that a chained biconditional of an odd number of arguments coincides with their chained exclusive disjunction, e.g. {\displaystyle A\leftrightarrow B\leftrightarrow C\ \Leftrightarrow \ A\oplus B\oplus C}.)
Properties of the biconditional:
- Commutativity: yes
- Associativity: yes
- Distributivity: the biconditional doesn't distribute over any binary function (not even itself), but logical disjunction distributes over the biconditional
- Idempotency: no
- Monotonicity: no
- Truth-preserving: yes (when all inputs are true, the output is true)
- Falsehood-preserving: no (when all inputs are false, the output is not false)
- Walsh spectrum: (2,0,0,2)
- Nonlinearity: 0 (the function is linear)
Like all connectives in first-order logic, the biconditional has rules of inference that govern its use in formal proofs. Biconditional introduction allows one to infer that if B follows from A and A follows from B, then A if and only if B. For example, from the statements "if I'm breathing, then I'm alive" and "if I'm alive, then I'm breathing", it can be inferred that "I'm breathing if and only if I'm alive" or equivalently, "I'm alive if and only if I'm breathing." Or more schematically: from A → B and B → A, infer A ↔ B.
Biconditional elimination allows one to infer a conditional from a biconditional: if A ↔ B is true, then one may infer either A → B, or B → A. For example, if it is true that I'm breathing if and only if I'm alive, then it's true that if I'm breathing, then I'm alive; likewise, it's true that if I'm alive, then I'm breathing. Or more schematically: from A ↔ B, infer A → B; and from A ↔ B, infer B → A.
One unambiguous way of stating a biconditional in plain English is to adopt the form "b if a and a if b" – if the standard form "a if and only if b" is not used. Slightly more formally, one could also say that "b implies a and a implies b", or "a is necessary and sufficient for b". The plain English "if" may sometimes be used as a biconditional (especially in the context of a mathematical definition[15]). In that case, one must take into consideration the surrounding context when interpreting these words.
For example, the statement "I'll buy you a new wallet if you need one" may be interpreted as a biconditional, since the speaker doesn't intend a valid outcome to be buying the wallet whether or not the wallet is needed (as in a conditional). However, "it is cloudy if it is raining" is generally not meant as a biconditional, since it can still be cloudy even if it is not raining. This article incorporates material from Biconditional onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
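The equivalence quoted earlier between the conditional-pair form and the disjunctive form can be checked by a short derivation. The following LaTeX block sketches it, using material implication, distribution, and the removal of contradictory disjuncts.

```latex
\begin{align*}
P \leftrightarrow Q
  &\equiv (P \rightarrow Q) \land (Q \rightarrow P)              % definition as two conditionals \\
  &\equiv (\lnot P \lor Q) \land (\lnot Q \lor P)                % material implication \\
  &\equiv (\lnot P \land \lnot Q) \lor (\lnot P \land P)
     \lor (Q \land \lnot Q) \lor (Q \land P)                     % distribute \\
  &\equiv (P \land Q) \lor (\lnot P \land \lnot Q).              % drop the contradictions
\end{align*}
```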
https://en.wikipedia.org/wiki/Logical_biconditional
Individual human mobility is the study that describes how individual humans move within a network or system.[1] The concept has been studied in a number of fields originating in the study of demographics. Understanding human mobility has many applications in diverse areas, including the spread of diseases,[2][3] mobile viruses,[4] city planning,[5][6][7] traffic engineering,[8][9] financial market forecasting,[10] and nowcasting of economic well-being.[11][12]
In recent years, there has been a surge in large data sets available on human movements. These data sets are usually obtained from cell phone or GPS data, with varying degrees of accuracy. For example, cell phone data is usually recorded whenever a call or a text message has been made or received by the user, and contains the location of the tower that the phone has connected to as well as the time stamp.[13] In urban areas, the user and the telecommunication tower might be only a few hundred meters away from each other, while in rural areas this distance might well be in the region of a few kilometers. Therefore, there is a varying degree of accuracy when it comes to locating a person using cell phone data. These datasets are anonymized by the phone companies so as to hide and protect the identity of actual users. As an example of their usage, researchers[13] used the trajectories of 100,000 cell phone users within a period of six months, while on a much larger scale[14] the trajectories of three million cell phone users were analyzed. GPS data are usually much more accurate, even though they are usually, because of privacy concerns, much harder to acquire. Massive amounts of GPS data describing human mobility are produced, for example, by on-board GPS devices on private vehicles.[15][16] The GPS device automatically turns on when the vehicle starts, and the sequence of GPS points the device produces every few seconds forms a detailed mobility trajectory of the vehicle. Some recent scientific studies compared the mobility patterns emerging from mobile phone data with those emerging from GPS data.[15][16][17]
Researchers have been able to extract very detailed information about the people whose data are made available to the public. This has sparked a great amount of concern about privacy issues. As an example of the liabilities that might arise, New York City released 173 million individual taxi trips. City officials used a very weak cryptography algorithm to anonymize the license number and medallion number, which is an alphanumeric code assigned to each taxi cab.[18] This made it possible for hackers to completely de-anonymize the dataset, and some were even able to extract detailed information about specific passengers and celebrities, including their origin and destination and how much they tipped.[18][19]
At the large scale, when the behaviour is modelled over a period of relatively long duration (e.g. more than one day), human mobility can be described by three major components: the distribution of trip distances, the radius of gyration, and the number of distinct visited locations.
Brockmann,[20] by analysing banknotes, found that the probability of travel distance follows a scale-free random walk known as a Lévy flight of the form {\displaystyle P(r)\sim r^{-(1+\beta )}} where {\displaystyle \beta =0.6}. This was later confirmed by two studies that used cell phone data[13] and GPS data to track users.[15] The implication of this model is that, as opposed to other more traditional forms of random walks such as Brownian motion, human trips tend to be mostly of short distances, with a few long-distance ones.
In Brownian motion, the distribution of trip distances is governed by a bell-shaped curve, which means that the next trip is of a roughly predictable size, the average, whereas in a Lévy flight it might be an order of magnitude larger than the average.
Some people are inherently inclined to travel longer distances than the average, and the same is true for people with a lesser urge for movement. The radius of gyration is used to capture just that, and it indicates the characteristic distance travelled by a person during a time period t.[13] Each user, within his radius of gyration {\displaystyle r_{g}(t)}, will choose his trip distance according to {\displaystyle P(r)}.
The third component models the fact that humans tend to visit some locations more often than would happen under a random scenario. For example, home or workplace or favorite restaurants are visited much more than many other places in a user's radius of gyration. It has been discovered that {\displaystyle S(t)\sim t^{\mu }} where {\displaystyle \mu =0.6}, which indicates a sublinear growth in the number of distinct places visited by an individual. These three measures capture the fact that most trips happen between a limited number of places, with less frequent travels to places outside of an individual's radius of gyration.
Although human mobility is modeled as a random process, it is surprisingly predictable. By measuring the entropy of each person's movement, it has been shown[14] that there is a 93% potential predictability. This means that although there is a great variance in the types of users and the distances that each of them travels, their overall characteristics are highly predictable. The implication is that, in principle, it is possible to accurately model the processes that are dependent on human mobility patterns, such as disease or mobile virus spreading patterns.[21][22][23]
On the individual scale, daily human mobility can be explained by only 17 network motifs. Each individual shows one of these motifs characteristically over a period of several months. This opens up the possibility of reproducing daily individual mobility using a tractable analytical model.[24]
Infectious diseases spread across the globe usually because of long-distance travels of carriers of the disease. These long-distance travels are made using air transportation systems, and it has been shown that "network topology, traffic structure, and individual mobility patterns are all essential for accurate predictions of disease spreading".[21] On a smaller spatial scale, the regularity of human movement patterns and its temporal structure should be taken into account in models of infectious disease spread.[25] Cellphone viruses that are transmitted via Bluetooth are greatly dependent on human interaction and movements. With more people using similar operating systems for their cellphones, it is becoming much easier for a virus epidemic to occur.[22]
In transportation planning, leveraging the characteristics of human movement, such as the tendency to travel short distances with few but regular bursts of long-distance trips, novel improvements have been made to trip distribution models, specifically to the gravity model of migration.[26]
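A minimal sketch of the displacement model above: for a power-law tail {\displaystyle P(r)\sim r^{-(1+\beta )}} with a lower cutoff r0, inverse-transform sampling gives r = r0 · u^(−1/β) for uniform u in (0, 1]. The cutoff value and sample size in the following Java fragment are invented for illustration; it only demonstrates the heavy-tail behavior, not any published analysis.

```java
import java.util.Random;

/** Minimal sketch: sampling trip distances from a power law
    P(r) ~ r^-(1+beta), the Lévy-flight displacement model described above. */
public class LevyTrips {
    public static void main(String[] args) {
        double beta = 0.6;   // exponent reported in the text
        double r0 = 1.0;     // hypothetical minimum trip distance (km)
        Random rng = new Random(42);

        double sum = 0, max = 0;
        int n = 100_000;
        for (int i = 0; i < n; i++) {
            // Inverse transform: F(r) = 1 - (r/r0)^-beta  =>  r = r0 * u^(-1/beta)
            double u = 1.0 - rng.nextDouble();           // u in (0, 1]
            double r = r0 * Math.pow(u, -1.0 / beta);
            sum += r;
            max = Math.max(max, r);
        }
        // For beta <= 1 the theoretical mean diverges, so the sample mean is
        // dominated by rare, very long trips: max dwarfs the typical trip.
        System.out.printf("mean = %.1f km, max = %.1f km%n", sum / n, max);
    }
}
```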
https://en.wikipedia.org/wiki/Individual_mobility
In mathematics, there are two different notions of a ring of sets, both referring to certain families of sets.
In order theory, a nonempty family of sets {\displaystyle {\mathcal {R}}} is called a ring (of sets) if it is closed under union and intersection.[1] That is, the following two statements are true for all sets {\displaystyle A} and {\displaystyle B} in {\displaystyle {\mathcal {R}}}: {\displaystyle A\cup B\in {\mathcal {R}}} and {\displaystyle A\cap B\in {\mathcal {R}}}.
In measure theory, a nonempty family of sets {\displaystyle {\mathcal {R}}} is called a ring (of sets) if it is closed under union and relative complement (set-theoretic difference).[2] That is, the following two statements are true for all sets {\displaystyle A} and {\displaystyle B} in {\displaystyle {\mathcal {R}}}: {\displaystyle A\cup B\in {\mathcal {R}}} and {\displaystyle A\setminus B\in {\mathcal {R}}}.
This implies that a ring in the measure-theoretic sense always contains the empty set. Furthermore, for all sets A and B, {\displaystyle A\cap B=A\setminus (A\setminus B)}, which shows that a family of sets closed under relative complement is also closed under intersection, so that a ring in the measure-theoretic sense is also a ring in the order-theoretic sense.
If X is any set, then the power set of X (the family of all subsets of X) forms a ring of sets in either sense.
If (X, ≤) is a partially ordered set, then its upper sets (the subsets of X with the additional property that if x belongs to an upper set U and x ≤ y, then y must also belong to U) are closed under both intersections and unions. However, in general it will not be closed under differences of sets.
The open sets and closed sets of any topological space are closed under both unions and intersections.[1]
On the real line R, the family of sets consisting of the empty set and all finite unions of half-open intervals of the form (a, b], with a, b ∈ R, is a ring in the measure-theoretic sense.
If T is any transformation defined on a space, then the sets that are mapped into themselves by T are closed under both unions and intersections.[1]
If two rings of sets are both defined on the same elements, then the sets that belong to both rings themselves form a ring of sets.[1]
A ring of sets in the order-theoretic sense forms a distributive lattice in which the intersection and union operations correspond to the lattice's meet and join operations, respectively. Conversely, every distributive lattice is isomorphic to a ring of sets; in the case of finite distributive lattices, this is Birkhoff's representation theorem and the sets may be taken as the lower sets of a partially ordered set.[1]
A family of sets closed under union and relative complement is also closed under symmetric difference and intersection. Conversely, every family of sets closed under both symmetric difference and intersection is also closed under union and relative complement. This is due to the identities {\displaystyle A\cup B=(A\,\triangle \,B)\,\triangle \,(A\cap B)} and {\displaystyle A\setminus B=A\,\triangle \,(A\cap B)}.
Symmetric difference and intersection together give a ring in the measure-theoretic sense the structure of a Boolean ring.
In the measure-theoretic sense, a σ-ring is a ring closed under countable unions, and a δ-ring is a ring closed under countable intersections.
Explicitly, a σ-ring over {\displaystyle X} is a set {\displaystyle {\mathcal {F}}} such that for any sequence {\displaystyle \{A_{k}\}_{k=1}^{\infty }\subseteq {\mathcal {F}}}, we have {\displaystyle \bigcup _{k=1}^{\infty }A_{k}\in {\mathcal {F}}}.
Given a set {\displaystyle X}, a field of sets – also called an algebra over {\displaystyle X} – is a ring that contains {\displaystyle X}. This definition entails that an algebra is closed under absolute complement {\displaystyle A^{c}=X\setminus A}. A σ-algebra is an algebra that is also closed under countable unions, or equivalently a σ-ring that contains {\displaystyle X}. In fact, by de Morgan's laws, a δ-ring that contains {\displaystyle X} is necessarily a σ-algebra as well. Fields of sets, and especially σ-algebras, are central to the modern theory of probability and the definition of measures.
A semiring (of sets) is a family of sets {\displaystyle {\mathcal {S}}} with the properties that (1) {\displaystyle \varnothing \in {\mathcal {S}}}, (2) {\displaystyle {\mathcal {S}}} is closed under intersection, and (3) for all {\displaystyle A,B\in {\mathcal {S}}}, the difference {\displaystyle A\setminus B} is a finite disjoint union of sets in {\displaystyle {\mathcal {S}}}. Every ring (in the measure theory sense) is a semiring. On the other hand, {\displaystyle {\mathcal {S}}:=\{\emptyset ,\{x\},\{y\}\}} on {\displaystyle X=\{x,y\}} is a semiring but not a ring, since it is not closed under unions.
A semialgebra[3] or elementary family[4] is a collection {\displaystyle {\mathcal {S}}} of subsets of {\displaystyle X} satisfying the semiring properties except with (3) replaced with: for all {\displaystyle A\in {\mathcal {S}}}, the complement {\displaystyle X\setminus A} is a finite disjoint union of sets in {\displaystyle {\mathcal {S}}}.
This condition is stronger than (3), which can be seen as follows. If {\displaystyle {\mathcal {S}}} is a semialgebra and {\displaystyle E,F\in {\mathcal {S}}}, then we can write {\displaystyle F^{c}=F_{1}\cup \ldots \cup F_{n}} for disjoint {\displaystyle F_{i}\in {\mathcal {S}}}. Then: {\displaystyle E\setminus F=E\cap F^{c}=E\cap (F_{1}\cup \ldots \cup F_{n})=(E\cap F_{1})\cup \ldots \cup (E\cap F_{n})} and every {\displaystyle E\cap F_{i}\in {\mathcal {S}}} since {\displaystyle {\mathcal {S}}} is closed under intersection, and the terms are disjoint since they are contained in the disjoint {\displaystyle F_{i}}'s. Moreover, the condition is strictly stronger: any {\displaystyle {\mathcal {S}}} that is both a ring and a semialgebra is an algebra, hence any ring that is not an algebra is also not a semialgebra (e.g. the collection of finite sets on an infinite set {\displaystyle X}).
Additionally, a semiring is a π-system where every complement {\displaystyle B\setminus A} is equal to a finite disjoint union of sets in {\displaystyle {\mathcal {F}}}, and a semialgebra is a semiring where every complement {\displaystyle \Omega \setminus A} is equal to a finite disjoint union of sets in {\displaystyle {\mathcal {F}}}.
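As a concrete check of the semiring axioms, here is a sketch using the half-open intervals mentioned earlier: intersections of half-open intervals are half-open intervals, and a set difference splits into at most two disjoint half-open pieces.

```latex
% Semiring axioms for S = { (a,b] : a <= b } on the real line (sketch):
% (1) the empty set is a member: (a,a] = \varnothing \in S.
% (2) closure under intersection:
(a,b] \cap (c,d] = (\max(a,c),\ \min(b,d)]
% (3) differences are finite disjoint unions of members:
(a,b] \setminus (c,d] = (a,\ \min(b,c)] \;\cup\; (\max(a,d),\ b]
% where each piece is read as empty when its endpoints are out of order.
```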
https://en.wikipedia.org/wiki/Ring_of_sets
Aninducement prize contest(IPC) is a competition that awards a cash prize for the accomplishment of a feat, usually of engineering. IPCs are typically designed to extend the limits of human ability. Some of the most famous IPCs include theLongitude prize(1714–1765), theOrteig Prize(1919–1927) and prizes from enterprises such asChallenge Worksand theX Prize Foundation. IPCs are distinct from recognition prizes, such as theNobel Prize, in that IPCs have prospectively defined criteria for what feat is to be achieved for winning the prize, while recognition prizes may be based on the beneficial effects of the feat. Throughout history, there have been instances where IPCs were successfully utilized to push the boundaries of what would have been considered state-of-the-art at the time.[1] The Longitude Prize was a reward offered by theBritishgovernment for a simple and practical method for the precise determination of a ship'slongitude. The prize, established through an Act of Parliament (theLongitude Act) in 1714, was administered by theBoard of Longitude. Another example happened during the first years of theNapoleonic Wars. The French government offered a hefty cash award of 12,000 francs to any inventor who could devise a cheap and effective method of preserving large amounts of food. The larger armies of the period required increased, regular supplies of quality food. Limited food availability was among the factors limiting military campaigns to the summer and autumn months. In 1809, a French confectioner and brewer,Nicolas Appert, observed that food cooked inside a jar did not spoil unless the seals leaked, and developed a method of sealing food in glass jars.[2]The reason for lack of spoilage was unknown at the time, since it would be another 50 years beforeLouis Pasteurdemonstrated the role of microbes in food spoilage. Yet another example is the Orteig Prize which was a $25,000 reward offered on May 19, 1919, by New York hotel ownerRaymond Orteigto the first allied aviator(s) to fly non-stop fromNew York CitytoParisor vice versa. On offer for five years, it attracted no competitors. Orteig renewed the offer for another five years in 1924 when the state of aviation technology had advanced to the point that numerous competitors vied for the prize. Several famous aviators made unsuccessful attempts at the New York–Paris flight before relatively unknown AmericanCharles Lindberghwon the prize in 1927 in hisaircraftSpirit of St. Louis. One of the leading organizations for IPCs isChallenge Works. This social enterprise, originating fromNesta (charity), uses IPCs, or 'Challenge Prizes', to catalyse innovative solutions to the world's largest problems. Their work includes the continuation ofLongitude rewards, for example, theLongitude Prize on Dementia, which seeks to useArtificial intelligenceand other emerging technologies to help those with dementia to manage their symptoms and live independently. Their work in social innovations revolves around 4 key pillars; Climate Response,Global health, Resilient Society and Technology Frontiers. They run a number ofinducement prizesand continue to conduct research intoareaswhere further innovations can make a positive difference. Another organization which develops and manages IPCs is the X PRIZE Foundation. Its mission is to bring about "radical breakthroughs for the benefit of humanity" through incentivized competition. 
It fosters high-profile competitions that motivate individuals, companies and organizations across all disciplines to develop innovative ideas and technologies that help solve the grand challenges that restrict humanity's progress. The most high-profile X PRIZE to date was the Ansari X PRIZE relating to spacecraft development, awarded in 2004. This prize was intended to inspire research and development into technology for space exploration. Indeed, the X Prize has inspired other "letter"-named inducement prize competitions such as the H-Prize, N-Prize, and so forth.
In 2006, there was much interest in prizes for automotive achievement, such as the 250 mpg car.[citation needed]
In Europe there has been a re-emergence of challenge prizes following in the tradition of the Longitude Prize, rewarding solutions which have an impact on social problems. Nesta Challenges, based in London, is an example of this, running prizes for innovations that, for example, reduce social isolation or make renewable energy generators accessible to off-the-grid refugees and returnees.[3]
In some literature on the subject, it has been stated that well-designed IPCs can garner economic activity on the order of 10 to 20 times the amount of the prize face value.[citation needed]
Inducement prizes have a long history as a policy tool for promoting innovation and solving various technical and societal challenges. These prizes offer a compensation reward, which can be in the form of monetary or non-monetary benefits, and aim to engage diverse groups of actors to develop solutions with low barriers to entry.[4] The primary objectives of inducement prizes are to direct research efforts and incentivize the creation of desired technologies. In recent years, national and regional policymakers have increasingly utilized inducement prizes to stimulate innovation. These prizes can be implemented at various territorial levels, such as supranational with H2020 prizes, national with the challenge.gov platform, or local with Tampere hackathons. Inducement prizes provide policy flexibility and a non-prescriptive approach that allows regional policymakers to also address specific societal challenges and concerns related to directionality, legitimacy, and responsibility.[5] Overall, inducement prizes can be an effective policy tool with a challenge-oriented approach for addressing diverse societal challenges.[6]
https://en.wikipedia.org/wiki/Inducement_prize_contest
The Java programming language's Java Collections Framework version 1.5 and later defines and implements the original regular single-threaded Maps, and also new thread-safe Maps implementing the java.util.concurrent.ConcurrentMap interface among other concurrent interfaces.[1] In Java 1.6, the java.util.NavigableMap interface was added, extending java.util.SortedMap, and the java.util.concurrent.ConcurrentNavigableMap interface was added as a subinterface combination. The version 1.8 Map interfaces form a hierarchy (the diagram is not reproduced here).
Sets can be considered sub-cases of corresponding Maps in which the values are always a particular constant which can be ignored, although the Set API uses corresponding but differently named methods. At the bottom of the hierarchy is the java.util.concurrent.ConcurrentNavigableMap, which inherits from multiple parent interfaces.
For unordered access as defined in the java.util.Map interface, the java.util.concurrent.ConcurrentHashMap implements java.util.concurrent.ConcurrentMap.[2] The mechanism is a hash access to a hash table with lists of entries, each entry holding a key, a value, the hash, and a next reference. Previous to Java 8, there were multiple locks, each serializing access to a 'segment' of the table. In Java 8, native synchronization is used on the heads of the lists themselves, and the lists can mutate into small trees when they threaten to grow too large due to unfortunate hash collisions. Also, Java 8 uses the compare-and-set primitive optimistically to place the initial heads in the table, which is very fast. Performance is O(1) on average, but there are occasional delays when rehashing is necessary. After the hash table expands, it never shrinks, possibly leading to a memory 'leak' after entries are removed.
For ordered access as defined by the java.util.NavigableMap interface, java.util.concurrent.ConcurrentSkipListMap was added in Java 1.6,[1] and implements java.util.concurrent.ConcurrentMap and also java.util.concurrent.ConcurrentNavigableMap. It is a skip list which uses lock-free techniques to maintain its tree-like linked structure. Performance is O(log n).
One problem solved by the Java 1.5 java.util.concurrent package is that of concurrent modification. The collection classes it provides may be reliably used by multiple Threads. All Thread-shared non-concurrent Maps and other collections need to use some form of explicit locking such as native synchronization in order to prevent concurrent modification, or else there must be a way to prove from the program logic that concurrent modification cannot occur. Concurrent modification of a Map by multiple Threads will sometimes destroy the internal consistency of the data structures inside the Map, leading to bugs which manifest rarely or unpredictably, and which are difficult to detect and fix. Also, concurrent modification by one Thread with read access by another Thread or Threads will sometimes give unpredictable results to the reader, although the Map's internal consistency will not be destroyed. Using external program logic to prevent concurrent modification increases code complexity and creates an unpredictable risk of errors in existing and future code, although it enables non-concurrent Collections to be used. However, neither locks nor program logic can coordinate external threads which may come in contact with the Collection.
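A brief sketch of thread-safe use of ConcurrentHashMap, relying on its atomic per-key operations rather than any external locking; the key name and update logic are invented for illustration.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Minimal sketch: atomic per-key updates on a ConcurrentHashMap,
    safe for many threads without external synchronization. */
public class CounterSketch {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Long> hits = new ConcurrentHashMap<>();

        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                // merge() is atomic: no lost updates even under contention.
                hits.merge("page", 1L, Long::sum);
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(hits.get("page"));   // always 200000
    }
}
```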
In order to help with the concurrent modification problem, the non-concurrent Map implementations and other Collections use internal modification counters which are consulted before and after a read to watch for changes: the writers increment the modification counters. A concurrent modification is supposed to be detected by this mechanism, throwing a java.util.ConcurrentModificationException,[3] but it is not guaranteed to occur in all cases and should not be relied on. The counter maintenance also reduces performance. For performance reasons, the counters are not volatile, so it is not guaranteed that changes to them will be propagated between Threads. One solution to the concurrent modification problem is using a particular wrapper class provided by a factory in java.util.Collections: public static <K,V> Map<K,V> synchronizedMap(Map<K,V> m), which wraps an existing non-thread-safe Map with methods that synchronize on an internal mutex.[4] There are also wrappers for the other kinds of Collections. This is a partial solution, because it is still possible that the underlying Map can be inadvertently accessed by Threads which keep or obtain unwrapped references. Also, all Collections implement java.lang.Iterable, but the synchronized-wrapped Maps and other wrapped Collections do not provide synchronized iterators, so the synchronization is left to the client code, which is slow and error prone and cannot be expected to be duplicated by every other consumer of the synchronized Map. The entire duration of the iteration must be protected as well. Furthermore, a Map which is wrapped twice in different places will have different internal mutex Objects on which the synchronizations operate, allowing overlap. The delegation reduces performance, but modern just-in-time compilers often inline heavily, limiting the reduction. Inside the wrapper, the mutex is just a final Object and m is the final wrapped Map; the recommended synchronization of an iteration synchronizes on the wrapper rather than on the internal mutex, allowing overlap.[5] Both patterns are shown in the sketch below. Any Map can be used safely in a multi-threaded system by ensuring that all accesses to it are handled by the Java synchronization mechanism. The code using a java.util.concurrent.ReentrantReadWriteLock (also sketched below) is similar to that for native synchronization. However, for safety, the locks should be used in a try/finally block so that early exit, such as a thrown java.lang.Exception or break/continue, is sure to pass through the unlock. This technique is better than using synchronization[6] because reads can overlap each other, although there is a new issue in deciding how to prioritize the writes with respect to the reads. For simplicity, a java.util.concurrent.ReentrantLock can be used instead, which makes no read/write distinction. More operations on the locks are possible than with synchronization, such as tryLock() and tryLock(long timeout, TimeUnit unit). Mutual exclusion has a lock convoy problem, in which threads may pile up on a lock, causing the JVM to maintain expensive queues of waiters and to 'park' the waiting Threads. It is expensive to park and unpark a Thread, and a slow context switch may occur. Context switches require from microseconds to milliseconds, while the Map's own basic operations normally take nanoseconds. Performance can drop to a small fraction of a single Thread's throughput as contention increases.
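The sketch below illustrates the three patterns just described: a simplified view of the wrapper's internals (the real JDK wrapper has many more methods), the recommended client-side iteration, and a ReentrantReadWriteLock-guarded Map with try/finally unlocking. The class and field names are illustrative.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class GuardedMaps {

    // Simplified sketch of the synchronizedMap wrapper's internals as
    // described above: the mutex is just a final Object, m is the final
    // wrapped Map, and every method delegates inside a synchronized block.
    static class SynchronizedMapSketch<K, V> {
        private final Object mutex = new Object();
        private final Map<K, V> m;

        SynchronizedMapSketch(Map<K, V> m) { this.m = m; }

        V get(Object key)     { synchronized (mutex) { return m.get(key); } }
        V put(K key, V value) { synchronized (mutex) { return m.put(key, value); } }
        int size()            { synchronized (mutex) { return m.size(); } }
        // Iterators are not wrapped: iteration must be guarded by the caller.
    }

    public static void main(String[] args) {
        Map<String, Integer> m = Collections.synchronizedMap(new HashMap<>());
        m.put("a", 1);

        // Recommended iteration over a synchronized-wrapped Map: the client
        // must hold a lock for the entire iteration. Note that this
        // synchronizes on the wrapper object, not on the internal mutex.
        Set<String> keys = m.keySet();
        synchronized (m) {
            for (String key : keys) {
                System.out.println(key);
            }
        }
    }

    // Guarding any Map with a ReentrantReadWriteLock: reads may overlap each
    // other, writes are exclusive, and try/finally guarantees the unlock runs
    // even on an exception or other early exit.
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    static final Map<String, Integer> shared = new HashMap<>();

    static Integer read(String k) {
        lock.readLock().lock();
        try {
            return shared.get(k);
        } finally {
            lock.readLock().unlock();
        }
    }

    static void write(String k, Integer v) {
        lock.writeLock().lock();
        try {
            shared.put(k, v);
        } finally {
            lock.writeLock().unlock();
        }
    }
}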
When there is little or no contention for the lock, there is little performance impact beyond the lock's contention test itself. Modern JVMs will inline most of the lock code, reducing it to only a few instructions, keeping the no-contention case very fast. Reentrant techniques like native synchronization or java.util.concurrent.ReentrantReadWriteLock, however, carry extra performance-reducing baggage in the maintenance of the reentrancy depth, affecting the no-contention case as well. The convoy problem seems to be easing with modern JVMs, but it can be hidden by slow context switching: in this case, latency will increase, but throughput will continue to be acceptable. With hundreds of Threads, a context switch time of 10 ms produces latencies measured in seconds. Mutual exclusion solutions also fail to take advantage of all of the computing power of a multiple-core system, because only one Thread is allowed inside the Map code at a time. The implementations of the particular concurrent Maps provided by the Java Collections Framework and others sometimes take advantage of multiple cores using lock-free programming techniques. Lock-free techniques use operations like the compareAndSet() intrinsic method, available on many of the Java classes such as AtomicReference, to do conditional updates of some Map-internal structures atomically. The compareAndSet() primitive is augmented in the JCF classes by native code that can do compareAndSet on special internal parts of some Objects for some algorithms (using 'unsafe' access). The techniques are complex, often relying on the rules of inter-thread communication provided by volatile variables, the happens-before relation, and special kinds of lock-free 'retry loops' (which are unlike spin locks in that they always produce progress). The compareAndSet() relies on special processor-specific instructions. Any Java code can use the compareAndSet() method on the various concurrent classes for other purposes, to achieve lock-free or even wait-free concurrency, which provides finite latency. Lock-free techniques are simple in many common cases and with some simple collections like stacks. A benchmark diagram (not reproduced here) indicates that a regular HashMap wrapped using Collections.synchronizedMap(java.util.Map) may not scale as well as ConcurrentHashMap, or as the ordered ConcurrentNavigableMaps AirConcurrentMap and ConcurrentSkipListMap; flat spots in the curves may be rehashes producing tables bigger than the nursery, and ConcurrentHashMap takes more space (puts measured on an 8-core i7 2.5 GHz system with -Xms5000m to prevent GC). GC and JVM process expansion change the curves considerably, and some internal lock-free techniques generate garbage on contention. Yet another problem with mutual exclusion approaches is that the assumption of complete atomicity made by some single-threaded code creates sporadic, unacceptably long inter-Thread delays in a concurrent environment. In particular, Iterators and bulk operations like putAll() can take a length of time proportional to the Map size, delaying other Threads that expect predictably low latency for non-bulk operations. For example, a multi-threaded web server cannot allow some responses to be delayed by long-running iterations performed by other threads executing other requests that are searching for a particular value.
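A minimal sketch of the lock-free 'retry loop' just described, built on compareAndSet. The class and method names are illustrative; an AtomicInteger would be more natural for a counter, but AtomicReference is used here because it is the class named above.

import java.util.concurrent.atomic.AtomicReference;

public class CasRetryLoop {
    private final AtomicReference<Integer> value = new AtomicReference<>(0);

    public int increment() {
        while (true) {
            Integer current = value.get();          // volatile read: fresh snapshot
            Integer next = current + 1;             // compute the proposed update
            if (value.compareAndSet(current, next)) // install only if unchanged
                return next;
            // Lost the race: another thread updated the value first. Retry with
            // a fresh read. This is not a spin lock: every failed CAS implies
            // some other thread succeeded, so the system always makes progress.
        }
    }

    public static void main(String[] args) {
        CasRetryLoop c = new CasRetryLoop();
        System.out.println(c.increment()); // 1
    }
}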
Related to the latency problem is the fact that Threads which lock the Map have no actual requirement ever to relinquish the lock, so an infinite loop in the owner Thread may propagate permanent blocking to other Threads. Slow owner Threads can sometimes be interrupted. Hash-based Maps are also subject to spontaneous delays during rehashing. The java.util.concurrent packages' solution to the concurrent modification problem, the convoy problem, the predictable latency problem, and the multi-core problem includes an architectural choice called weak consistency. This choice means that reads like get(java.lang.Object) will not block even when updates are in progress, and it is allowable even for updates to overlap with themselves and with reads. Weak consistency allows, for example, the contents of a ConcurrentMap to change during an iteration of it by a single Thread.[7] The Iterators are designed to be used by one Thread at a time. So, for example, a Map containing two entries that are inter-dependent may be seen in an inconsistent way by a reader Thread during modification by another Thread. An update that is supposed to change the key of an Entry (k1,v) to an Entry (k2,v) atomically would need to do a remove(k1) and then a put(k2, v), while an iteration might miss the entry or see it in two places. Retrievals return the value for a given key that reflects the latest previous completed update for that key; thus there is a 'happens-before' relation. There is no way for ConcurrentMaps to lock the entire table. There is no possibility of ConcurrentModificationException as there is with inadvertent concurrent modification of non-concurrent Maps. The size() method may take a long time because it may need to scan the entire Map in some way, as opposed to the corresponding non-concurrent Maps and other collections, which usually include a size field for fast access. When concurrent modifications are occurring, the results reflect the state of the Map at some time, but not necessarily a single consistent state, hence size(), isEmpty() and containsValue(java.lang.Object) may be best used only for monitoring. There are some operations provided by ConcurrentMap that are not in Map - which it extends - to allow atomicity of modifications. The replace(k, v1, v2) method tests whether v1 is the current value in the Entry identified by k and, only if so, replaces it with v2, atomically. The replace(k, v) method does a put(k, v) only if k is already in the Map. Also, putIfAbsent(k, v) does a put(k, v) only if k is not already in the Map, and remove(k, v) removes the Entry for k only if its current value is v. This atomicity can be important for some multi-threaded use cases, but is not related to the weak-consistency constraint. For ConcurrentMaps, each of putIfAbsent(k, v), replace(k, v), replace(k, v1, v2), and remove(k, v) is atomic, but each is equivalent to a short non-atomic sequence of plain Map operations, as sketched below. Because Map and ConcurrentMap are interfaces, new methods cannot be added to them without breaking implementations. However, Java 1.8 added the capability for default interface implementations, and to the Map interface it added default implementations of the new methods getOrDefault(Object, V), forEach(BiConsumer), replaceAll(BiFunction), computeIfAbsent(K, Function), computeIfPresent(K, BiFunction), compute(K, BiFunction), and merge(K, V, BiFunction).
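The non-atomic equivalents mentioned above, following the shapes given in the ConcurrentMap javadoc, are sketched here. Each helper only matches the atomic method's result if no other thread modifies the Map between the steps; that race is exactly what the atomic methods close.

import java.util.Map;
import java.util.Objects;

public class NonAtomicEquivalents {

    // Equivalent of m.putIfAbsent(k, v), but not atomic:
    static <K, V> V putIfAbsent(Map<K, V> m, K k, V v) {
        if (!m.containsKey(k))
            return m.put(k, v);
        else
            return m.get(k);
    }

    // Equivalent of m.replace(k, v), but not atomic:
    static <K, V> V replace(Map<K, V> m, K k, V v) {
        if (m.containsKey(k))
            return m.put(k, v);
        else
            return null;
    }

    // Equivalent of m.replace(k, v1, v2), but not atomic:
    static <K, V> boolean replace(Map<K, V> m, K k, V v1, V v2) {
        if (m.containsKey(k) && Objects.equals(m.get(k), v1)) {
            m.put(k, v2);
            return true;
        } else
            return false;
    }

    // Equivalent of m.remove(k, v), but not atomic:
    static <K, V> boolean remove(Map<K, V> m, K k, V v) {
        if (m.containsKey(k) && Objects.equals(m.get(k), v)) {
            m.remove(k);
            return true;
        } else
            return false;
    }
}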
The default implementations in Map do not guarantee atomicity, but the overriding defaults in ConcurrentMap use lock-free techniques to achieve atomicity, so existing ConcurrentMap implementations are automatically atomic. The lock-free techniques may be slower than overrides in the concrete classes, so concrete classes may choose to implement the methods atomically or not and document their concurrency properties. It is possible to use lock-free techniques with ConcurrentMaps because they include methods of a sufficiently high consensus number, namely infinity, meaning that any number of Threads may be coordinated. The lock-free pattern can also be used by client code; this is unrelated to the internals of the ConcurrentMap. For example, to multiply a value in the Map by a constant C atomically, a retry loop around replace(k, v1, v2) can be used (sketched below); this particular example could be implemented with the Java 8 merge(), but the retry-loop pattern is more general. The putIfAbsent(k, v) is also useful when the entry for the key is allowed to be absent: replace(k, v1, v2) does not accept null parameters, so sometimes a combination of the two is necessary. In other words, if v1 is null, then putIfAbsent(k, v2) is invoked, otherwise replace(k, v1, v2) is invoked (also sketched below). This could be implemented with the Java 8 compute(), but again the pattern is more general. The Java collections framework was designed and developed primarily by Joshua Bloch, and was introduced in JDK 1.2.[8] The original concurrency classes came from Doug Lea's[9] collection package.
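A sketch of the two client-side lock-free patterns described above. The method names and the constant C are illustrative, not from any API; the policy for absent keys in multiplyByC is one possible choice.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ClientRetryPatterns {
    static final int C = 3;

    // Multiply the value for key k by C atomically, via a replace() retry loop.
    static void multiplyByC(ConcurrentMap<String, Integer> map, String k) {
        for (;;) {
            Integer v1 = map.get(k);
            if (v1 == null)
                return;                     // policy choice: absent key, do nothing
            if (map.replace(k, v1, v1 * C))
                return;                     // no concurrent change; done
            // Another thread changed the value between get and replace; retry.
        }
    }

    // Update a value that is allowed to be absent: replace(k, v1, v2) rejects
    // nulls, so putIfAbsent covers the absent case, as described above.
    static void addDelta(ConcurrentMap<String, Integer> map, String k, int delta) {
        for (;;) {
            Integer v1 = map.get(k);
            if (v1 == null) {
                if (map.putIfAbsent(k, delta) == null)
                    return;                 // installed a fresh entry
            } else if (map.replace(k, v1, v1 + delta)) {
                return;                     // updated the existing entry
            }
            // Lost a race either way; retry with a fresh read.
        }
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("x", 2);
        multiplyByC(m, "x");
        addDelta(m, "y", 5);
        System.out.println(m); // x=6, y=5 (iteration order unspecified)
    }
}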
https://en.wikipedia.org/wiki/Java_ConcurrentMap#Lock-free_atomicity
A vigesimal (/vɪˈdʒɛsɪməl/ vij-ESS-im-əl) or base-20 (base-score) numeral system is based on twenty (in the same way in which the decimal numeral system is based on ten). Vigesimal is derived from the Latin adjective vicesimus, meaning 'twentieth'. In a vigesimal place system, twenty individual numerals (or digit symbols) are used, ten more than in the decimal system. One modern method of finding the extra needed symbols is to write ten as the letter A, or A₂₀ (where the subscript 20 means base 20), to write nineteen as J₂₀, and the numbers between with the corresponding letters of the alphabet. This is similar to the common computer-science practice of writing hexadecimal numerals over 9 with the letters "A–F". Another, less common method skips over the letter "I", in order to avoid confusion between I₂₀ as eighteen and the numeral one, so that the number eighteen is written as J₂₀ and nineteen is written as K₂₀. The number twenty is written as 10₂₀. In the rest of this article below, numbers are expressed in decimal notation, unless specified otherwise. For example, 10 means ten and 20 means twenty. Numbers in vigesimal notation use the convention that I means eighteen and J means nineteen. As 20 is divisible by two and five and is adjacent to 21, the product of three and seven, thus covering the first four prime numbers, many vigesimal fractions have simple representations, whether terminating or recurring (although thirds are more complicated than in decimal, repeating two digits instead of one). In decimal, dividing by three twice (ninths) gives only one-digit periods (1/9 = 0.1111..., for instance) because 9 is the number below ten. 21, however, the number adjacent to 20 that is divisible by 3, is not divisible by 9; ninths in vigesimal have six-digit periods. As 20 has the same prime factors as 10 (two and five), a fraction will terminate in decimal if and only if it terminates in vigesimal. The prime factorization of twenty is 2² × 5, so it is not a perfect power. However, its squarefree part, 5, is congruent to 1 (mod 4). Thus, according to Artin's conjecture on primitive roots, vigesimal has infinitely many cyclic primes, but the fraction of primes that are cyclic is not necessarily ~37.395%. An UnrealScript program that computes the lengths of recurring periods of various fractions in a given set of bases found that, of the first 15,456 primes, ~39.344% are cyclic in vigesimal. Many cultures that use a vigesimal system count in fives to twenty, then count twenties similarly. Such a system is referred to as quinary-vigesimal by linguists. Examples include Greenlandic, Iñupiaq, Kaktovik, Maya, Nunivak Cupʼig, and Yupʼik numerals.[1][2][3] Vigesimal systems are common in Africa, for example in Yoruba.[4] While the Yoruba number system may be regarded as a vigesimal system, it is complex.[further explanation needed] There is some evidence of base-20 usage in the Māori language of New Zealand with the prefix hoko- (e.g. hokowhitu, hokotahi).[citation needed] In several European languages like French and Danish, 20 is used as a base, at least with respect to the linguistic structure of the names of certain numbers (though a thoroughgoing, consistent vigesimal system, based on the powers 20, 400, 8000, etc., is not generally used). Open Location Code uses a word-safe version of base 20 for its geocodes. The characters in this alphabet were chosen to avoid accidentally forming words.
The developers scored all possible sets of 20 letters in 30 different languages for likelihood of forming words, and chose a set that formed as few recognizable words as possible.[16] The alphabet is also intended to reduce typographical errors by avoiding visually similar digits, and is case-insensitive. A table (not reproduced here) shows the Maya numerals and the number names in Yucatec Maya, in Nahuatl in modern orthography, and in Classical Nahuatl.
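Java's built-in radix support (maximum radix 36) can illustrate the letter-digit convention described above; Java uses the lowercase letters a–j for digit values ten through nineteen and does not skip "i", matching the first notation. A minimal sketch:

public class VigesimalDemo {
    public static void main(String[] args) {
        // Decimal to vigesimal:
        System.out.println(Integer.toString(399, 20));  // "jj"  (19*20 + 19)
        System.out.println(Integer.toString(20, 20));   // "10"  (twenty is 10 in base 20)

        // Vigesimal to decimal:
        System.out.println(Integer.parseInt("d", 20));  // 13
        System.out.println(Integer.parseInt("1a", 20)); // 30   (1*20 + 10)
    }
}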
https://en.wikipedia.org/wiki/Vigesimal
Software testing is the act of checking whether software satisfies expectations. Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor.[1] Software testing can determine the correctness of software for specific scenarios but cannot determine correctness for all scenarios.[2][3] It cannot find all bugs. Based on criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include specifications, contracts,[4] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws. Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing code and its associated documentation. Software testing is often used to answer the question: does the software do what it is supposed to do and what it needs to do? Information learned from software testing may be used to improve the process by which software is developed.[5]: 41–43 Software testing is often said to follow a "pyramid" approach, wherein most tests are unit tests, followed by integration tests, with end-to-end (e2e) tests having the lowest proportion.[6][7][8] A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.[9][dubious–discuss] Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.[citation needed] Glenford J. Myers initially introduced the separation of debugging from testing in 1979.[10] Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."[10]: 16), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Software testing is typically goal driven. Software testing typically includes handling software bugs – defects in the code that cause undesirable results.[11]: 31 Bugs generally slow testing progress and involve programmer assistance to debug and fix. Not all defects cause a failure. For example, a defect in dead code will not be considered a failure. A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new computer hardware, changes in data, and interacting with different software.[12] A single defect may result in multiple failure symptoms. Software testing may reveal a requirements gap – an omission from the design for a requirement.[5]: 426 Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security. A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[3]: 17–18[13] Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (how the software is supposed to be versus what it is supposed to do) – usability, scalability, performance, compatibility, and reliability – can be subjective; something that constitutes sufficient value to one person may not to another.
Although testing for every possible input is not feasible, testing can use combinatorics to maximize coverage while minimizing tests.[14] Testing can be categorized in many ways.[15] Software testing can be categorized into levels based on how much of the software system is the focus of a test.[18][19][20][21] There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.[23][24] Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules.[23][24] Typical techniques for this are either using stubs/drivers or execution from a debugger environment.[24] Static testing involves verification, whereas dynamic testing also involves validation.[24] Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions.[25] This is related to offline runtime verification and log analysis. The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing[28]) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing[29][30]). Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box, which includes aspects of both, may also be applied to software testing methodology.[31][32] White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.[31][32] This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT). While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.[33] It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. A number of techniques are used in white-box testing.[32][34] Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing.
This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[35] Code coverage as a software metric can be reported as a percentage for measures such as function coverage, statement coverage, and branch or decision coverage.[31][35][36] 100% statement coverage ensures that every statement is executed at least once, though not that every branch or path of the control flow is taken. Even full coverage is helpful in ensuring correct functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.[37] Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.[38] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.[31][32][36] Specification-based testing aims to test the functionality of software according to the applicable requirements.[39] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[40] Black-box testing can be used at any level of testing, although usually not at the unit level.[33] Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[41] The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[42][43] The data being passed can be considered as "message packets", and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[42] Unusual data values in an interface can help explain unexpected performance in the next unit. The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[44][45] At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding.
Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement while the important bugs can be found quickly.[46] In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of the tester(s) to base testing on documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.[46] However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.[46] Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box, level. The tester will often have access to both "the source code and the executable binary."[47] Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages.[47] Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions, such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This particularly applies to data type handling, exception handling, and so on.[48] With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.[33] Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known as installation testing.[49]: 139 These procedures may involve full or partial upgrades and install/uninstall processes.
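As a small illustration of the black-box techniques listed earlier, the sketch below applies boundary value analysis and equivalence partitioning to a hypothetical grade() function. JUnit 5 is assumed as the test library; the function and its specification (pass at a score of 50 and above) are invented for the example, and in genuine black-box testing the tester would not read the implementation.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class GradeBoundaryTest {
    // Hypothetical implementation under test.
    static String grade(int score) { return score >= 50 ? "pass" : "fail"; }

    @Test
    void valuesAtAndAroundTheBoundary() {
        // Boundary value analysis: just below, at, and just above the boundary.
        assertEquals("fail", grade(49));
        assertEquals("pass", grade(50));
        assertEquals("pass", grade(51));
        // Equivalence partitioning: one representative from each partition.
        assertEquals("fail", grade(0));
        assertEquals("pass", grade(100));
    }
}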
A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library. Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test. Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development,[50] due to checking numerous details in prior software features; even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. Regression tests can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Acceptance testing is system-level testing to ensure the software meets customer expectations.[51][52][53][54] Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed] Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test.[54] Sometimes, user acceptance testing (UAT) is performed by the customer, in their environment and on their own hardware. Operational acceptance testing (OAT) is used to conduct operational readiness (pre-release) checks of a product, service, or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment.
Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system. In addition, software testing should ensure that the portability of the system, as well as its working as expected, does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[55] Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the regulations relevant to the software product. Both of these tests can be performed by users or independent testers. Regulatory acceptance testing sometimes involves the regulatory agencies auditing the test results.[54] Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.[56] Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team, known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[57] Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[58][59] Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[60][61] Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.[citation needed] Software fault injection, in the form of fuzzing, is an example of failure testing.
Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing. Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. When load testing is performed as a non-functional activity, it is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably. Real-time software systems have strict timing constraints. To test whether timing constraints are met, real-time testing is used. Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.[62] Accessibility testing is done to ensure that the software is accessible to persons with disabilities; a number of common web accessibility tests exist. Security testing is essential for software that processes confidential data, to prevent system intrusion by hackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."[63] Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and to make it easier to identify when the localization process may introduce new bugs into the product. Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones.[64] Actual translation to human languages must be tested, too; many kinds of localization and globalization failures are possible. Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs.
It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices. A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature or to a modified version (treatment), and data is collected to determine which version is better at achieving the desired outcome. Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling. In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language. Creating a display of expected output, whether as a data comparison of text or screenshots of the UI,[3]: 195 is sometimes called snapshot testing or golden master testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies. Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a monotonically non-decreasing list containing exactly the same elements as its input. Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or of inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test. Property testing is also sometimes known as "generative testing" or "QuickCheck testing", since it was introduced and popularized by the Haskell library QuickCheck.[65] Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and the test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases, or of determining whether the actual outputs agree with the expected outcomes. VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control.
It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test. The technique was popularized in web development by the Ruby library vcr. In an organization, testers may be in a separate team from the rest of the software development team, or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers. In the 1980s, the term software tester started to be used to denote a separate profession. Notable software testing roles and titles include:[66] test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator.[67] Organizations that develop software perform testing differently, but there are common patterns.[2] In waterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer.[68] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[10]: 145–146 Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes.[69] Agile software development commonly involves testing while the code is being written, and organizing teams with both programmers and testers, with team members performing both programming and testing. One agile practice, test-driven software development (TDD), is a way of unit testing such that unit-level testing is performed while writing the product code.[70] Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. The goal of this continuous integration is to support development and reduce defects.[71][70] Even in organizations that separate teams by programming and testing functions, many often have the programmers perform unit testing.[72] A sample sequence of testing activities is common for waterfall development; the same activities are commonly found in other development models, but might be described differently. Software testing is used in association with verification and validation:[73] verification asks whether each development product is built correctly with respect to its input specification, while validation asks whether the software meets the stakeholders' actual needs. The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology,[11]: 80–81 verification is the process of evaluating the products of a development phase to determine whether they satisfy the conditions imposed at the start of that phase, while validation is the process of evaluating software during or at the end of development to determine whether it satisfies the specified requirements. According to the ISO 9000 standard, verification is confirmation, through the provision of objective evidence, that specified requirements have been fulfilled, while validation is confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings. In the case of the IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And the products mentioned in the definition of verification are the output artifacts of every phase of the software development process. These products are, in fact, specifications, such as an Architectural Design Specification, a Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here; more on this subject below).
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS, because it is the first one (it can be validated, though). For example, the Design Specification must implement the SRS, and the Construction phase artifacts must implement the Design Specification. So, when these words are defined in common terms, the apparent contradiction disappears. Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software, or a prototype of any kind (dynamic testing), and obtaining positive feedback from them can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code), must be validated dynamically with the stakeholders by executing the software and having them try it. Some might argue that, for the SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable, as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document. In some organizations, software testing is part of a software quality assurance (SQA) process.[3]: 347 In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed] Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers. Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing. A software testing process can produce several artifacts; the actual artifacts produced are a factor of the software development model used and of stakeholder and organisational needs. A test plan is a document detailing the approach that will be taken for intended test activities.
The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.[51] The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).[51] A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact. A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.[75] This can be as terse as "for condition x your derived result is y", although normally test cases describe the input scenario and the expected results in more detail. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table. A test script is a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program. In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or the project. There are techniques for generating test data. The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness. A test run is a collection of test cases or test suites that the user is executing and comparing the expected with the actual results. Once complete, a report of all executed tests may be generated. Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in the controversy section. There are several major software testing controversies. It is commonly believed that the earlier a defect is found, the cheaper it is to fix it.
A frequently cited table (not reproduced here) shows the cost of fixing a defect depending on the stage at which it was found.[85] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time. The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis: The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.[86]
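As a concrete illustration of the property testing described earlier, the sketch below hand-rolls the sort property (output monotonically non-decreasing and containing exactly the same elements as the input) in plain Java. A library such as QuickCheck or one of its Java ports would normally supply the input generation and failure shrinking; the class name, seed, and run count here are arbitrary.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SortPropertyTest {
    public static void main(String[] args) {
        Random rng = new Random(42);                 // fixed seed for reproducibility
        for (int run = 0; run < 1000; run++) {
            // Randomly generate an input of random length.
            List<Integer> input = new ArrayList<>();
            int n = rng.nextInt(100);
            for (int i = 0; i < n; i++) input.add(rng.nextInt());

            List<Integer> output = new ArrayList<>(input);
            Collections.sort(output);                // the "program" under test

            // Property 1: the output is monotonically non-decreasing.
            for (int i = 1; i < output.size(); i++)
                if (output.get(i - 1) > output.get(i))
                    throw new AssertionError("not sorted: " + output);

            // Property 2: the output has the same multiset of elements as the input.
            List<Integer> check = new ArrayList<>(input);
            Collections.sort(check);
            if (!check.equals(output))
                throw new AssertionError("elements changed");
        }
        System.out.println("1000 random inputs satisfied both properties");
    }
}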
https://en.wikipedia.org/wiki/Software_testing#Fuzz_testing
In syntax, shifting occurs when two or more constituents appearing on the same side of their common head exchange positions, in a sense, to obtain non-canonical order. The most widely acknowledged type of shifting is heavy NP shift,[1] but shifting involving a heavy NP is just one manifestation of the shifting mechanism. Shifting occurs in most if not all European languages, and it may in fact be possible in all natural languages, including sign languages.[2] Shifting is not inversion, and inversion is not shifting, but the two mechanisms are similar insofar as they are both present in languages like English that have relatively strict word order. The theoretical analysis of shifting varies in part depending on the theory of sentence structure that one adopts. If one assumes relatively flat structures, shifting does not result in a discontinuity.[3] Shifting is often motivated by the relative weight of the constituents involved. The weight of a constituent is determined by a number of factors: e.g., number of words, contrastive focus, and semantic content. Shifting can be illustrated with pairs of sentences in which the first shows what can be considered canonical order, whereas the second gives an alternative order that results from shifting. In such a pair, the first sentence with canonical order, where the object noun phrase (NP) precedes the oblique prepositional phrase (PP), is marginal due to the relative 'heaviness' of the NP compared to the PP. The second sentence, which shows shifting, is better because it has the lighter PP preceding the much heavier NP. Shifting also occurs with particle verbs: when the object of the particle verb is a pronoun, the pronoun must precede the particle, whereas when the object is an NP, the particle can precede the NP. Each of the two constituents involved is said to shift, whereby this shifting is motivated by the weight of the two relative to each other. In English verb phrases, heavier constituents tend to follow lighter constituents. Shifting can likewise be illustrated using pronouns, clauses, and PPs: when a pronoun appears, it is much lighter than the PP, so it precedes the PP; but if a full clause appears, it is heavier than the PP and can therefore follow it. The syntactic category of the constituents involved in shifting is not limited; the shifted constituents can even be of the same type, e.g. two PPs or two NPs. The latter case illustrates again that shifting is often motivated by the relative weight of the constituents involved; for instance, the NP anyone who used Wikipedia is heavier than the NP a cheater. The examples so far concern shifting occurring in verb phrases, but shifting is not restricted to verb phrases; it can also occur, for instance, in NPs. Such examples again illustrate shifting that is motivated by the relative weight of the constituents involved: the heavier of the two constituents prefers to appear further to the right. The example sentences considered so far all have the shifted constituents appearing after their head (see below). Constituents that precede their head can also shift, e.g. a subject and a modal adverb swapping positions; since the finite verb is viewed as the head of the clause in each case, such data allow an analysis in terms of shifting. In other languages that have many head-final structures, shifting in the pre-head domain is a common occurrence.
If one assumes relatively flat structures, the analysis of many canonical instances of shifting is straightforward. Shifting occurs among two or more sister constituents that appear on the same side of their head.[4] The following trees illustrate the basic shifting constellation in a phrase structure grammar (= constituency grammar) first and in a dependency grammar second. The two constituency-based trees show a flat VP that allows n-ary branching (as opposed to just binary branching); the two dependency-based trees show the same VP. Regardless of whether one chooses the constituency- or the dependency-based analysis, the important thing about these examples is the relative flatness of the structure. This flatness results in a situation where shifting does not necessitate a discontinuity (i.e. no long-distance dependency), for there can be no crossing lines in the trees.

The following trees further illustrate the point. Again, due to the flatness of structure, shifting does not result in a discontinuity. In this example, both orders are acceptable because there is little difference in the relative weight between the two constituents that switch positions.

An alternative analysis of shifting is necessary in a constituency grammar that posits strictly binary branching structures.[5] The more layered binary branching structures would result in crossing lines in the tree, which means movement (or copying) is necessary to avoid these crossing lines. The following trees are (merely) representative of the type of analysis that one might assume given strictly binary branching structures. The analysis shown with the trees assumes binary branching and leftward movement only. Given these restrictions, two instances of movement might be necessary to accommodate the surface order seen in tree b. The material in light gray represents copies that must be deleted in the phonological component.

This sort of analysis of shifting has been criticized by Ray Jackendoff, among others.[6] Jackendoff and Culicover argue for an analysis like that shown with the flatter trees above, whereby heavy NP shift does not result from movement, but rather from a degree of optionality in the ordering of a verb's complements. The preferred order in English is for the direct object to follow the indirect object in a double-object construction, and for adjuncts to follow objects of all kinds; but if the direct object is "heavy", the opposite order may be preferred (since this leads to a more right-branching tree structure, which is easier to process).

From a generativist's perspective, a mysterious property of shifting is that in the case of ditransitive verbs, a shifted direct object prevents extraction of the indirect object via wh-movement. Some generativists use this example to argue against the hypothesis that shifting merely results from a choice between alternative complement orders, a hypothesis that does not imply movement. Their analysis in terms of a strictly binary branching tree resulting from leftward movements would in turn be able to explain this restriction.
However, there are at least two ways of countering this argument. First, if the goal is to explain choice, and choice is assumed to operate over possible orders, then the impossible order is simply not part of the linguistic potential and hence cannot be chosen. Second, if the goal is to explain generation, that is, to explain how a linguistic potential comes to exist in a situation, the phenomenon can be explained in terms of generation and avoidance rules: for instance, one reason for avoiding a potential wording would be an ordering in which a preposition accidentally ends up before a nominal group, as in the example above. In other words, from a functional perspective, we either recognise that these fake clauses are not among the clauses we can choose from (choice of possible clauses), or we say that they are generated but avoided because they might cause listeners to misunderstand what the speaker is saying (generation and avoidance).
https://en.wikipedia.org/wiki/Shifting_(syntax)
Recursion occurs when the definition of a concept or process depends on a simpler or previous version of itself.[1] Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no infinite loop or infinite chain of references can occur. A process that exhibits recursion is recursive. Video feedback displays recursive images, as does an infinity mirror.

In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties: a simple base case (or cases) that does not use recursion, and a recursive step that reduces all other cases toward the base case. For example, the following is a recursive definition of a person's ancestor. One's ancestor is either one's parent (base case), or one's parent's ancestor (recursive step). The Fibonacci sequence is another classic example of recursion: Fib(0) = 0 and Fib(1) = 1 serve as base cases, and Fib(n) = Fib(n − 1) + Fib(n − 2) for all n > 1.

Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: "Zero is a natural number, and each natural number has a successor, which is also a natural number."[2] By this base case and recursive rule, one can generate the set of all natural numbers. Other recursively defined mathematical objects include factorials, functions (e.g., recurrence relations), sets (e.g., the Cantor ternary set), and fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor.

Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.[3] To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps.

Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. When a procedure is defined in terms of itself, this immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete. Even if it is properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old, partially executed invocation of the procedure; this requires some administration of how far various simultaneous instances of the procedure have progressed. For this reason, recursive definitions are very rare in everyday situations.

Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[4][5] This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence.
This is really just a special case of the mathematical definition of recursion. This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... There are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another.[6] Over the years, languages in general have proved amenable to this kind of analysis.

The generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[7] Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.[8]

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word and, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. In order to provide a single denotation for it that is suitably flexible, and is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[9]

A recursive grammar is a formal grammar that contains recursive production rules[10] (a small sketch follows the humor examples below).

Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of: Recursion, see Recursion.

A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself ("recursion 86, 139, 141, 182, 202, 269"). Early versions of this joke can be found in Let's Talk Lisp by Laurent Siklóssy (published by Prentice Hall PTR on December 1, 1975, with a copyright date of 1976) and in Software Tools by Kernighan and Plauger (published by Addison-Wesley Professional on January 11, 1976). The joke also appears in The UNIX Programming Environment by Kernighan and Pike. It did not appear in the first edition of The C Programming Language. The joke is part of the functional programming folklore and was already widespread in the functional programming community before the publication of the aforementioned books.[12][13]

Another joke is that "To understand recursion, you must understand recursion."[11] In the English-language version of the Google web search engine, when a search for "recursion" is made, the site suggests "Did you mean: recursion."[14] An alternative form is the following, from Andrew Plotkin: "If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is."
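Returning to recursive grammars, the production-rule idea can be made concrete with a short program. The following is a minimal, hypothetical Python sketch, in which the rule names and the tiny vocabulary are invented for illustration (echoing the Dorothy thinks that... example above); it sketches the general mechanism, not any particular grammar formalism:

import random

# A toy recursive grammar: a SENTENCE may contain another SENTENCE.
# The vocabulary and rule names here are invented for illustration.
SUBJECTS = ["Dorothy", "Toto", "Tin Man"]
VERBS = ["thinks that", "suspects that", "said that"]
BASE_CLAUSES = ["witches are dangerous", "the road is long"]

def sentence(depth):
    """Expand the recursive rule SENTENCE -> SUBJECT VERB SENTENCE | BASE.

    `depth` counts the remaining embeddings; depth 0 is the base case,
    which stops the recursion so that every expansion terminates.
    """
    if depth == 0:
        return random.choice(BASE_CLAUSES)
    return f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {sentence(depth - 1)}"

if __name__ == "__main__":
    # Each extra level of depth embeds one more clause, so a finite set
    # of rules can generate sentences of arbitrary length.
    for d in range(3):
        print(sentence(d))

Because the rule for SENTENCE mentions SENTENCE on its own right-hand side, finitely many rules generate an unbounded set of sentences; the depth parameter plays the role of the base case that keeps any single expansion finite.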
Recursive acronyms are other examples of recursive humor. PHP, for example, stands for "PHP Hypertext Preprocessor", WINE stands for "WINE Is Not an Emulator", GNU stands for "GNU's Not Unix", and SPARQL denotes the "SPARQL Protocol and RDF Query Language".

The canonical example of a recursively defined set is given by the natural numbers: 0 is in ℕ, and if n is in ℕ, then n + 1 is in ℕ. In mathematical logic, the Peano axioms (or Peano postulates or Dedekind–Peano axioms) are axioms for the natural numbers presented in the 19th century by the German mathematician Richard Dedekind and by the Italian mathematician Giuseppe Peano. The Peano axioms define the natural numbers by means of a recursive successor function, with addition and multiplication as recursive functions.

Another interesting example is the set of all "provable" propositions in an axiomatic system, defined in terms of a proof procedure which is inductively (or recursively) defined as follows: if a proposition is an axiom, it is a provable proposition; and if a proposition can be derived from provable propositions by means of the rules of inference, it is a provable proposition.

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard "middle thirds" technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.

A function may be recursively defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must be reducible to non-recursively defined values: in this case F(0) = 0 and F(1) = 1.

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction—a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).

In set theory, the recursion theorem guarantees that recursively defined functions exist. Given a set X, an element a of X and a function f : X → X, the theorem states that there is a unique function F : ℕ → X (where ℕ denotes the set of natural numbers including zero) such that F(0) = a and F(n + 1) = f(F(n)) for any natural number n. Dedekind was the first to pose the problem of unique definition of set-theoretical functions on ℕ by recursion, and gave a sketch of an argument in the 1888 essay "Was sind und was sollen die Zahlen?"[15]

For the proof of uniqueness, take two functions F : ℕ → X and G : ℕ → X such that F(0) = a, G(0) = a, F(n + 1) = f(F(n)), and G(n + 1) = f(G(n)), where a is an element of X. It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n: in the base case, F(0) = a = G(0); in the induction step, if F(k) = G(k), then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1). By induction, F(n) = G(n) for all n ∈ ℕ.

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming.
This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.

A classic example of recursion is the definition of the factorial function, given here in Python code (a reconstructed sketch appears below): the function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations which define one or more sequences recursively. Some specific kinds of recurrence relation can be "solved" to obtain a non-recursive definition (e.g., a closed-form expression).

Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually the simplicity of instructions. The main disadvantage is that the memory usage of recursive algorithms may grow very quickly, rendering them impractical for larger instances.

Shapes that seem to have been created by recursive processes sometimes appear in plants and animals, such as in branching structures in which one large part branches out into two or more similar smaller parts. One example is Romanesco broccoli.[16]

Authors use the concept of recursivity to foreground the situation in which specifically social scientists find themselves when producing knowledge about the world they are always already part of.[17][18] According to Audrey Alejandro, "as social scientists, the recursivity of our condition deals with the fact that we are both subjects (as discourses are the medium through which we analyse) and objects of the academic discourses we produce (as we are social agents belonging to the world we analyse)."[19] From this basis, she identifies in recursivity a fundamental challenge in the production of emancipatory knowledge, which calls for the exercise of reflexive efforts: we are socialised into discourses and dispositions produced by the socio-political order we aim to challenge, a socio-political order that we may, therefore, reproduce unconsciously while aiming to do the contrary. The recursivity of our situation as scholars – and, more precisely, the fact that the dispositional tools we use to produce knowledge about the world are themselves produced by this world – both evinces the vital necessity of implementing reflexivity in practice and poses the main challenge in doing so.

Recursion is sometimes referred to in management science as the process of iterating through levels of abstraction in large business entities.[20] A common example is the recursive nature of management hierarchies, ranging from line management to senior management via middle management. It also encompasses the larger issue of capital structure in corporate governance.[21]

The Matryoshka doll is a physical artistic example of the recursive concept.[22] Recursion has been used in paintings since Giotto's Stefaneschi Triptych, made in 1320.
Its central panel contains the kneeling figure of Cardinal Stefaneschi, holding up the triptych itself as an offering.[23][24] This practice is more generally known as the Droste effect, an example of the mise en abyme technique. M. C. Escher's Print Gallery (1956) is a print which depicts a distorted city containing a gallery which recursively contains the picture, and so ad infinitum.[25] The film Inception has colloquialized the appending of the suffix -ception to a noun to jokingly indicate the recursion of something.[26]
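A minimal reconstruction of the factorial example described above (base case plus a recursive call on n - 1), together with the Fibonacci definition from the mathematics discussion, might look like this in Python; it is a sketch matching the text's description, not necessarily the exact code the article originally carried:

def factorial(n):
    # Base case: 0! is 1; without it the recursion would never terminate.
    if n == 0:
        return 1
    # Recursive step: reduce the input and combine the result with n.
    return n * factorial(n - 1)

def fib(n):
    # Base cases F(0) = 0 and F(1) = 1; otherwise F(n) = F(n-1) + F(n-2).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Examples: factorial(5) == 120; [fib(i) for i in range(7)] == [0, 1, 1, 2, 3, 5, 8]

Each recursive call adds a stack frame, illustrating the memory cost of recursion noted above; the naive fib also recomputes the same subproblems repeatedly, which is exactly the waste that dynamic programming's bottom-up approach avoids.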
https://en.wikipedia.org/wiki/Recursion#In_art
Pair testing is a software development technique in which two team members work together at one keyboard to test the software application. One does the testing and the other analyzes or reviews the testing. This can be done between one tester and a developer or business analyst, or between two testers, with both participants taking turns at driving the keyboard.[1]

Pair testing is related to pair programming and the exploratory testing of agile software development, where two team members sit together to test the software application. This helps both members learn more about the application and narrows down the root cause of a problem during continuous testing. The developer can find out which portion of the source code is affected by the bug. This record can help in making solid test cases and in narrowing down the problem the next time.

Pair testing is most applicable where the requirements and specifications are not very clear, or where the team is very new and needs to learn the application's behavior quickly. It follows the same principles as pair programming; the two team members should be at the same level.
https://en.wikipedia.org/wiki/Pair_testing
An election is a formal group decision-making process whereby a population chooses an individual or multiple individuals to hold public office. Elections have been the usual mechanism by which modern representative democracy has operated since the 17th century.[1] Elections may fill offices in the legislature, sometimes in the executive and judiciary, and for regional and local government. This process is also used in many other private and business organisations, from clubs to voluntary associations and corporations.

The global use of elections as a tool for selecting representatives in modern representative democracies is in contrast with the practice in the democratic archetype, ancient Athens, where elections were considered an oligarchic institution and most political offices were filled using sortition, also known as allotment, by which officeholders were chosen by lot.[1]

Electoral reform describes the process of introducing fair electoral systems where they are not in place, or improving the fairness or effectiveness of existing systems. Psephology is the study of results and other statistics relating to elections (especially with a view to predicting future results). Election is the fact of electing, or being elected. To elect means "to select or make a decision", and so sometimes other forms of ballot, such as referendums, are referred to as elections, especially in the United States.

Elections were used as early in history as ancient Greece and ancient Rome, and throughout the medieval period to select rulers such as the Holy Roman Emperor (see imperial election) and the pope (see papal election).[2]

The Pala king Gopala (ruled c. 750s–770s CE) in early medieval Bengal was elected by a group of feudal chieftains. Such elections were quite common in contemporary societies of the region.[3][4] In the Chola Empire, around 920 CE, in Uthiramerur (in present-day Tamil Nadu), palm leaves were used for selecting the village committee members. The leaves, with candidate names written on them, were put inside a mud pot. To select the committee members, a young boy was asked to take out as many leaves as the number of positions available. This was known as the Kudavolai system.[5][6]

The first recorded popular elections of officials to public office, by majority vote, where all citizens were eligible both to vote and to hold public office, date back to the Ephors of Sparta in 754 BC, under the mixed government of the Spartan Constitution.[7][8] Athenian democratic elections, where all citizens could hold public office, were not introduced for another 247 years, until the reforms of Cleisthenes.[9] Under the earlier Solonian Constitution (c. 574 BC), all Athenian citizens were eligible to vote in the popular assemblies, on matters of law and policy, and as jurors, but only the three highest classes of citizens could vote in elections; nor were the lowest of the four classes of Athenian citizens (as defined by the extent of their wealth and property, rather than by birth) eligible to hold public office, through the reforms of Solon.[10][11] The Spartan election of the Ephors therefore also predates the reforms of Solon in Athens by approximately 180 years.[12]

Questions of suffrage, especially suffrage for minority groups, have dominated the history of elections.
Males, the dominant cultural group in North America and Europe, often dominated the electorate and continue to do so in many countries.[2] Early elections in countries such as the United Kingdom and the United States were dominated by landed or ruling-class males.[2] By 1920, all Western European and North American democracies had universal adult male suffrage (except Switzerland), and many countries began to consider women's suffrage.[2] Despite legally mandated universal suffrage for adult males, political barriers were sometimes erected to prevent fair access to elections (see civil rights movement).[2]

Elections are held in a variety of political, organizational, and corporate settings. Many countries hold elections to select people to serve in their governments, but other types of organizations hold elections as well. For example, many corporations hold elections among shareholders to select a board of directors, and these elections may be mandated by corporate law.[13] In many places, an election to the government is usually a competition among people who have already won a primary election within a political party.[14] Elections within corporations and other organizations often use procedures and rules that are similar to those of governmental elections.[15]

The question of who may vote is a central issue in elections. The electorate does not generally include the entire population; for example, many countries prohibit those who are under the age of majority from voting, and all jurisdictions require a minimum age for voting. In Australia, Aboriginal people were not given the right to vote until 1962 (see the 1967 referendum entry), and in 2010 the federal government removed the right of prisoners serving three years or more to vote (a large proportion of whom were Aboriginal Australians).

Suffrage is typically only for citizens of the country, though further limits may be imposed. In the European Union, one can vote in municipal elections if one lives in the municipality and is an EU citizen; the nationality of the country of residence is not required.

In some countries, voting is required by law. Eligible voters may be subject to punitive measures such as a fine for not casting a vote. In Western Australia, the penalty for a first-time offender failing to vote is a $20.00 fine, which increases to $50.00 if the offender refused to vote previously.[16]

Historically, the electorate was small, consisting of groups or communities of privileged men such as aristocrats and the male citizens of a city. With the growth of the number of people with bourgeois citizen rights outside of cities, expanding the term citizen, electorates grew to numbers beyond the thousands. Elections with an electorate in the hundreds of thousands appeared in the final decades of the Roman Republic, after voting rights were extended to citizens outside of Rome with the Lex Julia of 90 BC, reaching an electorate of 910,000 and an estimated maximum voter turnout of 10% in 70 BC,[17] a size only comparable to the first elections of the United States. At the same time, the Kingdom of Great Britain had, in 1780, about 214,000 eligible voters, 3% of the whole population.[18] Naturalization can reshape the electorate of a country.[19]

A representative democracy requires a procedure to govern nomination for political office. In many cases, nomination for office is mediated through preselection processes in organized political parties.[20]

Non-partisan systems tend to be different from partisan systems as concerns nominations.
In a direct democracy, one type of non-partisan democracy, any eligible person can be nominated. Although elections were used in ancient Athens, in Rome, and in the selection of popes and Holy Roman emperors, the origins of elections in the contemporary world lie in the gradual emergence of representative government in Europe and North America beginning in the 17th century.

In some systems no nominations take place at all, with voters free to choose any person at the time of voting—with some possible exceptions, such as a minimum age requirement—in the jurisdiction. In such cases, it is not required (or even possible) that the members of the electorate be familiar with all of the eligible persons, though such systems may involve indirect elections at larger geographic levels to ensure that some first-hand familiarity among potential electees can exist at these levels (i.e., among the elected delegates).

Electoral systems are the detailed constitutional arrangements and voting systems that convert the vote into a political decision. The first step is for voters to cast the ballots, which may be simple single-choice ballots, but other types, such as multiple-choice or ranked ballots, may also be used. Then the votes are tallied, for which various vote counting systems may be used, and the voting system then determines the result on the basis of the tally. Most systems can be categorized as either proportional, majoritarian or mixed. Among the proportional systems, the most commonly used are party-list proportional representation (list PR) systems; among the majoritarian are the first-past-the-post electoral system (single-winner plurality voting) and different methods of majority voting (such as the widely used two-round system). Mixed systems combine elements of both proportional and majoritarian methods, with some typically producing results closer to the former (mixed-member proportional) and others to the latter (e.g. parallel voting). Many countries have growing electoral reform movements, which advocate systems such as approval voting, single transferable vote, instant-runoff voting or a Condorcet method; these methods are also gaining popularity for lesser elections in some countries where more important elections still use more traditional counting methods.

While openness and accountability are usually considered cornerstones of a democratic system, the act of casting a vote and the content of a voter's ballot are usually an important exception. The secret ballot is a relatively modern development, but it is now considered crucial in most free and fair elections, as it limits the effectiveness of intimidation.

When elections are called, politicians and their supporters attempt to influence policy by competing directly for the votes of constituents in what are called campaigns. Supporters for a campaign can be either formally organized or loosely affiliated, and frequently utilize campaign advertising. It is common for political scientists to attempt to predict elections via political forecasting methods. The most expensive election campaign included US$7 billion spent on the 2012 United States presidential election, followed by the US$5 billion spent on the 2014 Indian general election.[21]

The nature of democracy is that elected officials are accountable to the people, and they must return to the voters at prescribed intervals to seek their mandate to continue in office. For that reason, most democratic constitutions provide that elections are held at fixed regular intervals.
In the United States, elections for public offices are typically held between every two and six years in most states and at the federal level, with exceptions for elected judicial positions that may have longer terms of office. There is a variety of schedules, for example for presidents: the President of Ireland is elected every seven years, the President of Russia and the President of Finland every six years, the President of France every five years, and the President of the United States every four years.

Predetermined or fixed election dates have the advantage of fairness and predictability. However, they tend to greatly lengthen campaigns, and they make dissolving the legislature (in a parliamentary system) more problematic if the date should happen to fall at a time when dissolution is inconvenient (e.g. when war breaks out). Other states (e.g., the United Kingdom) only set a maximum time in office, and the executive decides exactly when within that limit it will actually go to the polls. In practice, this means the government remains in power for close to its full term, and chooses an election date it calculates to be in its best interests (unless something special happens, such as a motion of no confidence). This calculation depends on a number of variables, such as its performance in opinion polls and the size of its majority.

Rolling elections are elections in which all representatives in a body are elected, but these elections are spread over a period of time rather than all at once. Examples are the presidential primaries in the United States, elections to the European Parliament (where, due to differing election laws in each member state, elections are held on different days of the same week) and, due to logistics, general elections in Lebanon and India. The voting procedure in the legislative assemblies of the Roman Republic is also a classical example. In rolling elections, voters have information about previous voters' choices. While in the first elections there may be plenty of hopeful candidates, in the last rounds consensus on one winner is generally achieved. In today's context of rapid communication, candidates can put disproportionate resources into competing strongly in the first few stages, because those stages affect the reaction of later stages.

In many of the countries with weak rule of law, the most common reason why elections do not meet international standards of being "free and fair" is interference from the incumbent government. Dictators may use the powers of the executive (police, martial law, censorship, physical implementation of the election mechanism, etc.) to remain in power despite popular opinion in favour of removal. Members of a particular faction in a legislature may use the power of the majority or supermajority (passing criminal laws, defining the electoral mechanisms including eligibility and district boundaries) to prevent the balance of power in the body from shifting to a rival faction due to an election.[2]

Non-governmental entities can also interfere with elections, through physical force, verbal intimidation, or fraud, which can result in improper casting or counting of votes. Monitoring for and minimizing electoral fraud is also an ongoing task in countries with strong traditions of free and fair elections.
Problems that prevent an election from being "free and fair" take various forms.[22]

The electorate may be poorly informed about issues or candidates due to lack of freedom of the press, lack of objectivity in the press due to state or corporate control, or lack of access to news and political media. Freedom of speech may be curtailed by the state, favouring certain viewpoints or state propaganda. Scheduling frequent elections can also lead to voter fatigue. Gerrymandering, wasted votes and manipulated electoral thresholds can prevent all votes from counting equally.

Exclusion of opposition candidates from eligibility for office, and needlessly strict nomination rules on who may be a candidate, are some of the ways the structure of an election can be changed to favour a specific faction or candidate. Those in power may arrest or assassinate candidates, suppress or even criminalize campaigning, close campaign headquarters, harass or beat campaign workers, or intimidate voters with violence. Foreign electoral intervention can also occur, with the United States interfering in 81 elections between 1946 and 2000, and Russia or the Soviet Union in 36.[23] In 2018, the most intense interventions, utilizing false information, were by China in Taiwan and by Russia in Latvia; the next highest levels were in Bahrain, Qatar and Hungary.[24]

Electoral fraud can include falsifying voter instructions,[25] violation of the secret ballot, ballot stuffing, tampering with voting machines,[26] destruction of legitimately cast ballots,[27] voter suppression, voter registration fraud, failure to validate voter residency, fraudulent tabulation of results, and use of physical force or verbal intimidation at polling places. Other examples include persuading candidates not to run, such as through blackmail, bribery, intimidation or physical violence.

A sham election, or show election, is an election that is held purely for show; that is, without any significant political choice or real impact on the results of the election.[28] Sham elections are a common event in dictatorial regimes that feel the need to feign the appearance of public legitimacy. Published results usually show nearly 100% voter turnout and high support (typically at least 80%, and close to 100% in many cases) for the prescribed candidates or for the referendum choice that favours the political party in power. Dictatorial regimes can also organize sham elections with results simulating those that might be achieved in democratic countries.[29]

Sometimes, only one government-approved candidate is allowed to run in sham elections, with no opposition candidates allowed, or opposition candidates are arrested on false charges (or even without any charges) before the election to prevent them from running.[30][31][32] Ballots may contain only one "yes" option, or in the case of a simple "yes or no" question, security forces often persecute people who pick "no", thus encouraging them to pick the "yes" option. In other cases, those who vote receive stamps in their passport for doing so, while those who did not vote (and thus do not receive stamps) are persecuted as enemies of the people.[33][34]

Sham elections can sometimes backfire against the party in power, especially if the regime believes they are popular enough to win without coercion, fraud or suppressing the opposition.
The most famous example of this was the 1990 Myanmar general election, in which the government-sponsored National Unity Party suffered a landslide defeat by the opposition National League for Democracy; consequently, the results were annulled.[35]

Examples of sham elections include: the 1929 and 1934 elections in Fascist Italy; the 1942 general election in Imperial Japan; those in Nazi Germany; those in East Germany other than the election in 1990; the 1940 elections of Stalinist "People's Parliaments" to legitimise the Soviet occupation of Estonia, Latvia and Lithuania; those in Egypt under Gamal Abdel Nasser, Anwar Sadat, Hosni Mubarak, and Abdel Fattah el-Sisi; those in Bangladesh under Sheikh Hasina; those in Russia under Vladimir Putin;[36] those in Syria under Hafez al-Assad and his son Bashar al-Assad; those in Venezuela under Hugo Chávez and Nicolás Maduro, most notably in 2018 and 2024; the 1928, 1935, 1942, 1949, 1951 and 1958 elections in Portugal; those in Indonesia during the New Order regime; those in Belarus, most notably in 2020; the 1991 and 2019 Kazakh presidential elections; those in North Korea;[37] and the 1995 and 2002 presidential referendums in Saddam Hussein's Iraq.

In Mexico, all of the presidential elections from 1929 to 1982 are considered to have been sham elections, as the Institutional Revolutionary Party (PRI) and its predecessors governed the country in a de facto single-party system without serious opposition, and they won all of the presidential elections in that period with more than 70% of the vote. The first seriously competitive presidential election in modern Mexican history was that of 1988, in which for the first time the PRI candidate faced two strong opposition candidates, though it is believed that the government rigged the result. The first fair election was held in 1994, though the opposition did not win until 2000.

A predetermined conclusion is permanently established by the regime through suppression of the opposition, coercion of voters, vote rigging, reporting a number of votes received greater than the number of voters, outright lying, or some combination of these. In an extreme example, Charles D. B. King of Liberia was reported to have won by 234,000 votes in the 1927 general election, a "majority" that was over fifteen times larger than the number of eligible voters.[38]

Some scholars argue that the predominance of elections in modern liberal democracies masks the fact that they are actually aristocratic selection mechanisms[39] that deny each citizen an equal chance of holding public office. Such views were expressed as early as the time of ancient Greece by Aristotle.[39] According to the French political scientist Bernard Manin, the inegalitarian nature of elections stems from four factors: the unequal treatment of candidates by voters, the distinction of candidates required by choice, the cognitive advantage conferred by salience, and the costs of disseminating information.[40] These four factors result in the evaluation of candidates based on voters' partial standards of quality and social saliency (for example, skin colour and good looks). This leads to self-selection biases in candidate pools, due to unobjective standards of treatment by voters and the costs (barriers to entry) associated with raising one's political profile.
Ultimately, the result is the election of candidates who are superior (whether in actuality or as perceived within a cultural context) and objectively unlike the voters they are supposed to represent.[40]

Evidence suggests that the concept of electing representatives was originally conceived to be different from democracy.[41] Prior to the 18th century, some societies in Western Europe used sortition as a means to select rulers, a method which allowed regular citizens to exercise power, in keeping with understandings of democracy at the time.[42] The idea of what constituted a legitimate government shifted in the 18th century to include consent, especially with the rise of the Enlightenment. From this point onward, sortition fell out of favor as a mechanism for selecting rulers. On the other hand, elections began to be seen as a way for the masses to express popular consent repeatedly, resulting in the triumph of the electoral process up to the present day.[43]

This conceptual misunderstanding of elections as open and egalitarian, when they are not innately so, may thus be a root cause of the problems in contemporary governance.[44] Those in favor of this view argue that the modern system of elections was never meant to give ordinary citizens the chance to exercise power, merely privileging their right to consent to those who rule.[45] Therefore, the representatives that modern electoral systems select for are too disconnected, unresponsive, and elite-serving.[39][46][47] To deal with this issue, various scholars have proposed alternative models of democracy, many of which include a return to sortition-based selection mechanisms. The extent to which sortition should be the dominant mode of selecting rulers[46] or instead be hybridised with electoral representation[48] remains a topic of debate.
https://en.wikipedia.org/wiki/Election
Net neutrality, sometimes referred to as network neutrality, is the principle that Internet service providers (ISPs) must treat all Internet communications equally, offering users and online content providers consistent transfer rates regardless of content, website, platform, application, type of equipment, source address, destination address, or method of communication (i.e., without price discrimination).[4][5] Net neutrality was advocated for in the 1990s by the presidential administration of Bill Clinton in the United States. Clinton signed the Telecommunications Act of 1996, an amendment to the Communications Act of 1934.[6][7][better source needed] In 2025, an American court ruled that Internet companies should not be regulated like utilities, which weakened net neutrality regulation and put the decision in the hands of the United States Congress and state legislatures.[8]

Supporters of net neutrality argue that it prevents ISPs from filtering Internet content without a court order, fosters freedom of speech and democratic participation, promotes competition and innovation, prevents dubious services, maintains the end-to-end principle, and that users would be intolerant of slow-loading websites. Opponents argue that it reduces investment, deters competition, increases taxes, imposes unnecessary regulations, prevents the Internet from being accessible to lower-income individuals, and prevents Internet traffic from being allocated to the most needed users; they also argue that large ISPs already have a performance advantage over smaller providers, and that there is already significant competition among ISPs with few competitive issues.

The term was coined by Columbia University media law professor Tim Wu in 2003 as an extension of the longstanding concept of a common carrier, which was used to describe the role of telephone systems.[9][10][11][12] Net neutrality regulations may be referred to as common carrier regulations.[13][14] Net neutrality does not block all abilities that ISPs have to impact their customers' services.
Opt-in and opt-out services exist on the end-user side, and filtering can be done locally, as in the filtering of sensitive material for minors.[15] Research suggests that a combination of policy instruments can help realize the range of valued political and economic objectives central to the network neutrality debate.[16] Combined with public opinion, this has led some governments to regulate broadband Internet services as a public utility, similar to the way electricity, gas, and the water supply are regulated, along with limiting providers and regulating the options those providers can offer.[17]

Proponents of net neutrality, who include computer science experts, consumer advocates, human rights organizations, and Internet content providers, assert that net neutrality helps to provide freedom of information exchange, promotes competition and innovation for Internet services, and upholds the standardization of Internet data transmission, which was essential for its growth.[citation needed] Opponents of net neutrality, who include ISPs, computer hardware manufacturers, economists, technologists and telecommunications equipment manufacturers, argue that net neutrality requirements would reduce their incentive to build out the Internet and reduce competition in the marketplace, and may raise their operating costs, which they would have to pass along to their users.[citation needed]

Network neutrality is the principle that all Internet traffic should be treated equally.[18] According to Columbia Law School professor Tim Wu, a public information network will be most useful when this is the case.[19] Internet traffic consists of various types of digital data sent over the Internet between all kinds of devices (e.g., data center servers, personal computers, mobile devices, video game consoles, etc.), using hundreds of different transfer technologies. The data includes email messages; HTML, JSON, and all related web browser MIME content types; text, word processing, spreadsheet, database and other academic, business or personal documents in any conceivable format; audio and video files; streaming media content; and countless other formal, proprietary, or ad-hoc schematic formats—all transmitted via myriad transfer protocols.

Indeed, while the focus is often on the type of digital content being transferred, network neutrality includes the idea that if all such types are to be treated equally, then it follows that any ostensibly arbitrary choice of protocol—that is, the technical details of the actual communications transaction itself—must be as well. For example, the same digital video file could be accessed by viewing it live while the data is being received (HLS), by interacting with its playback from a remote server (DASH), by receiving it in an email message (SMTP), or by downloading it from either a website (HTTP), an FTP server, or via BitTorrent, among other means. Although all of these use the Internet for transport, and the content received locally is ultimately identical, the interim data traffic is dramatically different depending on which transfer method is used. To proponents of net neutrality, this suggests that prioritizing any one transfer protocol over another is generally unprincipled, or that doing so penalizes the free choices of some users.

In sum, net neutrality is the principle that an ISP be required to provide access to all sites, content, and applications at the same speed, under the same conditions, without blocking or giving preference to any content.
Under net neutrality, whether a user connects to Netflix, Wikipedia, YouTube, or a family blog, their ISP must treat them all the same.[20] Without net neutrality, an ISP can influence the quality that each experience offers to end users, which suggests a regime of pay-to-play, where content providers can be charged to improve the exposure of their own products versus those of their competitors.[21]

Under an open Internet system, the full resources of the Internet and the means to operate on it should be easily accessible to all individuals, companies, and organizations.[22] Applicable concepts include: net neutrality, open standards, transparency, lack of Internet censorship, and low barriers to entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some observers as closely related to open-source software, a type of software program whose maker allows users access to the code that runs the program, so that users can improve the software or fix bugs.[23] Proponents of net neutrality see neutrality as an important component of an open Internet, wherein policies such as equal treatment of data and open web standards allow those using the Internet to easily communicate and conduct business and activities without interference from a third party.[24]

In contrast, a closed Internet refers to the opposite situation, wherein established persons, corporations, or governments favor certain uses, restrict access to necessary web standards, artificially degrade some services, or explicitly filter out content. Some countries, such as Thailand, block certain websites or types of sites, and monitor and/or censor Internet use using Internet police, a specialized type of law enforcement, or secret police.[25] Other countries, such as Russia,[26] China,[27] and North Korea,[28] use tactics similar to Thailand's to control the variety of Internet media within their respective countries. In comparison to the United States or Canada, for example, these countries have far more restrictive Internet service providers. This approach is reminiscent of a closed platform system, as the two ideas are highly similar.[29] These systems all serve to hinder access to a wide variety of Internet services, in stark contrast to the idea of an open Internet system.

The term dumb pipe was coined in the early 1990s and refers to the water pipes used in a city water supply system. In theory, these pipes provide a steady and reliable source of water to every household without discrimination; in other words, they connect the user with the source without any intelligence or decrement. Similarly, a dumb network is a network with little or no control or management of its use patterns.[30] Experts in the high-technology field will often compare the dumb pipe concept with smart pipes and debate which one is best applied to a certain portion of Internet policy. These conversations usually treat the two concepts as analogous to the open and closed Internet, respectively. Certain models have been made that aim to outline four layers of the Internet with the understanding of the dumb pipe theory.[31]

The end-to-end principle of network design was first laid out in the 1981 paper End-to-end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark.[32] The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resources being controlled.
According to the end-to-end principle, protocol features are only justified in the lower layers of a system if they are a performance optimization; hence, TCP retransmission for reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has been reached. The authors argued that, in addition to any processing in the intermediate systems, reliable systems tend to require processing in the end-points to operate correctly. They pointed out that most features in the lowest level of a communications system impose costs for all higher-layer clients, even if those clients do not need the features, and are redundant if the clients have to re-implement the features on an end-to-end basis. This leads to the model of a minimal dumb network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. Because the end-to-end principle is one of the central design principles of the Internet, and because the practical means for implementing data discrimination violate it, the principle often enters discussions about net neutrality. The end-to-end principle is closely related to, and sometimes seen as a direct precursor of, the principle of net neutrality.[33]

Traffic shaping is the control of computer network traffic in order to optimize or guarantee performance, improve latency (i.e., decrease Internet response times), or increase usable bandwidth by delaying packets that meet certain criteria.[34] In practice, traffic shaping is often accomplished by throttling certain types of data, such as streaming video or P2P file sharing. More specifically, traffic shaping is any action on a set of packets (often called a stream or a flow) that imposes additional delay on those packets such that they conform to some predetermined constraint (a contract or traffic profile).[35] Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm (a toy token-bucket sketch appears at the end of this article).

If the core of a network has more bandwidth than is permitted to enter at the edges, then good quality of service (QoS) can be obtained without policing or throttling. For example, telephone networks employ admission control to limit user demand on the network core by refusing to create a circuit for the requested connection. During a natural disaster, for example, most users will get a circuit-busy signal if they try to make a call, as the phone company prioritizes emergency calls. Over-provisioning is a form of statistical multiplexing that makes liberal estimates of peak user demand. Over-provisioning is used in private networks such as WebEx and the Internet2 Abilene Network, an American university network. David Isenberg believes that continued over-provisioning will always provide more capacity for less expense than QoS and deep packet inspection technologies.[36][37]

Device neutrality is the principle that, to ensure freedom of choice and freedom of communication for users of network-connected devices, it is not sufficient that network operators do not interfere with their choices and activities; users must be free to use the applications of their choice and hence to remove the applications they do not want.
Device vendors can establish policies for managing applications, but they, too, must be applied neutrally.[citation needed] An unsuccessful bill to enforce network and device neutrality was introduced in Italy in 2015 by Stefano Quintarelli.[38] The law gained formal support at the European Commission[39] from BEUC, the European Consumer Organisation, the Electronic Frontier Foundation and the Hermes Center for Transparency and Digital Human Rights.[citation needed] A similar law was enacted in South Korea,[40] and similar principles were proposed in China.[41] The French telecoms regulator ARCEP has called for the introduction of device neutrality in Europe.[42] The principle has been incorporated in the EU's Digital Markets Act (Articles 6.3 and 6.4).[43][non-primary source needed]

ISPs can choose a balance between a base subscription tariff (a monthly bundle) and pay-per-use (metered payment by MB). The ISP sets an upper monthly threshold on data usage in order to provide an equal share among customers and a fair-use guarantee. This is generally not considered an intrusion, but rather allows for commercial positioning among ISPs.[citation needed] Some networks, like public Wi-Fi, can take traffic away from conventional fixed or mobile network providers, which can significantly change the end-to-end behavior (performance, tariffs).[citation needed]

Discrimination by protocol is the favoring or blocking of information based on aspects of the communications protocol that the computers are using.[44] In the US, a complaint was filed with the Federal Communications Commission against the cable provider Comcast, alleging that it had illegally inhibited users of its high-speed Internet service from using the popular file-sharing software BitTorrent.[45] Comcast admitted no wrongdoing[46] in its proposed settlement of up to US$16 per share in December 2009.[47] However, a U.S. appeals court ruled in April 2010 that the FCC had exceeded its authority when it sanctioned Comcast in 2008. FCC spokeswoman Jen Howard responded, "The court in no way disagreed with the importance of preserving a free and open Internet, nor did it close the door to other methods for achieving this important end."[48] Despite the ruling in favor of Comcast, a study by Measurement Lab in October 2011 verified that Comcast had virtually stopped its BitTorrent throttling practices.[49][50]

During the 1990s, creating a non-neutral Internet was technically infeasible.[51] Originally developed to filter harmful malware, network firewalls with so-called deep packet inspection capabilities were released by the Internet security company NetScreen Technologies in 2003. Deep packet inspection helped make real-time discrimination between different kinds of data possible,[52] and is often used for Internet censorship. One criticism regarding discrimination is that the system set up by ISPs for this purpose is capable of not only discriminating but also scrutinizing the full packet content of communications.
In a practice called zero-rating, companies will not invoice data use related to certain IP addresses, favoring the use of those services. Examples include Facebook Zero,[55] Wikipedia Zero, and Google Free Zone. These zero-rating practices are especially common in the developing world.[56] Aside from zero-rating, ISPs also use certain strategies to reduce the cost of pricing plans, such as sponsored data. In a sponsored data plan, a third party, typically the content provider, steps in and pays the carrier for the data its content consumes, so that the usage is not billed to the subscriber. This is generally used as a way for ISPs to remove out-of-pocket costs from subscribers.[57] (A minimal metering sketch appears below.)

Sometimes ISPs will charge some companies, but not others, for the traffic they cause on the ISP's network. French telecom operator Orange, complaining that traffic from YouTube and other Google sites consists of roughly 50% of total traffic on the Orange network, made a deal with Google in which Google is charged for the traffic incurred on the Orange network.[58] Some also thought that Orange's rival ISP Free throttled YouTube traffic. However, an investigation by the French telecommunications regulator revealed that the network was simply congested during peak hours.[59]
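The accounting behind zero-rating can be illustrated with a toy metering function. The address ranges below are reserved documentation prefixes standing in for a hypothetical zero-rated partner; no real carrier's billing system is being described.

```python
from ipaddress import ip_address, ip_network

# Hypothetical zero-rated destinations (reserved documentation prefixes
# standing in for a favored partner's servers).
ZERO_RATED = [ip_network("203.0.113.0/24")]

def billable_bytes(dst_ip: str, nbytes: int) -> int:
    """Return how many bytes count against the subscriber's data cap."""
    if any(ip_address(dst_ip) in net for net in ZERO_RATED):
        return 0        # traffic to the favored service is free to the user
    return nbytes       # all other traffic is metered normally

print(billable_bytes("203.0.113.7", 10_000))   # 0: zero-rated partner
print(billable_bytes("198.51.100.9", 10_000))  # 10000: counts against the cap
```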
Proponents of net neutrality argue that without new regulations, Internet service providers would be able to profit from and favor their own private networks, and that ISPs would be able to pick and choose to whom they offer greater bandwidth: if one website or company can afford to pay more, the ISP will favor it, which especially stifles up-and-coming private businesses. ISPs are also able to encourage the use of specific services by using private networks to discriminate in what data counts against bandwidth caps. For example, Comcast struck a deal with Microsoft that allowed users to stream television through the Xfinity app on their Xbox 360s without it affecting their bandwidth limit, while using other television streaming apps, such as Netflix, HBO Go, and Hulu, counted towards the limit. Comcast denied that this infringed on net neutrality principles since "it runs its Xfinity for Xbox service on its own, private Internet protocol network."[60] In 2009, when AT&T was bundling the iPhone 3G with its 3G network service, the company placed restrictions on which iPhone applications could run on its network.[61]

According to net neutrality proponents, this capitalization on which content producers ISPs can favor would ultimately lead to fragmentation, where some ISPs would carry content that is not necessarily present on the networks offered by other ISPs. The danger behind fragmentation, as viewed by proponents of net neutrality, is that there could be multiple Internets, where some ISPs offer exclusive Internet applications or services, or make it more difficult to access content that is more easily viewable through other Internet service providers. An example of a fragmented service is television, where some cable providers offer exclusive media from certain content providers.[62]

However, in theory, allowing ISPs to favor certain content and private networks could improve Internet services overall, since providers would be able to recognize packets of information that are more time-sensitive and prioritize them over packets that are less sensitive to latency. The issue, as explained by Robin S. Lee and Tim Wu, is that there are simply too many ISPs and Internet content providers around the world to reach an agreement on how to standardize that prioritization. A proposed solution is to allow all online content to be accessed and transferred freely, while simultaneously offering a fast lane for preferred services that does not discriminate by content provider.[62]

There is disagreement about whether peering is a net neutrality issue.[63] In the first quarter of 2014, streaming website Netflix reached an arrangement with ISP Comcast to improve the quality of its service to Netflix clients.[64] This arrangement was made in response to increasingly slow connection speeds through Comcast over the course of 2013, when average speeds dropped by over 25% from their values a year before to an all-time low. After the deal was struck in January 2014, the Netflix speed index recorded a 66% increase in connection speed. Netflix agreed to a similar deal with Verizon in 2014, after Verizon DSL customers' connection speed dropped to less than 1 Mbit/s early in the year. Netflix spoke out against this deal with a controversial statement, displayed through the Netflix client, to all Verizon customers experiencing low connection speeds.[65] This sparked a dispute between the two companies that led to Verizon's obtaining a cease and desist order on 5 June 2014, which forced Netflix to stop displaying the message.

Pro-net-neutrality arguments have also noted that regulations are necessary because research has shown low tolerance for slow-loading content providers. In a 2009 study conducted by Forrester Research, online shoppers expected the web pages they visited to load content instantly.[66] When a page fails to load at the expected speed, many of them simply click away. One study found that even a one-second delay could lead to "11% fewer page views, a 16% decrease in customer satisfaction, and 7% loss in conversions."[67] This delay can pose a severe problem for small innovators who have created new technology: if a website is slow by default, the general public will lose interest and favor a website that runs faster.
This helps large corporate companies maintain power because they have the means to fund faster Internet speeds,[68] while smaller competitors have fewer financial resources, making it harder for them to succeed online.[69]

Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and throttling of Internet services, all the way to legal enforcement that prevents companies from subsidizing Internet use on particular sites.[70] Contrary to popular rhetoric and statements by various individuals involved in the ongoing academic debate, research suggests that a single policy instrument (such as a no-blocking policy or a quality-of-service tiering policy) cannot achieve the range of valued political and economic objectives central to the debate.[16] As Bauer and Obar suggest, "safeguarding multiple goals requires a combination of instruments that will likely involve government and nongovernment measures. Furthermore, promoting [rights and] goals such as the freedom of speech, political participation, investment, and innovation calls for complementary policies."[71]

Net neutrality is administered on a national or regional basis, though much of the world's focus has been on the conflict over net neutrality in the United States. Net neutrality has been a topic in the US since the early 1990s, as the country was one of the world leaders in providing online services, though it faces the same problems as the rest of the world. In 2019, the Save the Internet Act, intended to "guarantee broadband internet users equal access to online content", was passed by the US House of Representatives[72] but not by the US Senate. Finding an appropriate solution by creating more regulations for ISPs has been a major work in progress. Net neutrality rules were repealed in the US in 2017 during the Trump administration, and subsequent appeals upheld the repeal,[73] until the FCC voted to reinstate the rules in 2024.[74] On 2 January 2025, however, a US appeals court ruled that the Federal Communications Commission "did not have the legal authority to reinstate landmark net neutrality rules."[75]

Governments of countries that comment on net neutrality usually support the concept.[76]

Net neutrality in the United States has been a point of conflict between network users and service providers since the 1990s. Much of the conflict arises from how Internet services are classified by the Federal Communications Commission (FCC) under the authority of the Communications Act of 1934. The FCC would have significant ability to regulate ISPs if Internet services were treated as a Title II "common carrier service"; otherwise, ISPs would be mostly unrestricted by the FCC if Internet services fell under Title I "information services". In 2009, the United States Congress passed the American Recovery and Reinvestment Act of 2009, which granted a stimulus of $2.88 billion for extending broadband services into certain areas of the United States. It was intended to make the Internet more accessible for under-served areas, and aspects of net neutrality and open access were written into the grant. However, the bill never set any significant precedent for net neutrality or influenced future legislation relating to net neutrality.[77] Until 2017, the FCC had generally been favorable towards net neutrality, treating ISPs under Title II as common carriers.
With the onset of the presidency of Donald Trump in 2017, and the appointment of Ajit Pai, an opponent of net neutrality, as chairman of the FCC, the FCC reversed many previous net neutrality rulings and reclassified Internet services as Title I information services.[78] The FCC's decisions have been the subject of several ongoing legal challenges, both by states supporting net neutrality and by ISPs challenging it. The United States Congress has attempted to pass legislation supporting net neutrality but has failed to gain sufficient support. In 2018, a bill cleared the U.S. Senate, with Republicans Lisa Murkowski, John Kennedy, and Susan Collins joining all 49 Democrats, but the House majority denied the bill a hearing.[79] Individual states have been trying to pass legislation making net neutrality a requirement within their state, overriding the FCC's decision. California successfully passed its own net neutrality act, which the United States Department of Justice challenged in court.[80] On 8 February 2021, the U.S. Justice Department withdrew its challenge to California's net neutrality law. Federal Communications Commission Acting Chairwoman Jessica Rosenworcel voiced support for an open Internet and restoring net neutrality.[81] Vermont, Colorado, and Washington, among other states, have also enacted net neutrality laws.[82]

On 19 October 2023, the FCC voted 3–2 to approve a Notice of Proposed Rulemaking (NPRM) seeking comments on a plan to restore net neutrality rules and regulation of Internet service providers.[83] On 25 April 2024, the FCC voted 3–2 to reinstate net neutrality in the United States by reclassifying the Internet under Title II.[84][85] However, legal challenges immediately filed by ISPs resulted in an appeals court issuing an order staying the net neutrality rules until the court made a final ruling, while opining that the ISPs would likely prevail over the FCC on the merits.[86]

On 2 January 2025, net neutrality rules, which disallow broadband providers from selectively interfering with Internet speeds depending on the accessed resource, were struck down by the US Court of Appeals for the Sixth Circuit in MCP No. 185.[87][88] Federal law requires that broadband be classified as an "information service" and not the more heavily regulated "telecommunications service" the FCC said it was when it adopted the rules in April 2024, a three-judge panel for the US Court of Appeals for the Sixth Circuit ruled. The FCC lacked the authority to impose its rules on the broadband providers, the court said.[89]

According to Bloomberg News, the Sixth Circuit's ruling is "one of the highest-profile examples" so far of an appeals court exercising the expanded authority following Loper Bright Enters. v. Raimondo, which overturned a doctrine that had supported agency interpretations of ambiguous laws. The court also rejected a similar FCC classification for mobile broadband providers.[89]

Net neutrality in Canada is a debated issue in that nation, but not to the degree of partisanship seen in other nations such as the United States, in part because of Canada's federal regulatory structure and pre-existing supportive laws that were enacted decades before the debate arose.[90] In Canada, ISPs generally provide Internet service in a neutral manner.
Some notable incidents otherwise have included Bell Canada's throttling of certain protocols and Telus's censorship of a specific website supporting striking union members.[91] In the case of Bell Canada, the debate over net neutrality became a more popular topic when it was revealed that the company was throttling traffic by limiting people's ability to view Canada's Next Great Prime Minister, which eventually led the Canadian Association of Internet Providers (CAIP) to demand that the Canadian Radio-television and Telecommunications Commission (CRTC) take action to prevent the throttling of third-party traffic.[92] On 22 October 2009, the CRTC issued a ruling on Internet traffic management that favored adopting guidelines suggested by interest groups such as OpenMedia.ca and the Open Internet Coalition. However, the guidelines require citizens to file formal complaints proving that their Internet traffic is being throttled, and as a result some ISPs continue to throttle the Internet traffic of their users.[92]

In 2018, the Indian Government unanimously approved new regulations supporting net neutrality. The regulations are considered to be the "world's strongest" net neutrality rules, guaranteeing a free and open Internet for nearly half a billion people,[93] and are expected to help the culture of startups and innovation. The only exceptions to the rules are new and emerging services like autonomous driving and tele-medicine, which may require prioritized Internet lanes and faster-than-normal speeds.[94]

Net neutrality in China is not enforced, and ISPs in China play important roles in regulating the content that is available domestically on the Internet. Several ISPs filter and block content at the national level, preventing domestic Internet users from accessing certain sites or services and foreign Internet users from gaining access to domestic web content. This filtering technology is referred to as the Great Firewall (GFW).[95]

In an article published by Cambridge University Press, the authors examined the political environment surrounding net neutrality in China, observing that Chinese ISPs have become a means for the state to control and restrict information rather than providers of neutral Internet content.[96]
Net neutrality in the Philippines is not enforced. Mobile Internet providers like Globe Telecom and Smart Communications commonly offer data package promos tied to specific applications, games, or websites like Facebook, Instagram, and TikTok.[97][98][99]

In the mid-2010s, Philippine telcos came under fire from the Department of Justice for throttling the bandwidth of subscribers of unlimited data plans if the subscribers exceeded arbitrary data caps imposed by the telcos under a supposed "fair use policy" on their "unlimited" plans.[100] Certain adult sites like Pornhub, Redtube, and XTube have also been blocked by some Philippine ISPs at the request of the Philippine National Police to the National Telecommunications Commission, even without the necessary court orders required by the Supreme Court of the Philippines.[101]

Proponents of net neutrality regulations include consumer advocates, human rights organizations such as Article 19,[102] online companies, and some technology companies.[103] Net neutrality tends to be supported by those on the political left, while opposed by those on the political right.[104]

Many major Internet application companies are advocates of neutrality, such as eBay,[105] Amazon,[105] Netflix,[106] Reddit,[106] Microsoft,[107] Twitter,[citation needed] Etsy,[108] IAC Inc.,[107] Yahoo!,[109] Vonage,[109] and Cogent Communications.[110] In September 2014, an online protest known as Internet Slowdown Day took place to advocate for the equal treatment of Internet traffic. Notable participants included Netflix and Reddit.[106]

Consumer Reports[111] and the Open Society Foundations,[112] along with several civil rights groups, such as the ACLU, the Electronic Frontier Foundation, Free Press, SaveTheInternet, and Fight for the Future, support net neutrality.[113][106]

Individuals who support net neutrality include World Wide Web inventor Tim Berners-Lee,[114] Vinton Cerf,[115][116] Lawrence Lessig,[117] Robert W. McChesney,[118] Steve Wozniak, Susan P. Crawford, Marvin Ammori, Ben Scott, David Reed,[119] and former U.S. President Barack Obama.[120][121] On 10 November 2014, Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality.[122][123][124] On 31 January 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 and section 706 of the Telecommunications Act of 1996[125] to the Internet in a vote expected on 26 February 2015.[126][127][128][129][130]

Supporters of net neutrality in the United States want to designate cable companies as common carriers, which would require them to allow ISPs free access to cable lines, the same model used for dial-up Internet. They want to ensure that cable companies cannot screen, interrupt, or filter Internet content without a court order.[131] Common carrier status would give the FCC the power to enforce net neutrality rules.[132] SaveTheInternet.com accuses cable and telecommunications companies of wanting the role of gatekeepers, able to control which websites load quickly, load slowly, or do not load at all.
According to SaveTheInternet.com, these companies want to charge content providers who require guaranteed speedy data delivery, creating advantages for their own search engines, Internet phone services, and streaming video services, while slowing or blocking access to those of competitors.[133] Vinton Cerf, a co-inventor of the Internet Protocol and current vice president of Google, argues that the Internet was designed without any authorities controlling access to new content or new services.[134] He concludes that the principles responsible for making the Internet such a success would be fundamentally undermined were broadband carriers given the ability to affect what people see and do online.[115] Cerf has also written about the importance of looking at problems like net neutrality through a combination of the Internet's layered system and the multistakeholder model that governs it.[135] He shows how challenges implicating net neutrality can arise in certain infrastructure-based cases, such as when ISPs enter into exclusive arrangements with large building owners, leaving the residents unable to exercise any choice of broadband provider.[136]

Proponents of net neutrality argue that a neutral net will foster free speech and lead to further democratic participation on the Internet. Former Senator Al Franken of Minnesota fears that without new regulations, the major Internet service providers will use their position of power to stifle people's rights; he calls net neutrality the "First Amendment issue of our time."[137] The past two decades have seen an ongoing battle to ensure that all people and websites have equal access to an unrestricted platform regardless of their ability to pay; proponents of net neutrality wish to prevent the need to pay for speech and the further centralization of media power.[138] Lawrence Lessig and Robert W. McChesney argue that net neutrality ensures that the Internet remains a free and open technology, fostering democratic communication. Lessig and McChesney go on to argue that the monopolization of the Internet would stifle the diversity of independent news sources and the generation of innovative and novel web content.[117]

Proponents of net neutrality also invoke the human psychological process of adaptation: when people get used to something better, they do not want to go back to something worse. In the context of the Internet, proponents argue that a user who gets used to the "fast lane" on the Internet would find the slow lane intolerable by comparison, greatly disadvantaging any provider unable to pay for the fast lane. Video providers Netflix[140] and Vimeo,[141] in their comments to the FCC in favor of net neutrality, cite the research[139] of S.S. Krishnan and Ramesh Sitaraman, which provides the first quantitative evidence of adaptation to speed among online video users. Their research studied the patience of millions of Internet video users who waited for a slow-loading video to start playing. Users with faster Internet connectivity, such as fiber-to-the-home, demonstrated less patience and abandoned their videos sooner than similar users with slower Internet connectivity. The results demonstrate how users can become accustomed to faster Internet connectivity, leading to higher expectations of Internet speed and lower tolerance for any delay that occurs.
Author Nicholas Carr[142] and other social commentators[143][144] have written about this habituation phenomenon, stating that a faster flow of information on the Internet can make people less patient.

Net neutrality advocates argue that allowing cable companies the right to demand a toll to guarantee quality or premium delivery would create an exploitative business model based on the ISPs' position as gatekeepers.[145] Advocates warn that by charging websites for access, network owners may be able to block competitor websites and services, as well as refuse access to those unable to pay.[117] According to Tim Wu, cable companies plan to reserve bandwidth for their own television services and charge companies a toll for priority service.[146] Proponents of net neutrality argue that allowing preferential treatment of Internet traffic, or tiered service, would put newer online companies at a disadvantage and slow innovation in online services.[103] Wu argues that, without network neutrality, the Internet will undergo a transformation from a market ruled by innovation to one ruled by deal-making.[146] SaveTheInternet.com argues that net neutrality puts everyone on equal terms, which helps drive innovation; it claims net neutrality is a preservation of the way the Internet has always operated, where the quality of websites and services determined whether they succeeded or failed, rather than deals with ISPs.[133] Lawrence Lessig and Robert W. McChesney argue that eliminating net neutrality would lead to the Internet resembling the world of cable TV, where access to and distribution of content would be managed by a handful of massive, near-monopolistic companies, though there are multiple service providers in each region. These companies would then control what is seen as well as how much it costs to see it. Speedy and secure Internet use for industries such as healthcare, finance, retailing, and gambling could be subject to large fees charged by these companies. They further explain that a majority of the great innovators in the history of the Internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by owners of the networks, maximal competition in this space, and access to the network for outside innovators. Internet content was guaranteed a free and highly competitive space by the existence of net neutrality.[117] For example, in 2005 YouTube was a small startup company. Because there were no Internet fast lanes, YouTube was able to grow larger than Google Video. Tom Wheeler and Senators Ronald Lee Wyden (D-Ore.) and Al Franken (D-Minn.) wrote: "Internet service providers treated YouTube's videos the same as they did Google's, and Google couldn't pay the ISPs [Internet service providers] to gain an unfair advantage, like a fast lane into consumers' homes. Well, it turned out that people liked YouTube a lot more than Google Video, so YouTube thrived."[147]

The lack of competition among Internet providers has been cited as a major reason to support net neutrality.[108] The loss of net neutrality in the U.S. in 2017 increased calls for public broadband.[148]

Net neutrality advocates have sponsored legislation claiming that authorizing incumbent network providers to override the separation of the transport and application layers on the Internet would signal the decline of fundamental Internet standards and international consensus authority.
Further, the legislation asserts that bit-shaping the transport of application data will undermine the transport layer's designed flexibility.[149]

Some advocates say network neutrality is needed to maintain the end-to-end principle. According to Lawrence Lessig and Robert W. McChesney, all content must be treated the same and must move at the same speed for net neutrality to hold. They say that it is this simple but brilliant end-to-end design that has allowed the Internet to act as a powerful force for economic and social good.[117] Under this principle, a neutral network is a dumb network, merely passing packets regardless of the applications they support. This point of view was expressed by David S. Isenberg in his paper The Rise of the Stupid Network. He states that the vision of an intelligent network is being replaced by a new network philosophy and architecture in which the network is designed for always-on use, not intermittence and scarcity. Rather than intelligence being designed into the network itself, intelligence would be pushed out to the end-user devices, and the network would be designed simply to deliver bits without fancy network routing or smart number translation. The data would be in control, telling the network where it should be sent. End-user devices would then be allowed to behave flexibly, as bits would essentially be free and there would be no assumption that the data is of a single data rate or data type.[150]

Contrary to this idea, the research paper End-to-End Arguments in System Design by Saltzer, Reed, and Clark argues that network intelligence does not relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender, nor does it justify the wholesale removal of intelligence from the network core.[151]

Opponents of net neutrality regulations include ISPs, broadband and telecommunications companies, computer hardware manufacturers, economists, and notable technologists. Many of the major hardware and telecommunications companies specifically oppose the reclassification of broadband as a common carrier under Title II. Corporate opponents of this measure include Comcast, AT&T, Verizon, IBM, Intel, Cisco, Nokia, Qualcomm, Broadcom, Juniper, D-Link, Wintel, Alcatel-Lucent, Corning, Panasonic, Ericsson, Oracle, Akamai, and others.[152][153][154][155] The US Telecom and Broadband Association, which represents a diverse array of small and large broadband providers, is also an opponent.[156][157] A 2006 campaign against net neutrality was funded by AT&T, and its members included BellSouth, Alcatel, Cingular, and Citizens Against Government Waste.[158][159][160][161][162]

Nobel Memorial Prize-winning economist Gary Becker's paper "Net Neutrality and Consumer Welfare", published in the Journal of Competition Law & Economics, argues that claims by net neutrality proponents "do not provide a compelling rationale for regulation" because there is "significant and growing competition" among broadband access providers.[163][164] Google chairman Eric Schmidt states that, while Google holds that similar data types should not be discriminated against, it is okay to discriminate across different data types—a position that both Google and Verizon generally agree on, according to Schmidt.[165][166] According to the Journal, when President Barack Obama announced his support for strong net neutrality rules late in 2014, Schmidt told a top White House official the president was making a mistake.
Google once strongly advocated net-neutrality-like rules prior to 2010, but its support for the rules has since diminished; the company nevertheless remains "committed" to net neutrality.[166][167]

Individuals who have opposed net neutrality rules include Bob Kahn,[168][169] Marc Andreessen,[170] Scott McNealy,[171] Peter Thiel and Max Levchin,[163][172] David Farber,[173] David Clark,[174][175] Louis Pouzin,[176] MIT Media Lab co-founder Nicholas Negroponte,[177] Rajeev Suri,[178] Jeff Pulver,[179][better source needed] Mark Cuban,[180] Robert Pepper,[181] and former FCC chairman Ajit Pai. Nobel Prize laureate economists who have opposed net neutrality rules include Princeton economist Angus Deaton, Chicago economist Richard Thaler, MIT economist Bengt Holmström, and the late Chicago economist Gary Becker.[182][183] Others include MIT economists David Autor, Amy Finkelstein, and Richard Schmalensee; Stanford economists Raj Chetty, Darrell Duffie, Caroline Hoxby, and Kenneth Judd; Harvard economist Alberto Alesina; Berkeley economists Alan Auerbach and Emmanuel Saez; and Yale economists William Nordhaus, Joseph Altonji, and Pinelopi Goldberg.[182]

Some civil rights groups, such as the National Urban League, Jesse Jackson's Rainbow/PUSH, and the League of United Latin American Citizens, also opposed Title II net neutrality regulations,[184] citing concerns over stifling investment in underserved areas.[185][186]

The Wikimedia Foundation, which runs Wikipedia, told The Washington Post in 2014 that it had a "complicated relationship" with net neutrality.[187] The organization partnered with telecommunications companies to provide free access to Wikipedia for people in developing countries, under a program called Wikipedia Zero, without requiring mobile data to access information. The concept is known as zero rating. Said Wikimedia Foundation officer Gayle Karen Young: "Partnering with telecom companies in the near term, it blurs the net neutrality line in those areas. It fulfills our overall mission, though, which is providing free knowledge."[188]

Farber has written and spoken strongly in favor of continued research and development on core Internet protocols. He joined academic colleagues Michael Katz, Christopher Yoo, and Gerald Faulhaber in an op-ed for The Washington Post critical of network neutrality, stating that while the Internet is in need of remodeling, congressional action aimed at protecting the best parts of the current Internet could interfere with efforts to build a replacement.[189]

According to a letter to FCC commissioners and key congressional leaders sent by 60 major ISP technology suppliers, including IBM, Intel, Qualcomm, and Cisco, Title II regulation of the Internet "means that instead of billions of broadband investment driving other sectors of the economy forward, any reduction in this spending will stifle growth across the entire economy.
This is not idle speculation or fear mongering ... Title II is going to lead to a slowdown, if not a hold, in broadband build out, because if you don't know that you can recover on your investment, you won't make it."[152][190][191][192] According to the Wall Street Journal, in one of Google's few lobbying sessions with FCC officials, the company urged the agency to craft rules that encourage investment in broadband Internet networks—a position that mirrors the argument made by opponents of strong net neutrality rules, such as AT&T and Comcast.[166] Opponents of net neutrality argue that prioritization of bandwidth is necessary for future innovation on the Internet.[154] Telecommunications providers such as telephone and cable companies, and some technology companies that supply networking gear, argue that telecom providers should have the ability to provide preferential treatment in the form of tiered services, for example by giving online companies willing to pay the ability to transfer their data packets faster than other Internet traffic.[193] The added income from such services could be used to pay for building out broadband access to more consumers.[103]

Opponents say that net neutrality would make it more difficult for ISPs and other network operators to recoup their investments in broadband networks.[194] John Thorne, senior vice president and deputy general counsel of Verizon, a broadband and telecommunications company, has argued that providers will have no incentive to make large investments to develop advanced fibre-optic networks if they are prohibited from charging higher preferred-access fees to companies that wish to take advantage of the expanded capabilities of such networks. Thorne and other ISPs have accused Google and Skype of freeloading or free riding for using a network of lines and cables the phone company spent billions of dollars to build.[154][195][196] Marc Andreessen states that "a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you're a large telco right now, you spend on the order of $20 billion a year on capex [capital expenditure]. You need to know how you're going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you're not ever going to get a return on continued network investment – which means you'll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we're getting today."[197]

Proponents of net neutrality regulations say network operators have continued to under-invest in infrastructure.[198] However, according to Copenhagen Economics, U.S. investment in telecom infrastructure is 50 percent higher than in the European Union. As a share of GDP, the United States' broadband investment rate trails only the UK's and South Korea's slightly, but exceeds Japan's, Canada's, Italy's, Germany's, and France's sizably.[199] On broadband speed, Akamai reported that the US trails only South Korea and Japan among its major trading partners, and trails only Japan in the G-7 in both average peak connection speed and the percentage of the population connecting at 10 Mbit/s or higher, but is substantially ahead of most of its other major trading partners.[199]
The White House reported in June 2013 that U.S. connection speeds are "the fastest compared to other countries with either a similar population or land mass."[200] Akamai's report on "The State of the Internet" for the second quarter of 2014 says "a total of 39 states saw 4K readiness rate more than double over the past year." In other words, as ZDNet reports, those states saw a major increase in the availability of the 15 Mbit/s speed needed for 4K video.[201] According to the Progressive Policy Institute and ITU data, the United States has the most affordable entry-level prices for fixed broadband in the OECD.[199][202]

In Indonesia, a very high number of Internet connections are subject to exclusive deals between the ISP and the building owner. Representatives of Google, Inc. claim that changing this dynamic could unlock much more consumer choice and higher speeds.[136] Former FCC Commissioner Ajit Pai and the Federal Election Commission's Lee Goodman also wrote in a Politico piece in February 2015: "Compare Europe, which has long had utility-style regulations, with the United States, which has embraced a light-touch regulatory model. Broadband speeds in the United States, both wired and wireless, are significantly faster than those in Europe. Broadband investment in the United States is several multiples that of Europe. And broadband's reach is much wider in the United States, despite its much lower population density."[203]

VOIP pioneer Jeff Pulver states that the uncertainty of the FCC imposing Title II, which experts said would create regulatory restrictions on using the Internet to transmit a voice call, was the "single greatest impediment to innovation" for a decade.[204] According to Pulver, investors in the companies he helped found, like Vonage, held back investment because they feared the FCC could use Title II to prevent VOIP startups from bypassing telephone networks.[204]

A 2010 paper on net neutrality by Nobel Prize economist Gary Becker and his colleagues stated that "there is significant and growing competition among broadband access providers and that few significant competitive problems have been observed to date, suggesting that there is no compelling competitive rationale for such regulation."[164] Becker and fellow economists Dennis Carlton and Hal Sider found that "Between mid-2002 and mid-2008, the number of high-speed broadband access lines in the United States grew from 16 million to nearly 133 million, and the number of residential broadband lines grew from 14 million to nearly 80 million. Internet traffic roughly tripled between 2007 and 2009. At the same time, prices for broadband Internet access services have fallen sharply."[164] The PPI reports that the profit margins of U.S. broadband providers are generally one-sixth to one-eighth of those of companies that use broadband (such as Apple or Google), contradicting the idea of monopolistic price-gouging by providers.[199]

When FCC chairman Tom Wheeler redefined broadband from 4 Mbit/s to 25 Mbit/s (3.125 MB/s) or greater in January 2015, FCC commissioners Ajit Pai and Mike O'Rielly believed the redefinition was meant to set up the agency's intent to settle the net neutrality fight with new regulations.
The commissioners argued that the stricter speed guidelines painted the broadband industry as less competitive, justifying the FCC's moves toward Title II net neutrality regulations.[205]

A report by the Progressive Policy Institute in June 2014 argues that nearly every American can choose from at least two to four broadband Internet service providers, despite claims that there are only a "small number" of broadband providers.[199] Citing research from the FCC, the Institute wrote that 90 percent of American households have access to at least one wired and one wireless broadband provider at speeds of at least 4 Mbit/s (500 kbyte/s) downstream and 1 Mbit/s (125 kbyte/s) upstream, and that nearly 88 percent of Americans can choose from at least two wired providers of broadband disregarding speed (typically choosing between a cable and a telco offering). Further, three of the four national wireless companies report that they offer 4G LTE to 250–300 million Americans, with the fourth (T-Mobile) at 209 million and counting.[199] Similarly, the FCC reported in June 2008 that 99.8% of ZIP codes in the United States had two or more providers of high-speed Internet lines available, and 94.6% of ZIP codes had four or more providers, as reported by University of Chicago economists Gary Becker, Dennis Carlton, and Hal Sider in a 2010 paper.[164]

FCC commissioner Ajit Pai states that the FCC completely brushes aside the concerns of smaller competitors who will be subject to various taxes, such as state property taxes and gross receipts taxes,[206] and that, as a result, the ruling does nothing to create more competition within the market.[206] According to Pai, the FCC's ruling to impose Title II regulations is opposed by the country's smallest private competitors and by many municipal broadband providers.[207] In his dissent, Pai noted that 142 wireless ISPs (WISPs) said that the FCC's new "regulatory intrusion into our businesses ... would likely force us to raise prices, delay deployment expansion, or both." He also noted that 24 of the country's smallest ISPs, each with fewer than 1,000 residential broadband customers, wrote to the FCC stating that Title II "will badly strain our limited resources" because they "have no in-house attorneys and no budget line items for outside counsel." Further, another 43 municipal broadband providers told the FCC that Title II "will trigger consequences beyond the Commission's control and risk serious harm to our ability to fund and deploy broadband without bringing any concrete benefit for consumers or edge providers that the market is not already proving today without the aid of any additional regulation."[153]

According to a Wired magazine article by TechFreedom's Berin Szoka, Matthew Starr, and Jon Henke, local governments and public utilities impose the most significant barriers to entry for more cable broadband competition: "While popular arguments focus on supposed 'monopolists' such as big cable companies, it's government that's really to blame." The authors state that local governments and their public utilities charge ISPs far more than their actual costs and have the final say on whether an ISP can build a network. Public officials determine what requirements an ISP must meet to get approval for access to publicly owned rights of way (which let them place their wires), thus reducing the number of potential competitors who can profitably deploy Internet services—such as AT&T's U-verse, Google Fiber, and Verizon FiOS.
Kickbacks may include municipal requirements for ISPs such as building out service where it is not demanded, donating equipment, and delivering free broadband to government buildings.[208]

According to a research article in MIS Quarterly, the authors' findings subvert some expectations of how ISPs and content providers (CPs) act regarding net neutrality laws. The paper shows that even when an ISP is under restrictions, it still has the opportunity and the incentive to act as a gatekeeper over CPs by enforcing priority delivery of content.[209]

Those in favor of forms of non-neutral tiered Internet access argue that the Internet is already not a level playing field: large companies achieve a performance advantage over smaller competitors by providing more and better-quality servers and buying high-bandwidth services. If scrapping net neutrality regulations precipitated a price drop for lower levels of access, or for access to only certain protocols, for instance, Internet usage would become more adaptable to the needs of those individuals and corporations who specifically seek differentiated tiers of service. Network expert[210] Richard Bennett has written, "A richly funded Web site, which delivers data faster than its competitors to the front porches of the Internet service providers, wants it delivered the rest of the way on an equal basis. This system, which Google calls broadband neutrality, actually preserves a more fundamental inequality."[211]

FCC commissioner Ajit Pai, who opposed the 2015 Title II reclassification of ISPs, says that the ruling allows new fees and taxes on broadband by subjecting it to telephone-style taxes under the Universal Service Fund. Net neutrality proponent Free Press writes that "the average potential increase in taxes and fees per household would be far less" than the estimate given by net neutrality opponents, and that if there were additional taxes, the figure might be around US$4 billion; under favorable circumstances, "the increase would be exactly zero."[212] Meanwhile, the Progressive Policy Institute claims that Title II could trigger taxes and fees of up to $11 billion a year.[213] Financial website NerdWallet did its own assessment and settled on a possible US$6.25 billion tax impact, estimating that the average American household may see its tax bill increase by US$67 annually.[213] FCC spokesperson Kim Hart said that the ruling "does not raise taxes or fees. Period."[213]

According to PayPal founder and Facebook investor Peter Thiel in 2011, "Net neutrality has not been necessary to date. I don't see any reason why it's suddenly become important, when the Internet has functioned quite well for the past 15 years without it. ... Government attempts to regulate technology have been extraordinarily counterproductive in the past."[163] Max Levchin, the other co-founder of PayPal, echoed similar statements, telling CNBC, "The Internet is not broken, and it got here without government regulation and probably in part because of lack of government regulation."[214]

FCC Commissioner Ajit Pai, one of the two commissioners who opposed the net neutrality proposal, criticized the FCC's ruling on Internet neutrality, stating that the perceived threats from ISPs to deceive consumers, degrade content, or disfavor the content that they dislike are non-existent: "The evidence of these continuing threats? There is none; it's all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago.
Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced Facetime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren't enough to tell a coherent story about net neutrality. The bogeyman never had it so easy."[153] Commenting on Pai's approach, one observer noted that the chairman "wants to switch ISP rules from proactive restrictions to after-the-fact litigation, which means a lot more leeway for ISPs that don't particularly want to be treated as impartial utilities connecting people to the internet" (Atherton, 2017).[21] FCC Commissioner Mike O'Rielly, the other opposing commissioner, also claims that the ruling is a solution to a hypothetical problem: "Even after enduring three weeks of spin, it is hard for me to believe that the Commission is establishing an entire Title II/net neutrality regime to protect against hypothetical harms. There is not a shred of evidence that any aspect of this structure is necessary. The D.C. Circuit called the prior, scaled-down version a 'prophylactic' approach. I call it guilt by imagination."[citation needed] In a Chicago Tribune article, Pai and Joshua Wright of the Federal Trade Commission argue that "the Internet isn't broken, and we don't need the president's plan to 'fix' it. Quite the opposite. The Internet is an unparalleled success story. It is a free, open and thriving platform."[215]

Opponents argue that net neutrality regulations prevent service providers from offering more affordable Internet access to those who cannot afford it.[185] Under net neutrality rules, ISPs would be unable to use zero-rating to provide Internet access for free or at a reduced cost to the poor.[216][185] For example, low-income users who cannot afford bandwidth-heavy Internet services such as video streams could otherwise be exempted from paying through subsidies or advertising.[185] Under the rules, however, ISPs cannot discriminate among traffic, thus forcing low-income users to pay for high-bandwidth usage like other users.[216]

The Wikimedia Foundation, which runs Wikipedia, created Wikipedia Zero to provide Wikipedia free of charge on mobile phones to low-income users, especially those in developing countries. However, the practice violates net neutrality rules, as traffic must be treated equally regardless of the users' ability to pay.[185][217] In 2014, Chile banned the practice of Internet service providers giving users free access to websites like Wikipedia and Facebook, saying the practice violates net neutrality rules.[218] In 2016, India banned the Free Basics application run by Internet.org, which provides users in less developed countries with free access to a variety of websites like Wikipedia, BBC, Dictionary.com, health sites, Facebook, ESPN, and weather reports—ruling that the initiative violated net neutrality.[219]

Net neutrality rules would also prevent traffic from being allocated to the users who need it most, according to David Farber.[189] Because net neutrality regulations prevent discrimination of traffic, networks would have to treat critical traffic equally with non-critical traffic. According to Farber, "When traffic surges beyond the ability of the network to carry it, something is going to be delayed. When choosing what gets delayed, allowing a network to favor traffic from, say, a patient's heart monitor over traffic delivering a music download makes sense.
It also makes sense to allow network operators to restrict harmful traffic, such as viruses, worms, and spam."[189]

Tim Wu, though a proponent of network neutrality, claims that the current Internet is not neutral, as its implementation of best effort generally favors file transfer and other non-time-sensitive traffic over real-time communications.[220] Generally, a network which blocks some nodes or services for the customers of the network would normally be expected to be less useful to the customers than one that did not. Therefore, for a network to remain significantly non-neutral requires either that the customers not be concerned about the particular non-neutralities or that the customers not have any meaningful choice of providers; otherwise they would presumably switch to another provider with fewer restrictions.[citation needed]

While the network neutrality debate continues, network providers often enter into peering arrangements among themselves. These agreements often stipulate how certain information flows should be treated. In addition, network providers often implement various policies, such as blocking port 25 to prevent insecure systems from serving as spam relays, or blocking other ports commonly used by decentralized music search applications implementing peer-to-peer networking models. They also present terms of service that often include rules about the use of certain applications as part of their contracts with users.[citation needed] Most consumer Internet providers implement policies like these. The MIT Mantid Port Blocking Measurement Project is a measurement effort to characterize Internet port blocking and potentially discriminatory practices. However, the effects of peering arrangements among network providers are only local to the peers that enter into the arrangements and cannot affect traffic flow outside their scope.[citation needed]

Jon Peha of Carnegie Mellon University believes it is important to create policies that protect users from harmful traffic discrimination while allowing beneficial discrimination. Peha discusses the technologies that enable traffic discrimination, examples of different types of discrimination, and the potential impacts of regulation.[221] Google chairman Eric Schmidt aligns Google's views on data discrimination with Verizon's: "I want to be clear what we mean by Net neutrality: What we mean is if you have one data type like video, you don't discriminate against one person's video in favor of another. But it's okay to discriminate across different types. So you could prioritize voice over video. And there is general agreement with Verizon and Google on that issue."[165] Echoing similar comments by Schmidt, Google's Chief Internet Evangelist and "father of the Internet" Vint Cerf says that "it's entirely possible that some applications needs far more latency, like games. Other applications need broadband streaming capability in order to deliver real-time video. Others don't really care as long as they can get the bits there, like e-mail or file transfers and things like that. But it should not be the case that the supplier of the access to the network mediates this on a competitive basis, but you may still have different kinds of service depending on what the requirements are for the different applications."[222]

Content caching is the process by which frequently accessed content is temporarily stored in strategic network positions (e.g., on servers close to end-users[223]) to achieve several performance objectives.
For example, caching is commonly used by ISPs to reduce network congestion, and it results in a superior quality of experience (QoE) as perceived by end users. Since the storage available in cache servers is limited, caching involves a process of selecting the content worth storing. Several cache algorithms have been designed to perform this process, which, in general, leads to storing the most popular content (a minimal sketch of such a policy appears below). Cached content is retrieved with a higher QoE (e.g., lower latency), and caching can therefore be considered a form of traffic differentiation.[221] However, caching is not generally viewed as a form of discriminatory traffic differentiation. For example, the technical writer Adam Marcus states that "accessing content from edge servers may be a bit faster for users, but nobody is being discriminated against and most content on the Internet is not latency-sensitive".[223] In line with this statement, caching is not regulated by legal frameworks favorable to net neutrality, such as the Open Internet Order issued by the FCC in 2015. Indeed, the legitimacy of caching has never been put in doubt by opponents of net neutrality. On the contrary, the complexity of caching operations (e.g., extensive information processing) has been regarded by the FCC as one of the technical reasons why ISPs should not be considered common carriers, which legitimates the abrogation of net neutrality rules.[224] Under a net neutrality regime, prioritization of one class of traffic with respect to another is allowed only if several requirements are met (e.g., objectively different QoS requirements).[225] When it comes to caching, however, a selection among contents of the same class has to be performed (e.g., the set of videos worth storing in cache servers). In the spirit of the general deregulation of caching, there is no rule that specifies how this process can be carried out in a non-discriminatory way. Nevertheless, the scientific literature considers caching a potentially discriminatory process and provides possible guidelines to address it.[226] For example, non-discriminatory caching might be performed considering the popularity of content, or with the aim of guaranteeing the same QoE to all users, or, alternatively, to achieve some common welfare objective.[226]

As far as content delivery networks (CDNs) are concerned, the relationship between caching and net neutrality is even more complex. CDNs are employed to allow scalable and highly efficient content delivery rather than to grant access to the Internet. Consequently, unlike ISPs, CDNs are entitled to charge content providers for caching their content. Therefore, although this may be regarded as a form of paid traffic prioritization, CDNs are not subject to net neutrality regulations and are rarely included in the debate. Despite this, some argue that the Internet ecosystem has changed to such an extent that all the players involved in content delivery can distort competition and should therefore also be included in the discussion around net neutrality.[226] Among those, the analyst Dan Rayburn has suggested that "the Open Internet Order enacted by the FCC in 2015 was myopically focussed on ISPs".[227]
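The popularity-driven selection described above can be sketched in a few lines. This is a deliberately simplified, LFU-style illustration; production cache algorithms additionally weigh object sizes, recency decay, and admission policies.

```python
from collections import Counter

class PopularityCache:
    """Keep the k most-requested contents, in the spirit of the
    'store the most popular contents' policy described above.
    Eviction removes the least-frequently requested item."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}          # content_id -> cached object
        self.hits = Counter()    # request counts (popularity estimate)

    def request(self, content_id, fetch):
        self.hits[content_id] += 1
        if content_id in self.store:
            return self.store[content_id]     # cache hit: lower latency
        obj = fetch(content_id)               # cache miss: fetch from origin
        if len(self.store) >= self.capacity:
            # Evict the cached item with the fewest requests so far.
            coldest = min(self.store, key=self.hits.__getitem__)
            del self.store[coldest]
        self.store[content_id] = obj
        return obj

# Example: a 2-slot cache; repeated requests keep popular videos cached.
cache = PopularityCache(capacity=2)
for vid in ["a", "b", "a", "c", "a", "b"]:
    cache.request(vid, fetch=lambda v: f"bytes-of-{v}")
print(sorted(cache.store))   # the two most-requested ids remain: ['a', 'b']
```

The potentially discriminatory step is exactly the `min(...)` eviction choice: any rule other than raw popularity (for instance, preferring a partner's content) would privilege some providers over others, which is why the literature treats cache selection as a net neutrality question.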
Internet routers forward packets according to the various peering and transport agreements that exist between network operators. Many networks using Internet protocols now employ quality of service (QoS), and network service providers frequently enter into Service Level Agreements with each other embracing some sort of QoS. There is no single, uniform method of interconnecting networks using IP, and not all networks that use IP are part of the Internet. IPTV networks are isolated from the Internet and are therefore not covered by network neutrality agreements.

The IP datagram includes a 3-bit-wide Precedence field and a larger DiffServ Code Point (DSCP) that are used to request a level of service, consistent with the notion that protocols in a layered architecture offer services through Service Access Points. This field is sometimes ignored, especially if it requests a level of service outside the originating network's contract with the receiving network. It is commonly used in private networks, especially those including Wi-Fi networks, where priority is enforced. While there are several ways of communicating service levels across Internet connections, such as SIP, RSVP, IEEE 802.11e, and MPLS, the most common scheme combines SIP and DSCP. Router manufacturers now sell routers that have logic enabling them to route traffic for various classes of service at wire speed.[citation needed] (A minimal sketch of how an application can request a DSCP class appears below.)

Quality of service is sometimes measured with tools that test a user's connection quality, such as the Network Diagnostic Tool (NDT) and services like speedtest.net. These tools are used by National Regulatory Authorities (NRAs) as a way of detecting net neutrality violations. However, there are very few examples of such measurements being used in any significant way by NRAs, or in network policy for that matter. Often, these tools see little use not because they fail at recording the results they are meant to record, but because the measurements are inflexible and difficult to exploit for any significant purpose. According to Ioannis Koukoutsidis, the problems with the current tools used to measure QoS stem from the lack of a standard detection methodology, the need to detect the various methods by which an ISP might violate net neutrality, and the inability to compute an average measurement for a specific population of users.[228]

With the emergence of multimedia, VoIP, IPTV, and other applications that benefit from low latency, various attempts have been made to address the inability of some private networks to limit latency, including the proposition of offering tiered service levels that would shape Internet transmissions at the network layer based on application type. These efforts are ongoing and are starting to yield results as wholesale Internet transport providers begin to amend service agreements to include service levels.[229]

Advocates of net neutrality have proposed several methods of implementing a net-neutral Internet that includes a notion of quality of service. There are also some discrepancies in how wireless networks affect the implementation of net neutrality policy, some of which are noted in the studies of Christopher Yoo. In one research article, he claimed that "...bad handoffs, local congestion, and the physics of wave propagation make wireless broadband networks significantly less reliable than fixed broadband networks."[232]
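As an illustration of the DSCP marking described above, an application can request a forwarding class by setting the old ToS byte through a socket option. This sketch assumes POSIX/Linux semantics (socket.IP_TOS is not available on every platform), and whether the marking is honored depends entirely on the networks the datagram traverses, as noted above.

```python
import socket

# "Expedited Forwarding" (EF, DSCP 46, RFC 3246) is the code point
# conventionally requested for voice traffic. The DSCP occupies the
# upper six bits of the former ToS byte, hence the shift by 2.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Datagrams sent on this socket now carry the EF code point. Routers
# that honor DiffServ may queue them ahead of best-effort traffic;
# others simply ignore the field. 192.0.2.1 is a reserved
# documentation address used here as a placeholder destination.
sock.sendto(b"probe", ("192.0.2.1", 5004))
```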
If ISPs can provide varying levels of service to websites at various prices, this may be a way to manage the costs of unused capacity by selling surplus bandwidth (or "leverage price discrimination to recoup costs of 'consumer surplus'"). However, purchasers of connectivity on the basis of Committed Information Rate or guaranteed bandwidth capacity must expect to receive the capacity they purchase in order to meet their communications requirements. Various studies have sought to provide network providers with the necessary formulas for adequately pricing such a tiered service for their customer base. But while network neutrality is primarily focused on protocol-based provisioning, most of the pricing models are based on bandwidth restrictions.[233]

Many economists have analyzed Net Neutrality to compare various hypothetical pricing models. For instance, economics professors Michael L. Katz and Benjamin E. Hermalin of the University of California, Berkeley co-published a paper titled "The Economics of Product-Line Restrictions with an Application to the Network Neutrality Debate" in 2007. In this paper, they compared the single-service economic equilibrium to the multi-service economic equilibria under Net Neutrality.[234]

On 12 July 2017, an event called the Day of Action was held to advocate net neutrality in the United States in response to Ajit Pai's plans to remove government policies that upheld net neutrality. Several websites participated in this event, including Amazon, Netflix, Google, and several other equally well-known websites. The gathering was called "the largest online protest in history". Websites chose many different ways to convey their message. The founder of the web, Tim Berners-Lee, published a video defending the FCC's rules. Reddit made a pop-up message that loads slowly to illustrate the effect of removing net neutrality. Other websites put up less obvious notifications, such as Amazon, which put up a hard-to-notice link, or Google, which put up a policy blog post as opposed to a more obvious message.[235]

A poll conducted by Mozilla showed strong support for net neutrality across US political parties. Out of the approximately 1,000 responses received by the poll, 76% of Americans, 81% of Democrats, and 73% of Republicans support net neutrality.[236] The poll also showed that 78% of Americans do not think that Trump's government can be trusted to protect access to the Internet. Net neutrality supporters had also made several comments on the FCC website opposing plans to remove net neutrality, especially after a segment by John Oliver regarding this topic aired on his show Last Week Tonight.[237] He urged his viewers to comment on the FCC's website, and the flood of comments that were received crashed the FCC's website, with the resulting media coverage of the incident inadvertently helping it to reach greater audiences.[238] However, in response, Ajit Pai selected one particular comment that specifically supported removal of net neutrality policies. At the end of August, the FCC released more than 13,000 pages of net neutrality complaints filed by consumers, one day before the deadline for the public to comment on Ajit Pai's proposal to remove net neutrality. It has been implied that the FCC ignored evidence against its proposal in order to remove the protection rules faster. It has also been noted that nowhere was it mentioned how the FCC made any attempt to resolve the complaints.
Regardless, Ajit Pai's proposal drew more than 22 million comments, though a large number were spam. There were 1.5 million personalized comments, 98.5% of them protesting Ajit Pai's plan.[239]

As of January 2018, fifty senators had endorsed a legislative measure to override the Federal Communications Commission's decision to deregulate the broadband industry. The Congressional Review Act paperwork was filed on 9 May 2018, which allowed the Senate to vote on the permanence of the new net neutrality rules proposed by the Federal Communications Commission.[240] The vote passed and a resolution was approved to try to remove the FCC's new rules on net neutrality; however, officials doubted there was enough time to completely repeal the rules before the Open Internet Order officially expired on 11 June 2018.[241] A September 2018 report from Northeastern University and the University of Massachusetts Amherst found that U.S. telecom companies were indeed slowing Internet traffic to and from particular popular sites and apps.[242] In March 2019, congressional supporters of net neutrality introduced the Save the Internet Act in both the House and Senate, which, if passed, would reverse the FCC's 2017 repeal of net neutrality protections.[243]

A digital divide refers to the difference between those who have access to the internet and those who do not, particularly between urban and rural areas in the use of digital technologies.[244] In the U.S., government city tech leaders warned in 2017 that the FCC's repeal of net neutrality would widen the digital divide, negatively affect small businesses, and reduce job opportunities for middle-class and low-income citizens. The FCC reports on its website that access to high-speed Internet reaches only 65 percent of Americans in rural areas, compared with 97 percent in urban areas.[245][246] Public Knowledge has stated that this will have a larger impact on those living in rural areas without internet access.[247] In developing countries such as India, which lack reliable electricity or internet connections, only 9 percent of rural residents have internet access, compared with 64 percent of urban residents.[248]
https://en.wikipedia.org/wiki/Network_neutrality
The Sempron is a name used for AMD's low-end CPUs, replacing the Duron processor. The name was introduced in 2004, and processors with this name continued to be available for the FM2/FM2+ socket in 2015.
https://en.wikipedia.org/wiki/List_of_AMD_Sempron_processors
Minimax (sometimes Minmax, MM[1] or saddle point[2]) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.

The maximin value is the highest value that the player can be sure to get without knowing the actions of the other players; equivalently, it is the lowest value the other players can force the player to receive when they know the player's action. Its formal definition is:[3]

$\underline{v_i} = \max_{a_i} \min_{a_{-i}} v_i(a_i, a_{-i})$

where $i$ is the index of the player of interest, $-i$ denotes all other players, $a_i$ is the action taken by player $i$, $a_{-i}$ denotes the actions taken by the other players, and $v_i$ is the value function of player $i$.

Calculating the maximin value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions – the one that gives player $i$ the smallest value. Then, we determine which action player $i$ can take in order to make sure that this smallest value is the highest possible. (A sketch of this calculation in code appears at the end of this passage.)

For example, consider the following game for two players, where the first player ("row player") may choose any of three moves, labelled T, M, or B, and the second player ("column player") may choose either of two moves, L or R. The result of the combination of both moves is expressed in a payoff table, where the first number in each cell is the pay-out of the row player and the second number is the pay-out of the column player. For the sake of example, we consider only pure strategies. Checking each player in turn: if both players play their respective maximin strategies $(T, L)$, the payoff vector is $(3, 1)$.

The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the player's actions; equivalently, it is the largest value the player can be sure to get when they know the actions of the other players. Its formal definition is:[3]

$\overline{v_i} = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i})$

The definition is very similar to that of the maximin value – only the order of the maximum and minimum operators is inverted. In the above example, the minimax values can be computed in the same way. For every player $i$, the maximin is at most the minimax:

$\underline{v_i} \le \overline{v_i}$

Intuitively, in maximin the maximization comes after the minimization, so player $i$ tries to maximize their value before knowing what the others will do; in minimax the maximization comes before the minimization, so player $i$ is in a much better position – they maximize their value knowing what the others did. Another way to understand the notation is by reading from right to left: when we write $\overline{v_i} = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i})$, the initial set of outcomes $v_i(a_i, a_{-i})$ depends on both $a_i$ and $a_{-i}$. We first marginalize away $a_i$ from $v_i(a_i, a_{-i})$, by maximizing over $a_i$ (for every possible value of $a_{-i}$) to yield a set of marginal outcomes $v'_i(a_{-i})$, which depends only on $a_{-i}$. We then minimize over $a_{-i}$ over these outcomes. (Conversely for maximin.)
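The worst-case calculation described above is mechanical enough to express directly in code. The following minimal Java sketch computes a row player's maximin and minimax values from a payoff matrix; the matrix values are illustrative only (the article's payoff table is not reproduced in this text).

    public class MaximinMinimax {
        // payoff[i][j] is the value to the row player when the row player
        // takes action i and the column player takes action j.
        static int maximin(int[][] payoff) {
            int best = Integer.MIN_VALUE;
            for (int[] row : payoff) {                        // for each own action...
                int worst = Integer.MAX_VALUE;
                for (int v : row) worst = Math.min(worst, v); // ...assume the worst reply
                best = Math.max(best, worst);                 // ...and take the best floor
            }
            return best;
        }

        static int minimax(int[][] payoff) {
            int worst = Integer.MAX_VALUE;
            for (int j = 0; j < payoff[0].length; j++) {      // for each opponent action...
                int best = Integer.MIN_VALUE;
                for (int[] row : payoff) best = Math.max(best, row[j]); // ...best own reply
                worst = Math.min(worst, best);                // ...opponent picks the harshest
            }
            return worst;
        }

        public static void main(String[] args) {
            int[][] payoff = { {3, 2}, {5, -10}, {-4, 1} };   // illustrative values only
            System.out.println("maximin = " + maximin(payoff)); // 2
            System.out.println("minimax = " + minimax(payoff)); // 2
        }
    }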
Although it is always the case that $\underline{v_{row}} \le \overline{v_{row}}$ and $\underline{v_{col}} \le \overline{v_{col}}$, the payoff vector resulting from both players playing their minimax strategies, $(2, -20)$ in the case of $(T, R)$ or $(-10, 1)$ in the case of $(M, R)$, cannot similarly be ranked against the payoff vector $(3, 1)$ resulting from both players playing their maximin strategy.

In two-player zero-sum games, the minimax solution is the same as the Nash equilibrium. In the context of zero-sum games, the minimax theorem is equivalent to:[4][failed verification]

For every two-person zero-sum game with finitely many strategies, there exists a value V and a mixed strategy for each player, such that (a) given Player 2's strategy, the best payoff possible for Player 1 is V, and (b) given Player 1's strategy, the best payoff possible for Player 2 is −V.

Equivalently, Player 1's strategy guarantees them a payoff of V regardless of Player 2's strategy, and similarly Player 2 can guarantee themselves a payoff of −V. The name minimax arises because each player minimizes the maximum payoff possible for the other – since the game is zero-sum, they also minimize their own maximum loss (i.e., maximize their minimum payoff). See also example of a game without a value.

The following example of a zero-sum game, where A and B make simultaneous moves, illustrates maximin solutions. Suppose each player has three choices and consider the payoff matrix for A displayed on the table ("Payoff matrix for player A"). Assume the payoff matrix for B is the same matrix with the signs reversed (i.e., if the choices are A1 and B1 then B pays 3 to A). Then, the maximin choice for A is A2 since the worst possible result is then having to pay 1, while the simple maximin choice for B is B2 since the worst possible result is then no payment. However, this solution is not stable, since if B believes A will choose A2 then B will choose B1 to gain 1; then if A believes B will choose B1 then A will choose A1 to gain 3; and then B will choose B2; and eventually both players will realize the difficulty of making a choice. So a more stable strategy is needed.

Some choices are dominated by others and can be eliminated: A will not choose A3 since either A1 or A2 will produce a better result, no matter what B chooses; B will not choose B3 since some mixtures of B1 and B2 will produce a better result, no matter what A chooses.

Player A can avoid having to make an expected payment of more than 1/3 by choosing A1 with probability 1/6 and A2 with probability 5/6: the expected payoff for A would be 3 × 1/6 − 1 × 5/6 = −1/3 in case B chose B1, and −2 × 1/6 + 0 × 5/6 = −1/3 in case B chose B2. Similarly, B can ensure an expected gain of at least 1/3, no matter what A chooses, by using a randomized strategy of choosing B1 with probability 1/3 and B2 with probability 2/3. These mixed minimax strategies cannot be improved and are now stable. (A short verification of this arithmetic appears at the end of this passage.)

Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain. "Maximin" is a term commonly used for non-zero-sum games to describe the strategy which maximizes one's own minimum payoff. In non-zero-sum games, this is not generally the same as minimizing the opponent's maximum gain, nor the same as the Nash equilibrium strategy.

The minimax values are very important in the theory of repeated games.
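The mixed-strategy arithmetic above can be checked directly. The sketch below uses only the four payoffs quoted in the passage (the reduced game on A1, A2 versus B1, B2 after the dominated A3 and B3 are eliminated):

    public class MixedStrategyCheck {
        public static void main(String[] args) {
            // Payoffs to A in the reduced game: rows A1, A2; columns B1, B2.
            double[][] payoffA = { {3, -2}, {-1, 0} };
            double[] pA = { 1.0 / 6, 5.0 / 6 };  // A plays A1 w.p. 1/6, A2 w.p. 5/6
            double[] pB = { 1.0 / 3, 2.0 / 3 };  // B plays B1 w.p. 1/3, B2 w.p. 2/3

            // A's expected payoff against each pure reply of B: -1/3 in both cases.
            for (int j = 0; j < 2; j++) {
                double e = pA[0] * payoffA[0][j] + pA[1] * payoffA[1][j];
                System.out.printf("A's expectation vs B%d = %.4f%n", j + 1, e);
            }
            // B's expected gain (the negative of A's payoff) against each pure
            // reply of A: 1/3 in both cases.
            for (int i = 0; i < 2; i++) {
                double e = -(pB[0] * payoffA[i][0] + pB[1] * payoffA[i][1]);
                System.out.printf("B's expectation vs A%d = %.4f%n", i + 1, e);
            }
        }
    }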
One of the central theorems in the theory of repeated games, the folk theorem, relies on the minimax values.

In combinatorial game theory, there is a minimax algorithm for game solutions. A simple version of the minimax algorithm, stated below, deals with games such as tic-tac-toe, where each player can win, lose, or draw. If player A can win in one move, their best move is that winning move. If player B knows that one move will lead to the situation where player A can win in one move, while another move will lead to the situation where player A can, at best, draw, then player B's best move is the one leading to a draw. Late in the game, it's easy to see what the "best" move is. The minimax algorithm helps find the best move, by working backwards from the end of the game. At each step it assumes that player A is trying to maximize the chances of A winning, while on the next turn player B is trying to minimize the chances of A winning (i.e., to maximize B's own chances of winning).

A minimax algorithm[5] is a recursive algorithm for choosing the next move in an n-player game, usually a two-player game. A value is associated with each position or state of the game. This value is computed by means of a position evaluation function and it indicates how good it would be for a player to reach that position. The player then makes the move that maximizes the minimum value of the position resulting from the opponent's possible following moves. If it is A's turn to move, A gives a value to each of their legal moves. A possible allocation method consists in assigning a certain win for A as +1 and for B as −1. This leads to combinatorial game theory as developed by John H. Conway. An alternative is using a rule that if the result of a move is an immediate win for A, it is assigned positive infinity and, if it is an immediate win for B, negative infinity. The value to A of any other move is the maximum of the values resulting from each of B's possible replies. For this reason, A is called the maximizing player and B is called the minimizing player, hence the name minimax algorithm. The above algorithm will assign a value of positive or negative infinity to any position, since the value of every position will be the value of some final winning or losing position. This is generally only possible at the very end of complicated games such as chess or go, since it is not computationally feasible to look ahead as far as the completion of the game, except towards the end; instead, positions are given finite values as estimates of the degree of belief that they will lead to a win for one player or another.

This can be extended if we can supply a heuristic evaluation function which gives values to non-final game states without considering all possible following complete sequences. We can then limit the minimax algorithm to look only at a certain number of moves ahead. This number is called the "look-ahead", measured in "plies". For example, the chess computer Deep Blue (the first one to beat a reigning world champion, Garry Kasparov, at that time) looked ahead at least 12 plies, then applied a heuristic evaluation function.[6]

The algorithm can be thought of as exploring the nodes of a game tree. The effective branching factor of the tree is the average number of children of each node (i.e., the average number of legal moves in a position). The number of nodes to be explored usually increases exponentially with the number of plies (it is less than exponential if evaluating forced moves or repeated positions).
The number of nodes to be explored for the analysis of a game is therefore approximately the branching factor raised to the power of the number of plies. It is therefore impractical to completely analyze games such as chess using the minimax algorithm. The performance of the naïve minimax algorithm may be improved dramatically, without affecting the result, by the use of alpha–beta pruning. Other heuristic pruning methods can also be used, but not all of them are guaranteed to give the same result as the unpruned search. A naïve minimax algorithm may be trivially modified to additionally return an entire Principal Variation along with a minimax score.

The pseudocode for the depth-limited minimax algorithm is sketched at the end of this passage. The minimax function returns a heuristic value for leaf nodes (terminal nodes and nodes at the maximum search depth). Non-leaf nodes inherit their value from a descendant leaf node. The heuristic value is a score measuring the favorability of the node for the maximizing player. Hence nodes resulting in a favorable outcome, such as a win, for the maximizing player have higher scores than nodes more favorable for the minimizing player. The heuristic values for terminal (game-ending) leaf nodes are scores corresponding to a win, loss, or draw for the maximizing player. For non-terminal leaf nodes at the maximum search depth, an evaluation function estimates a heuristic value for the node. The quality of this estimate and the search depth determine the quality and accuracy of the final minimax result.

Minimax treats the two players (the maximizing player and the minimizing player) separately in its code. Based on the observation that $\max(a, b) = -\min(-a, -b)$, minimax may often be simplified into the negamax algorithm.

Suppose the game being played only has a maximum of two possible moves per player each turn. The algorithm generates the tree on the right, where the circles represent the moves of the player running the algorithm (maximizing player), and squares represent the moves of the opponent (minimizing player). Because of the limitation of computation resources, as explained above, the tree is limited to a look-ahead of 4 moves. The algorithm evaluates each leaf node using a heuristic evaluation function, obtaining the values shown. The moves where the maximizing player wins are assigned positive infinity, while the moves that lead to a win of the minimizing player are assigned negative infinity. At level 3, the algorithm will choose, for each node, the smallest of the child node values, and assign it to that same node (e.g., the node on the left will choose the minimum between "10" and "+∞", therefore assigning the value "10" to itself). The next step, in level 2, consists of choosing for each node the largest of the child node values. Once again, the values are assigned to each parent node. The algorithm continues evaluating the maximum and minimum values of the child nodes alternately until it reaches the root node, where it chooses the move with the largest value (represented in the figure with a blue arrow). This is the move that the player should make in order to minimize the maximum possible loss.

Minimax theory has been extended to decisions where there is no other player, but where the consequences of decisions depend on unknown facts. For example, deciding to prospect for minerals entails a cost, which will be wasted if the minerals are not present, but will bring major rewards if they are.
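A minimal Java sketch of the depth-limited minimax routine referenced above follows; the GameState interface and its methods are hypothetical stand-ins for a concrete game's move generator and heuristic evaluation function.

    import java.util.List;

    // Hypothetical interface: a concrete game supplies these operations.
    interface GameState {
        boolean isTerminal();
        double heuristicValue();       // from the maximizing player's viewpoint
        List<GameState> successors();  // states reachable in one move
    }

    public class Minimax {
        // Returns the minimax value of 'node' searched to 'depth' plies.
        static double minimax(GameState node, int depth, boolean maximizing) {
            if (depth == 0 || node.isTerminal()) {
                return node.heuristicValue();  // leaf: heuristic or terminal score
            }
            if (maximizing) {
                double value = Double.NEGATIVE_INFINITY;
                for (GameState child : node.successors()) {
                    value = Math.max(value, minimax(child, depth - 1, false));
                }
                return value;
            } else {
                double value = Double.POSITIVE_INFINITY;
                for (GameState child : node.successors()) {
                    value = Math.min(value, minimax(child, depth - 1, true));
                }
                return value;
            }
        }
        // Initial call for the maximizing player: minimax(origin, depth, true)
    }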
One approach is to treat the prospecting decision above as a game against nature (see move by nature) and, using a similar mindset as Murphy's law or resistentialism, take an approach which minimizes the maximum expected loss, using the same techniques as in two-person zero-sum games. In addition, expectiminimax trees have been developed for two-player games in which chance (for example, dice) is a factor.

In classical statistical decision theory, we have an estimator $\delta$ that is used to estimate a parameter $\theta \in \Theta$. We also assume a risk function $R(\theta, \delta)$, usually specified as the integral of a loss function. In this framework, $\tilde{\delta}$ is called minimax if it satisfies

$\sup_{\theta} R(\theta, \tilde{\delta}) = \inf_{\delta} \sup_{\theta} R(\theta, \delta).$

An alternative criterion in the decision theoretic framework is the Bayes estimator in the presence of a prior distribution $\Pi$. An estimator is Bayes if it minimizes the average risk

$\int_{\Theta} R(\theta, \delta) \, d\Pi(\theta).$

A key feature of minimax decision making is being non-probabilistic: in contrast to decisions using expected value or expected utility, it makes no assumptions about the probabilities of various outcomes, just scenario analysis of what the possible outcomes are. It is thus robust to changes in the assumptions, in contrast to these other decision techniques. Various extensions of this non-probabilistic approach exist, notably minimax regret and info-gap decision theory. Further, minimax only requires ordinal measurement (that outcomes be compared and ranked), not interval measurements (that outcomes include "how much better or worse"), and returns ordinal data, using only the modeled outcomes: the conclusion of a minimax analysis is "this strategy is minimax, as the worst case is (outcome), which is less bad than any other strategy". Compare to expected value analysis, whose conclusion is of the form "this strategy yields $E(X) = n$". Minimax thus can be used on ordinal data, and can be more transparent.

The concept of "lesser evil" voting (LEV) can be seen as a form of the minimax strategy where voters, when faced with two or more candidates, choose the one they perceive as the least harmful or the "lesser evil". To do so, "voting should not be viewed as a form of personal self-expression or moral judgement directed in retaliation towards major party candidates who fail to reflect our values, or of a corrupt system designed to limit choices to those acceptable to corporate elites," but rather as an opportunity to reduce harm or loss.[7]

In philosophy, the term "maximin" is often used in the context of John Rawls's A Theory of Justice, where he refers to it in the context of The Difference Principle.[8] Rawls defined this principle as the rule which states that social and economic inequalities should be arranged so that "they are to be of the greatest benefit to the least-advantaged members of society".[9][10]
https://en.wikipedia.org/wiki/Minimax
DOACROSS parallelism is a parallelization technique used to perform loop-level parallelism by utilizing synchronisation primitives between statements in a loop. This technique is used when a loop cannot be fully parallelized by DOALL parallelism due to data dependencies between loop iterations, typically loop-carried dependencies. The sections of the loop which contain a loop-carried dependence are synchronized, while each section is treated as a parallel task on its own. Therefore, DOACROSS parallelism can be used to complement DOALL parallelism to reduce loop execution times.

DOACROSS parallelism is particularly useful when one statement depends on values generated by another statement. In such a loop, DOALL parallelism cannot be implemented in a straightforward manner. If the first statement is made to block the execution of the second statement until the required value has been produced, then the two statements can otherwise execute independently of each other; that is, each of the aforementioned statements can be parallelized for simultaneous execution[1] using DOALL parallelism. A sketch illustrating the operation of DOACROSS parallelism in such a situation appears at the end of this section.[2]

In this example, each iteration of the loop requires a value written into a by an earlier iteration. However, the entire statement is not dependent on the previous iteration, but only a portion of it. The statement is split into two blocks to illustrate this. The first block has no loop-carried dependence, and its result is stored in the variable temp. The post() command is used to signal that the required result has been produced for use by other threads. The wait(i-2) command waits for the value a[i-2] before unblocking.

The execution time of DOACROSS parallelism largely depends on what fraction of the program suffers from loop-carried dependence. Larger gains are observed when a sizable portion of the loop is affected by loop-carried dependence.[2]

DOACROSS parallelism suffers from significant space and granularity overheads due to the synchronization primitives used. Modern compilers often overlook this method because of this major disadvantage.[1] The overheads may be reduced by lowering the frequency of synchronization across the loop, applying the primitives to groups of statements at a time.[2]
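The sketch below reconstructs the split-loop pattern described above under stated assumptions: a hypothetical loop computing a[i] = b[i]*c[i] + a[i-2], with the post/wait primitives implemented as CountDownLatch objects and one thread per iteration for simplicity (a real implementation would use a thread pool and coarser-grained synchronization).

    import java.util.Arrays;
    import java.util.concurrent.CountDownLatch;

    public class DoAcross {
        public static void main(String[] args) throws InterruptedException {
            final int n = 16;
            double[] a = new double[n], b = new double[n], c = new double[n];
            Arrays.fill(b, 2.0);
            Arrays.fill(c, 3.0);

            // done[i] is "posted" once a[i] has been written.
            CountDownLatch[] done = new CountDownLatch[n];
            for (int i = 0; i < n; i++) done[i] = new CountDownLatch(1);

            // Iterations 0 and 1 have no predecessor; compute and post them.
            a[0] = b[0] * c[0];
            a[1] = b[1] * c[1];
            done[0].countDown();
            done[1].countDown();

            Thread[] workers = new Thread[n];
            for (int i = 2; i < n; i++) {
                final int idx = i;
                workers[idx] = new Thread(() -> {
                    double temp = b[idx] * c[idx];  // block with no loop-carried dependence
                    try {
                        done[idx - 2].await();      // wait(i-2): a[i-2] must be ready
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    a[idx] = temp + a[idx - 2];     // dependent block
                    done[idx].countDown();          // post(): a[i] is now available
                });
                workers[idx].start();
            }
            for (int i = 2; i < n; i++) workers[i].join();
            System.out.println(a[n - 1]);
        }
    }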
https://en.wikipedia.org/wiki/DOACROSS_parallelism
The Java programming language's Java Collections Framework version 1.5 and later defines and implements the original regular single-threaded Maps, and also new thread-safe Maps implementing the java.util.concurrent.ConcurrentMap interface among other concurrent interfaces.[1] In Java 1.6, the java.util.NavigableMap interface was added, extending java.util.SortedMap, and the java.util.concurrent.ConcurrentNavigableMap interface was added as a subinterface combination. The version 1.8 Map interface hierarchy is outlined below. Sets can be considered sub-cases of corresponding Maps in which the values are always a particular constant which can be ignored, although the Set API uses corresponding but differently named methods. At the bottom of the hierarchy is java.util.concurrent.ConcurrentNavigableMap, which multiply inherits from both ConcurrentMap and NavigableMap.

For unordered access as defined in the java.util.Map interface, java.util.concurrent.ConcurrentHashMap implements java.util.concurrent.ConcurrentMap.[2] The mechanism is hash access to a hash table with lists of entries, each entry holding a key, a value, the hash, and a next reference. Prior to Java 8, there were multiple locks, each serializing access to a 'segment' of the table. In Java 8, native synchronization is used on the heads of the lists themselves, and the lists can mutate into small trees when they threaten to grow too large due to unfortunate hash collisions. Also, Java 8 uses the compare-and-set primitive optimistically to place the initial heads in the table, which is very fast. Performance is O(1) on average, but there are occasional delays when rehashing is necessary. After the hash table expands, it never shrinks, possibly leading to a memory 'leak' after entries are removed.

For ordered access as defined by the java.util.NavigableMap interface, java.util.concurrent.ConcurrentSkipListMap was added in Java 1.6,[1] and implements java.util.concurrent.ConcurrentMap and also java.util.concurrent.ConcurrentNavigableMap. It is a skip list which uses lock-free techniques to maintain its ordered structure. Performance is O(log n). (A minimal usage sketch of both implementations appears at the end of this passage.)

One problem solved by the Java 1.5 java.util.concurrent package is that of concurrent modification. The collection classes it provides may be reliably used by multiple Threads. All Thread-shared non-concurrent Maps and other collections need to use some form of explicit locking such as native synchronization in order to prevent concurrent modification, or else there must be a way to prove from the program logic that concurrent modification cannot occur. Concurrent modification of a Map by multiple Threads will sometimes destroy the internal consistency of the data structures inside the Map, leading to bugs which manifest rarely or unpredictably, and which are difficult to detect and fix. Also, concurrent modification by one Thread with read access by another Thread or Threads will sometimes give unpredictable results to the reader, although the Map's internal consistency will not be destroyed. Using external program logic to prevent concurrent modification increases code complexity and creates an unpredictable risk of errors in existing and future code, although it enables non-concurrent Collections to be used. However, neither locks nor program logic can coordinate external threads which may come in contact with the Collection.
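A minimal usage sketch of the two implementations just described; keys and values are arbitrary.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ConcurrentNavigableMap;
    import java.util.concurrent.ConcurrentSkipListMap;

    public class ConcurrentMaps {
        public static void main(String[] args) {
            // Unordered, hash-based; safe for concurrent use without external locks.
            ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();
            hits.putIfAbsent("page", 0);
            hits.replace("page", 0, 1);          // atomic compare-and-replace

            // Ordered, skip-list-based; adds navigation methods, O(log n) operations.
            ConcurrentNavigableMap<Integer, String> byScore = new ConcurrentSkipListMap<>();
            byScore.put(10, "low");
            byScore.put(90, "high");
            System.out.println(byScore.firstKey());   // 10
            System.out.println(byScore.headMap(50));  // {10=low}
        }
    }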
In order to help with the concurrent modification problem, the non-concurrent Map implementations and other Collections use internal modification counters which are consulted before and after a read to watch for changes: the writers increment the modification counters. A concurrent modification is supposed to be detected by this mechanism, throwing a java.util.ConcurrentModificationException,[3] but it is not guaranteed to occur in all cases and should not be relied on. The counter maintenance also reduces performance. For performance reasons, the counters are not volatile, so it is not guaranteed that changes to them will be propagated between Threads.

One solution to the concurrent modification problem is using a particular wrapper class provided by a factory in java.util.Collections: public static <K,V> Map<K,V> synchronizedMap(Map<K,V> m), which wraps an existing non-thread-safe Map with methods that synchronize on an internal mutex.[4] There are also wrappers for the other kinds of Collections. This is a partial solution, because it is still possible that the underlying Map can be inadvertently accessed by Threads which keep or obtain unwrapped references. Also, all Collections implement java.lang.Iterable, but the synchronized-wrapped Maps and other wrapped Collections do not provide synchronized iterators, so the synchronization is left to the client code, which is slow and error-prone and cannot be expected to be duplicated by other consumers of the synchronized Map. The entire duration of the iteration must be protected as well. Furthermore, a Map which is wrapped twice in different places will have different internal mutex Objects on which the synchronizations operate, allowing overlap. The delegation reduces performance, but modern just-in-time compilers often inline heavily, limiting the performance reduction. Inside the wrapper, the mutex is just a final Object and m is the final wrapped Map; the recommended synchronization of the iteration synchronizes on the wrapper rather than on the internal mutex, allowing overlap.[5] Both are sketched at the end of this passage.

Any Map can be used safely in a multi-threaded system by ensuring that all accesses to it are handled by the Java synchronization mechanism. The code using a java.util.concurrent.ReentrantReadWriteLock is similar to that for native synchronization. However, for safety, the locks should be used in a try/finally block so that an early exit such as a thrown java.lang.Exception or a break/continue will be sure to pass through the unlock. This technique is better than using synchronization[6] because reads can overlap each other, although there is a new issue in deciding how to prioritize the writes with respect to the reads. For simplicity, a java.util.concurrent.ReentrantLock can be used instead, which makes no read/write distinction. More operations on the locks are possible than with synchronization, such as tryLock() and tryLock(long timeout, TimeUnit unit).

Mutual exclusion has a lock convoy problem, in which threads may pile up on a lock, causing the JVM to need to maintain expensive queues of waiters and to 'park' the waiting Threads. It is expensive to park and unpark a Thread, and a slow context switch may occur. Context switches require from microseconds to milliseconds, while the Map's own basic operations normally take nanoseconds. Performance can drop to a small fraction of a single Thread's throughput as contention increases.
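The wrapper pattern and the recommended iteration idiom can be sketched as follows. The SyncMap class is a simplified outline of what Collections.synchronizedMap returns, not the actual JDK source.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class SynchronizedWrapping {
        // Simplified outline of the wrapper: every method delegates under a mutex.
        static class SyncMap<K, V> {
            private final Object mutex = new Object(); // the internal lock object
            private final Map<K, V> m;                 // the wrapped, non-thread-safe Map
            SyncMap(Map<K, V> m) { this.m = m; }
            V get(Object key)     { synchronized (mutex) { return m.get(key); } }
            V put(K key, V value) { synchronized (mutex) { return m.put(key, value); } }
        }

        public static void main(String[] args) {
            Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());
            map.put("a", 1);
            // The client must hold a lock for the entire traversal; note that this
            // locks the wrapper object itself, not the internal mutex.
            synchronized (map) {
                for (Map.Entry<String, Integer> e : map.entrySet()) {
                    System.out.println(e.getKey() + "=" + e.getValue());
                }
            }
        }
    }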
When there is little or no contention for the lock, there is little performance impact apart from the lock's contention test itself. Modern JVMs will inline most of the lock code, reducing it to only a few instructions, keeping the no-contention case very fast. Reentrant techniques like native synchronization or java.util.concurrent.ReentrantReadWriteLock, however, have extra performance-reducing baggage in the maintenance of the reentrancy depth, affecting the no-contention case as well. The convoy problem seems to be easing with modern JVMs, but it can be hidden by slow context switching: in this case, latency will increase, but throughput will continue to be acceptable. With hundreds of Threads, a context switch time of 10 ms produces a latency in seconds.

Mutual exclusion solutions also fail to take advantage of all of the computing power of a multiple-core system, because only one Thread is allowed inside the Map code at a time. The implementations of the particular concurrent Maps provided by the Java Collections Framework and others sometimes take advantage of multiple cores using lock-free programming techniques. Lock-free techniques use operations like the compareAndSet() intrinsic method, available on many of the Java classes such as AtomicReference, to do conditional updates of some Map-internal structures atomically. The compareAndSet() primitive is augmented in the JCF classes by native code that can do compareAndSet on special internal parts of some Objects for some algorithms (using 'unsafe' access). The techniques are complex, relying often on the rules of inter-thread communication provided by volatile variables, the happens-before relation, and special kinds of lock-free 'retry loops' (which are not like spin locks in that they always produce progress). The compareAndSet() relies on special processor-specific instructions. It is possible for any Java code to use the compareAndSet() method on various concurrent classes for other purposes, to achieve lock-free or even wait-free concurrency, which provides finite latency. Lock-free techniques are simple in many common cases and with some simple collections like stacks.

A benchmark chart accompanying the original article (not reproduced here) compares put throughput, in thousands of puts, across implementations: a synchronizedMap-wrapped regular HashMap may not scale as well as ConcurrentHashMap, with the ordered ConcurrentNavigableMaps AirConcurrentMap and ConcurrentSkipListMap also shown. The flat spots may be rehashes producing tables bigger than the nursery, and ConcurrentHashMap takes more space; the system measured was an 8-core i7 at 2.5 GHz with -Xms5000m to prevent GC. GC and JVM process expansion change the curves considerably, and some internal lock-free techniques generate garbage on contention.

Yet another problem with mutual exclusion approaches is that the assumption of complete atomicity made by some single-threaded code creates sporadic, unacceptably long inter-Thread delays in a concurrent environment. In particular, Iterators and bulk operations like putAll() and others can take a length of time proportional to the Map size, delaying other Threads that expect predictably low latency for non-bulk operations. For example, a multi-threaded web server cannot allow some responses to be delayed by long-running iterations of other threads executing other requests that are searching for a particular value.
Related to this is the fact that Threads which lock the Map do not actually have any requirement ever to relinquish the lock, and an infinite loop in the owner Thread may propagate permanent blocking to other Threads. Slow owner Threads can sometimes be interrupted. Hash-based Maps are also subject to spontaneous delays during rehashing.

The java.util.concurrent packages' solution to the concurrent modification problem, the convoy problem, the predictable latency problem, and the multi-core problem includes an architectural choice called weak consistency. This choice means that reads like get(java.lang.Object) will not block even when updates are in progress, and it is allowable even for updates to overlap with themselves and with reads. Weak consistency allows, for example, the contents of a ConcurrentMap to change during an iteration of it by a single Thread.[7] The Iterators are designed to be used by one Thread at a time. So, for example, a Map containing two entries that are inter-dependent may be seen in an inconsistent way by a reader Thread during modification by another Thread. An update that is supposed to change the key of an Entry (k1,v) to an Entry (k2,v) atomically would need to do a remove(k1) and then a put(k2, v), while an iteration might miss the entry or see it in two places. Retrievals return the value for a given key that reflects the latest previous completed update for that key. Thus there is a 'happens-before' relation.

There is no way for ConcurrentMaps to lock the entire table. There is no possibility of ConcurrentModificationException as there is with inadvertent concurrent modification of non-concurrent Maps. The size() method may take a long time, because it may need to scan the entire Map in some way, as opposed to the corresponding non-concurrent Maps and other collections, which usually include a size field for fast access. When concurrent modifications are occurring, the results reflect the state of the Map at some time, but not necessarily a single consistent state, hence size(), isEmpty() and containsValue(java.lang.Object) may be best used only for monitoring.

There are some operations provided by ConcurrentMap that are not in Map – which it extends – to allow atomicity of modifications. The replace(k, v1, v2) will test that v1 is the current value of the Entry identified by k, and only if it is found is v1 replaced by v2, atomically. The new replace(k, v) will do a put(k, v) only if k is already in the Map. Also, putIfAbsent(k, v) will do a put(k, v) only if k is not already in the Map, and remove(k, v) will remove the Entry for k only if the value v is present. This atomicity can be important for some multi-threaded use cases, but is not related to the weak-consistency constraint.

For ConcurrentMaps, putIfAbsent(k, v), replace(k, v), replace(k, v1, v2), and remove(k, v) are atomic; each is equivalent to a non-atomic sequence of plain Map operations, sketched at the end of this passage.

Because Map and ConcurrentMap are interfaces, new methods cannot be added to them without breaking implementations. However, Java 1.8 added the capability for default interface implementations, and it added to the Map interface default implementations of some new methods: getOrDefault(Object, V), forEach(BiConsumer), replaceAll(BiFunction), computeIfAbsent(K, Function), computeIfPresent(K, BiFunction), compute(K, BiFunction), and merge(K, V, BiFunction).
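The non-atomic equivalents referenced above can be sketched as follows, in the style of the ConcurrentMap documentation; executed without a lock on a plain Map, each sequence could be interleaved by another thread, which is precisely what the atomic methods prevent.

    import java.util.Map;
    import java.util.Objects;

    public class Equivalents {
        // m.putIfAbsent(k, v) is atomic but behaves like:
        static <K, V> V putIfAbsent(Map<K, V> m, K k, V v) {
            if (!m.containsKey(k)) return m.put(k, v);
            else return m.get(k);
        }

        // m.replace(k, v) is atomic but behaves like:
        static <K, V> V replace(Map<K, V> m, K k, V v) {
            if (m.containsKey(k)) return m.put(k, v);
            else return null;
        }

        // m.replace(k, v1, v2) is atomic but behaves like:
        static <K, V> boolean replace(Map<K, V> m, K k, V v1, V v2) {
            if (m.containsKey(k) && Objects.equals(m.get(k), v1)) {
                m.put(k, v2);
                return true;
            }
            return false;
        }

        // m.remove(k, v) is atomic but behaves like:
        static <K, V> boolean remove(Map<K, V> m, K k, V v) {
            if (m.containsKey(k) && Objects.equals(m.get(k), v)) {
                m.remove(k);
                return true;
            }
            return false;
        }
    }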
The default implementations in Map do not guarantee atomicity, but the overriding defaults in ConcurrentMap use lock-free techniques to achieve atomicity, and existing ConcurrentMap implementations will automatically be atomic. The lock-free techniques may be slower than overrides in the concrete classes, so concrete classes may choose to implement them atomically or not, and document the concurrency properties. It is possible to use lock-free techniques with ConcurrentMaps because they include methods of a sufficiently high consensus number, namely infinity, meaning that any number of Threads may be coordinated.

Client code can apply the same lock-free pattern to its own use of a ConcurrentMap; this is not related to the internals of the ConcurrentMap but to the client code's use of it. For example, a value in the Map can be multiplied by a constant C atomically with a retry loop around replace(k, v1, v2); this could also be implemented with the Java 8 merge(), but the retry loop shows the overall lock-free pattern, which is more general. The putIfAbsent(k, v) is also useful when the entry for the key is allowed to be absent: the replace(k, v1, v2) does not accept null parameters, so sometimes a combination of the two is necessary. In other words, if v1 is null, then putIfAbsent(k, v2) is invoked, otherwise replace(k, v1, v2) is invoked. (This variant could be implemented with the Java 8 compute(), but again it shows the more general pattern.) Both patterns are sketched at the end of this passage.

The Java collections framework was designed and developed primarily by Joshua Bloch, and was introduced in JDK 1.2.[8] The original concurrency classes came from Doug Lea's[9] collection package.
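Both client-side patterns might look like the following sketch; the class and method names are illustrative.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class LockFreeClient {
        static final int C = 10;

        // Atomically multiply the existing value for k by C. The loop retries
        // until no other thread changed the value between the get and the
        // replace; a failed attempt means some other thread made progress.
        static void multiply(ConcurrentMap<String, Integer> m, String k) {
            for (;;) {
                Integer v1 = m.get(k);
                if (v1 == null) return;               // nothing to multiply
                if (m.replace(k, v1, v1 * C)) return; // succeeded atomically
            }
        }

        // Variant where the entry may be absent: replace() rejects nulls, so
        // putIfAbsent() handles the absent case instead.
        static void multiplyOrInit(ConcurrentMap<String, Integer> m, String k, int init) {
            for (;;) {
                Integer v1 = m.get(k);
                if (v1 == null) {
                    if (m.putIfAbsent(k, init) == null) return;
                } else if (m.replace(k, v1, v1 * C)) {
                    return;
                }
            }
        }

        public static void main(String[] args) {
            ConcurrentMap<String, Integer> m = new ConcurrentHashMap<>();
            m.put("x", 7);
            multiply(m, "x");
            multiplyOrInit(m, "y", 1);
            System.out.println(m); // {x=70, y=1}
        }
    }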
https://en.wikipedia.org/wiki/Java_ConcurrentMap
A test plan is a document detailing the objectives, resources, and processes for a specific test session for a software or hardware product. The plan typically contains a detailed understanding of the eventual workflow.

A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from test engineers.[1]

Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include a strategy for one or more of the following: A complex system may have a high-level test plan to address the overall requirements and supporting test plans to address the design details of subsystems and components.

Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: test coverage, test methods, and test responsibilities. These are also used in a formal test strategy.[2]

Test coverage in the test plan states what requirements will be verified during what stages of the product life. Test coverage is derived from design specifications and other requirements, such as safety standards or regulatory codes, where each requirement or specification of the design ideally will have one or more corresponding means of verification. Test coverage for different product life stages may overlap but will not necessarily be exactly the same for all stages. For example, some requirements may be verified during design verification test, but not repeated during acceptance test. Test coverage also feeds back into the design process, since the product may have to be designed to allow test access.

Test methods in the test plan state how test coverage will be implemented. Test methods may be determined by standards, regulatory agencies, or contractual agreement, or may have to be created new. Test methods also specify test equipment to be used in the performance of the tests and establish pass/fail criteria. Test methods used to verify hardware design requirements can range from very simple steps, such as visual inspection, to elaborate test procedures that are documented separately.

Test responsibilities include what organizations will perform the test methods and at each stage of the product life. This allows test organizations to plan, acquire or develop test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities also include what data will be collected and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a successful test plan should be a record or report of the verification of all design specifications and requirements as agreed upon by all parties.

IEEE 829-2008, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in defined stages of software testing, each stage potentially producing its own separate type of document.[3] These stages are: The IEEE documents that suggest what should be contained in a test plan are:
https://en.wikipedia.org/wiki/Test_plan
In cryptography, forward secrecy (FS), also known as perfect forward secrecy (PFS), is a feature of specific key-agreement protocols that gives assurances that session keys will not be compromised even if long-term secrets used in the session key exchange are compromised, limiting damage.[1][2][3] For TLS, the long-term secret is typically the private key of the server. Forward secrecy protects past sessions against future compromises of keys or passwords. By generating a unique session key for every session a user initiates, the compromise of a single session key will not affect any data other than that exchanged in the specific session protected by that particular key. This by itself is not sufficient for forward secrecy, which additionally requires that a long-term secret compromise does not affect the security of past session keys.

Forward secrecy protects data on the transport layer of a network that uses common transport layer security protocols, including OpenSSL,[4] when its long-term secret keys are compromised, as with the Heartbleed security bug. If forward secrecy is used, encrypted communications and sessions recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future, even if the adversary actively interfered, for example via a man-in-the-middle (MITM) attack.

The value of forward secrecy is that it protects past communication. This reduces the motivation for attackers to compromise keys. For instance, if an attacker learns a long-term key, but the compromise is detected and the long-term key is revoked and updated, relatively little information is leaked in a forward-secure system.

The value of forward secrecy depends on the assumed capabilities of an adversary. Forward secrecy has value if an adversary is assumed to be able to obtain secret keys from a device (read access) but is either detected or unable to modify the way session keys are generated in the device (full compromise). In some cases an adversary who can read long-term keys from a device may also be able to modify the functioning of the session key generator, as in the backdoored Dual Elliptic Curve Deterministic Random Bit Generator. If an adversary can make the random number generator predictable, then past traffic will be protected but all future traffic will be compromised.

The value of forward secrecy is limited not only by the assumption that an adversary will attack a server by only stealing keys and not modifying the random number generator used by the server, but also by the assumption that the adversary will only passively collect traffic on the communications link and not be active using a man-in-the-middle attack. Forward secrecy typically uses an ephemeral Diffie–Hellman key exchange to prevent reading past traffic. The ephemeral Diffie–Hellman key exchange is often signed by the server using a static signing key. If an adversary can steal (or obtain through a court order) this static (long-term) signing key, the adversary can masquerade as the server to the client and as the client to the server and implement a classic man-in-the-middle attack.[5]

The term "perfect forward secrecy" was coined by C. G.
Günther in 1990[6] and further discussed by Whitfield Diffie, Paul van Oorschot, and Michael James Wiener in 1992,[7] where it was used to describe a property of the Station-to-Station protocol.[8]

Forward secrecy has also been used to describe the analogous property of password-authenticated key agreement protocols where the long-term secret is a (shared) password.[9]

In 2000 the IEEE first ratified IEEE 1363, which establishes the related one-party and two-party forward secrecy properties of various standard key agreement schemes.[10]

An encryption system has the property of forward secrecy if plain-text (decrypted) inspection of the data exchange that occurs during the key agreement phase of session initiation does not reveal the key that was used to encrypt the remainder of the session. The following is a hypothetical example of a simple instant messaging protocol that employs forward secrecy: (1) Alice and Bob each generate a pair of long-term, asymmetric public and private keys and verify each other's public keys; (2) Alice and Bob use a key exchange algorithm such as Diffie–Hellman to securely agree on an ephemeral session key; (3) they exchange messages encrypted under that session key and then discard it, repeating from step 2 for each new message so that step 1 is never repeated.

Forward secrecy (achieved by generating new session keys for each message) ensures that past communications cannot be decrypted if one of the keys generated in an iteration of step 2 is compromised, since such a key is only used to encrypt a single message. Forward secrecy also ensures that past communications cannot be decrypted if the long-term private keys from step 1 are compromised. However, masquerading as Alice or Bob would be possible going forward if this occurred, possibly compromising all future messages.

Forward secrecy is designed to prevent the compromise of a long-term secret key from affecting the confidentiality of past conversations. However, forward secrecy cannot defend against a successful cryptanalysis of the underlying ciphers being used, since a cryptanalysis consists of finding a way to decrypt an encrypted message without the key, and forward secrecy only protects keys, not the ciphers themselves.[11] A patient attacker can capture a conversation whose confidentiality is protected through the use of public-key cryptography and wait until the underlying cipher is broken (e.g., large quantum computers could be created which allow the discrete logarithm problem to be computed quickly), a.k.a. harvest now, decrypt later attacks. This would allow the recovery of old plaintexts even in a system employing forward secrecy.

Non-interactive forward-secure key exchange protocols face additional threats that are not relevant to interactive protocols. In a message suppression attack, an attacker in control of the network may itself store messages while preventing them from reaching the intended recipient; as the messages are never received, the corresponding private keys may not be destroyed or punctured, so a compromise of the private key can lead to successful decryption. Proactively retiring private keys on a schedule mitigates, but does not eliminate, this attack. In a malicious key exhaustion attack, the attacker sends many messages to the recipient and exhausts the private key material, forcing a protocol to choose between failing closed (and enabling denial-of-service attacks) or failing open (and giving up some amount of forward secrecy).[12]

Most key exchange protocols are interactive, requiring bidirectional communication between the parties.
A protocol that permits the sender to transmit data without first needing to receive any replies from the recipient may be called non-interactive, or asynchronous, or zero round trip (0-RTT).[13][14]

Interactivity is onerous for some applications; for example, in a secure messaging system, it may be desirable to have a store-and-forward implementation, rather than requiring sender and recipient to be online at the same time. Loosening the bidirectionality requirement can also improve performance even where it is not a strict requirement, for example at connection establishment or resumption. These use cases have stimulated interest in non-interactive key exchange and, as forward security is a desirable property in a key exchange protocol, in non-interactive forward secrecy.[15][16] This combination has been identified as desirable since at least 1996.[17] However, combining forward secrecy and non-interactivity has proven challenging;[18] it had been suspected that forward secrecy with protection against replay attacks was impossible non-interactively, but it has been shown to be possible to achieve all three desiderata.[14]

Broadly, two approaches to non-interactive forward secrecy have been explored: pre-computed keys and puncturable encryption.[16]

With pre-computed keys, many key pairs are created and the public keys shared, with the private keys destroyed after a message has been received using the corresponding public key. This approach has been deployed as part of the Signal protocol.[19]

In puncturable encryption, the recipient modifies their private key after receiving a message in such a way that the new private key cannot read the message but the public key is unchanged. Ross J. Anderson informally described a puncturable encryption scheme for forward secure key exchange in 1997,[20] and Green & Miers (2015) formally described such a system,[21] building on the related scheme of Canetti, Halevi & Katz (2003), which modifies the private key according to a schedule so that messages sent in previous periods cannot be read with the private key from a later period.[18] Green & Miers (2015) make use of hierarchical identity-based encryption and attribute-based encryption, while Günther et al. (2017) use a different construction that can be based on any hierarchical identity-based scheme.[22] Dallmeier et al. (2020) experimentally found that modifying QUIC to use a 0-RTT forward secure and replay-resistant key exchange implemented with puncturable encryption incurred significantly increased resource usage, but not so much as to make practical use infeasible.[23]

Weak perfect forward secrecy (wPFS) is the weaker property whereby, when agents' long-term keys are compromised, the secrecy of previously established session keys is guaranteed, but only for sessions in which the adversary did not actively interfere. This notion, and the distinction between it and forward secrecy, was introduced by Hugo Krawczyk in 2005.[24][25] This weaker definition implicitly requires that full (perfect) forward secrecy maintains the secrecy of previously established session keys even in sessions where the adversary did actively interfere, or attempted to act as a man in the middle.
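To make the ephemeral key agreement described earlier concrete, here is a minimal sketch using only the standard Java cryptography API: each session runs a fresh ECDH agreement, and the derived session key is never stored, so a later compromise of either party's long-term keys reveals nothing about it. Hashing the shared secret stands in for a proper key derivation function, and authentication of the exchange (normally done with the long-term keys) is omitted.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.MessageDigest;
    import javax.crypto.KeyAgreement;
    import javax.crypto.spec.SecretKeySpec;

    public class EphemeralEcdh {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(256);

            // Fresh, per-session key pairs, discarded when the session ends.
            KeyPair alice = kpg.generateKeyPair();
            KeyPair bob = kpg.generateKeyPair();

            // Each side combines its own ephemeral private key with the peer's
            // ephemeral public key; both arrive at the same shared secret.
            byte[] aliceSecret = agree(alice, bob);
            byte[] bobSecret = agree(bob, alice);
            System.out.println(java.util.Arrays.equals(aliceSecret, bobSecret)); // true

            // Simplified derivation: hash the shared secret into an AES key.
            SecretKeySpec sessionKey = new SecretKeySpec(
                    MessageDigest.getInstance("SHA-256").digest(aliceSecret), "AES");
            // sessionKey now encrypts this session's traffic and is never stored.
        }

        static byte[] agree(KeyPair self, KeyPair peer) throws Exception {
            KeyAgreement ka = KeyAgreement.getInstance("ECDH");
            ka.init(self.getPrivate());
            ka.doPhase(peer.getPublic(), true);
            return ka.generateSecret();
        }
    }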
Forward secrecy is present in several protocol implementations, such as SSH, and as an optional feature in IPsec (RFC 2412). Off-the-Record Messaging, a cryptography protocol and library for many instant messaging clients, as well as OMEMO, which provides additional features such as multi-user functionality in such clients, both provide forward secrecy as well as deniable encryption.

In Transport Layer Security (TLS), cipher suites based on Diffie–Hellman key exchange (DHE-RSA, DHE-DSA) and elliptic curve Diffie–Hellman key exchange (ECDHE-RSA, ECDHE-ECDSA) are available. In theory, TLS could use forward secrecy since SSLv3, but many implementations did not offer forward secrecy or provided it only with lower-grade encryption.[26] TLS 1.3 removed support for RSA for key exchange, leaving Diffie–Hellman (with forward secrecy) as the sole algorithm for key exchange.[27]

OpenSSL supports forward secrecy using elliptic curve Diffie–Hellman since version 1.0,[28] with a computational overhead of approximately 15% for the initial handshake.[29]

The Signal Protocol uses the Double Ratchet Algorithm to provide forward secrecy.[30]

On the other hand, among popular protocols currently in use, WPA Personal did not support forward secrecy before WPA3.[31]

Since late 2011, Google provided forward secrecy with TLS by default to users of its Gmail service, Google Docs service, and encrypted search services.[28] Since November 2013, Twitter provided forward secrecy with TLS to its users.[32] Wikis hosted by the Wikimedia Foundation have all provided forward secrecy to users since July 2014[33] and have required the use of forward secrecy since August 2018.

Facebook reported as part of an investigation into email encryption that, as of May 2014, 74% of hosts that support STARTTLS also provide forward secrecy.[34] TLS 1.3, published in August 2018, dropped support for ciphers without forward secrecy. As of February 2019, 96.6% of web servers surveyed support some form of forward secrecy, and 52.1% will use forward secrecy with most browsers.[35]

At WWDC 2016, Apple announced that all iOS apps would need to use App Transport Security (ATS), a feature which enforces the use of HTTPS transmission. Specifically, ATS requires the use of an encryption cipher that provides forward secrecy.[36] ATS became mandatory for apps on January 1, 2017.[37]

The Signal messaging application employs forward secrecy in its protocol, notably differentiating it from messaging protocols based on PGP.[38]

Forward secrecy is supported on 92.6% of websites on modern browsers, while 0.3% of websites do not support forward secrecy at all, as of May 2024.[39]
https://en.wikipedia.org/wiki/Perfect_forward_secrecy
A quantum Turing machine (QTM) or universal quantum computer is an abstract machine used to model the effects of a quantum computer. It provides a simple model that captures all of the power of quantum computation – that is, any quantum algorithm can be expressed formally as a particular quantum Turing machine. However, the computationally equivalent quantum circuit is a more common model.[1][2]

Quantum Turing machines can be related to classical and probabilistic Turing machines in a framework based on transition matrices. That is, a matrix can be specified whose product with the matrix representing a classical or probabilistic machine provides the quantum probability matrix representing the quantum machine. This was shown by Lance Fortnow.[3]

A way of understanding the quantum Turing machine (QTM) is that it generalizes the classical Turing machine (TM) in the same way that the quantum finite automaton (QFA) generalizes the deterministic finite automaton (DFA). In essence, the internal states of a classical TM are replaced by pure or mixed states in a Hilbert space; the transition function is replaced by a collection of unitary matrices that map the Hilbert space to itself.[4]

That is, a classical Turing machine is described by a 7-tuple $M = \langle Q, \Gamma, b, \Sigma, \delta, q_0, F \rangle$. See the formal definition of a Turing machine for a more in-depth understanding of each of the elements in this tuple. A three-tape quantum Turing machine (one tape holding the input, a second tape holding intermediate calculation results, and a third tape holding output) replaces each of these elements with its quantum counterpart.

The above is merely a sketch of a quantum Turing machine, rather than its formal definition, as it leaves vague several important details: for example, how often a measurement is performed; see, for example, the difference between a measure-once and a measure-many QFA. This question of measurement affects the way in which writes to the output tape are defined.

In 1980 and 1982, physicist Paul Benioff published articles[5][6] that first described a quantum mechanical model of Turing machines. A 1985 article written by Oxford University physicist David Deutsch further developed the idea of quantum computers by suggesting that quantum gates could function in a similar fashion to traditional digital computing binary logic gates.[4]

Iriyama, Ohya, and Volovich have developed a model of a linear quantum Turing machine (LQTM). This is a generalization of a classical QTM that has mixed states and that allows irreversible transition functions. These allow the representation of quantum measurements without classical outcomes.[7]

A quantum Turing machine with postselection was defined by Scott Aaronson, who showed that the class of polynomial time on such a machine (PostBQP) is equal to the classical complexity class PP.[8]
https://en.wikipedia.org/wiki/Quantum_Turing_machine
Hierarchical clustering is one method for finding community structures in a network. The technique arranges the network into a hierarchy of groups according to a specified weight function. The data can then be represented in a tree structure known as a dendrogram. Hierarchical clustering can be either agglomerative or divisive, depending on whether one proceeds through the algorithm by adding links to or removing links from the network, respectively. One divisive technique is the Girvan–Newman algorithm.

In the hierarchical clustering algorithm, a weight $W_{ij}$ is first assigned to each pair of vertices $(i,j)$ in the network. The weight, which can vary depending on the implementation (see section below), is intended to indicate how closely related the vertices are. Then, starting with all the nodes in the network disconnected, links are added between pairs of nodes in order of decreasing weight (in the divisive case, one starts from the original network and removes links in order of increasing weight). As links are added, connected subsets begin to form; these represent the network's community structures. The components at each iterative step are always a subset of other structures. Hence, the subsets can be represented using a tree diagram, or dendrogram. Horizontal slices of the tree at a given level indicate the communities that exist above and below a given value of the weight.

There are many possible weights for use in hierarchical clustering algorithms. The specific weight used is dictated by the data as well as by considerations of computational speed. Additionally, the communities found in the network are highly dependent on the choice of weighting function. Hence, when compared to real-world data with a known community structure, the various weighting techniques have met with varying degrees of success. Two weights that have been used previously with varying success are the number of node-independent paths between each pair of vertices and the total number of paths between vertices, weighted by the length of the path. One disadvantage of these weights, however, is that both weighting schemes tend to separate single peripheral vertices from their rightful communities because of the small number of paths going to these vertices. For this reason, their use in hierarchical clustering techniques is far from optimal.[1]

Edge betweenness centrality has been used successfully as a weight in the Girvan–Newman algorithm.[1] This technique is similar to a divisive hierarchical clustering algorithm, except that the weights are recalculated at each step. The change in modularity of the network with the addition of a node has also been used successfully as a weight.[2] This method provides a computationally less costly alternative to the Girvan–Newman algorithm while yielding similar results.
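A minimal agglomerative version of the procedure described above can be sketched in Python (an illustration, not a reference implementation): pairs are joined from highest to lowest weight, and the recorded merge sequence is exactly the information a dendrogram displays. The example weight function and the two-triangle toy network are assumptions made for the demonstration.

```python
from itertools import combinations

def agglomerative_communities(nodes, weight):
    """Toy agglomerative clustering: add links from highest to lowest
    weight and record each merge of two communities.

    `weight(i, j)` is the pairwise weight function; its choice is up
    to the caller, as discussed in the text.
    """
    parent = {v: v for v in nodes}

    def find(v):                      # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    edges = sorted(combinations(nodes, 2),
                   key=lambda e: weight(*e), reverse=True)
    merges = []                       # the dendrogram, as a merge sequence
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # this pair joins two communities
            parent[rj] = ri
            merges.append((weight(i, j), i, j))
    return merges

# Example: two triangles joined by one weak link; the weak link is
# merged last, so cutting the dendrogram there recovers the triangles.
w = {(1, 2): 3, (1, 3): 3, (2, 3): 3,
     (4, 5): 3, (4, 6): 3, (5, 6): 3, (3, 4): 1}
print(agglomerative_communities(
    range(1, 7), lambda i, j: w.get((i, j), w.get((j, i), 0))))
```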
https://en.wikipedia.org/wiki/Hierarchical_clustering_of_networks
Transfer learning (TL) is a technique in machine learning (ML) in which knowledge learned from a task is re-used in order to boost performance on a related task.[1] For example, in image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks. This topic is related to the psychological literature on transfer of learning, although practical ties between the two fields are limited. Reusing or transferring information from previously learned tasks to new tasks has the potential to significantly improve learning efficiency.[2] Since transfer learning makes use of training with multiple objective functions, it is related to cost-sensitive machine learning and multi-objective optimization.[3]

In 1976, Bozinovski and Fulgosi published a paper addressing transfer learning in neural network training.[4][5] The paper gives a mathematical and geometrical model of the topic. In 1981, a report considered the application of transfer learning to a dataset of images representing letters of computer terminals, experimentally demonstrating positive and negative transfer learning.[6] In 1992, Lorien Pratt formulated the discriminability-based transfer (DBT) algorithm.[7] By 1998, the field had advanced to include multi-task learning,[8] along with more formal theoretical foundations.[9] Influential publications on transfer learning include the book Learning to Learn in 1998,[10] a 2009 survey[11] and a 2019 survey.[12] Ng said in his NIPS 2016 tutorial[13][14] that TL would become the next driver of machine learning commercial success after supervised learning. In the 2020 paper "Rethinking Pre-training and Self-training",[15] Zoph et al. reported that pre-training can hurt accuracy, and advocated self-training instead.

The definition of transfer learning is given in terms of domains and tasks. A domain $\mathcal{D}$ consists of a feature space $\mathcal{X}$ and a marginal probability distribution $P(X)$, where $X=\{x_{1},\dots,x_{n}\}\in \mathcal{X}$. Given a specific domain $\mathcal{D}=\{\mathcal{X},P(X)\}$, a task consists of two components: a label space $\mathcal{Y}$ and an objective predictive function $f:\mathcal{X}\to \mathcal{Y}$. The function $f$ is used to predict the corresponding label $f(x)$ of a new instance $x$.
This task, denoted by $\mathcal{T}=\{\mathcal{Y},f(x)\}$, is learned from training data consisting of pairs $\{x_{i},y_{i}\}$, where $x_{i}\in \mathcal{X}$ and $y_{i}\in \mathcal{Y}$.[16] Given a source domain $\mathcal{D}_{S}$ and learning task $\mathcal{T}_{S}$, and a target domain $\mathcal{D}_{T}$ and learning task $\mathcal{T}_{T}$, where $\mathcal{D}_{S}\neq \mathcal{D}_{T}$ or $\mathcal{T}_{S}\neq \mathcal{T}_{T}$, transfer learning aims to help improve the learning of the target predictive function $f_{T}(\cdot)$ in $\mathcal{D}_{T}$ using the knowledge in $\mathcal{D}_{S}$ and $\mathcal{T}_{S}$.[16]

Algorithms are available for transfer learning in Markov logic networks[17] and Bayesian networks.[18] Transfer learning has been applied to cancer subtype discovery,[19] building utilization,[20][21] general game playing,[22] text classification,[23][24] digit recognition,[25] medical imaging and spam filtering.[26]

In 2020, it was discovered that, because of their similar physical natures, transfer learning is possible between electromyographic (EMG) signals from the muscles and the classification of electroencephalographic (EEG) brainwaves, from the gesture recognition domain to the mental state recognition domain. It was noted that this relationship worked in both directions, showing that EEG can likewise be used to classify EMG.[27] The experiments noted that the accuracy of neural networks and convolutional neural networks was improved[28] through transfer learning both prior to any learning (compared to a standard random weight initialization) and at the end of the learning process (asymptote). That is, results are improved by exposure to another domain. Moreover, the end-user of a pre-trained model can change the structure of its fully-connected layers to improve performance, as sketched below.[29]
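A common way to exploit this in practice (one possibility, not the method of any particular paper cited above) is to take a network pre-trained on a large source task, freeze its weights, and replace its fully-connected head for the target task. The Python sketch below uses PyTorch and torchvision; the 10-class target task and the dummy batch are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Source task: ImageNet classification (the pretrained weights).
# Target task: a hypothetical 10-class problem, e.g. truck types.
# torchvision >= 0.13; older releases use models.resnet18(pretrained=True).
model = models.resnet18(weights="IMAGENET1K_V1")

for param in model.parameters():          # freeze the transferred backbone
    param.requires_grad = False

# Replace the fully-connected head; only this new layer is trained
# on the target domain.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
x = torch.randn(8, 3, 224, 224)           # 8 fake RGB images
y = torch.randint(0, 10, (8,))            # 8 fake target labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```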
https://en.wikipedia.org/wiki/Transfer_learning
In Boolean logic, logical NOR,[1] non-disjunction, or joint denial[1] is a truth-functional operator which produces a result that is the negation of logical or. That is, a sentence of the form (p NOR q) is true precisely when neither p nor q is true, i.e. when both p and q are false. It is logically equivalent to $\neg(p\lor q)$ and $\neg p\land \neg q$, where the symbol $\neg$ signifies logical negation, $\lor$ signifies OR, and $\land$ signifies AND. Non-disjunction is usually denoted as $\downarrow$ or $\overline{\vee}$ or $X$ (prefix) or $\operatorname{NOR}$.

As with its dual, the NAND operator (also known as the Sheffer stroke, symbolized as $\uparrow$, $\mid$ or $/$), NOR can be used by itself, without any other logical operator, to constitute a logical formal system (making NOR functionally complete). The computer used in the spacecraft that first carried humans to the Moon, the Apollo Guidance Computer, was constructed entirely from NOR gates with three inputs.[2]

The NOR operation is a logical operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false. In other words, it produces a value of false if and only if at least one operand is true. Accordingly, the truth table of $A\downarrow B$ is true only in the row where both $A$ and $B$ are false, and false in the other three rows. The logical NOR $\downarrow$ is the negation of the disjunction: $A\downarrow B\leftrightarrow \neg(A\lor B)$.

Peirce was the first to show the functional completeness of non-disjunction, though he did not publish his result.[3][4] Peirce used $\overline{\curlywedge}$ for non-conjunction and $\curlywedge$ for non-disjunction (in fact, Peirce himself used only $\curlywedge$ and did not introduce $\overline{\curlywedge}$; the disambiguated usage is due to Peirce's editors).[4] Peirce called $\curlywedge$ the ampheck (from Ancient Greek ἀμφήκης, amphēkēs, "cutting both ways").[4]

In 1911, Stamm was the first to publish a description of both non-conjunction (using $\sim$, the Stamm hook) and non-disjunction (using $*$, the Stamm star), and showed their functional completeness.[5][6] Note that most uses of $\sim$ in logical notation use it for negation. In 1913, Sheffer described non-disjunction and showed its functional completeness. Sheffer used $\mid$ for non-conjunction and $\wedge$ for non-disjunction. In 1935, Webb described non-disjunction for $n$-valued logic and used $\mid$ for the operator, so some people call it the Webb operator,[7] Webb operation[8] or Webb function.[9] In 1940, Quine also described non-disjunction and used $\downarrow$ for the operator,[10] so some people call the operator the Peirce arrow or the Quine dagger.
In 1944, Church also described non-disjunction and used $\overline{\vee}$ for the operator.[11] In 1954, Bocheński used $X$, as in $Xpq$, for non-disjunction in Polish notation.[12] APL uses a glyph ⍱ that combines a ∨ with a ~.[13]

NOR is commutative but not associative, which means that $P\downarrow Q\leftrightarrow Q\downarrow P$ but $(P\downarrow Q)\downarrow R\not\leftrightarrow P\downarrow (Q\downarrow R)$.[14]

The logical NOR, taken by itself, is a functionally complete set of connectives.[15] This can be proved by first showing, with a truth table, that $\neg A$ is truth-functionally equivalent to $A\downarrow A$.[16] Then, since $A\downarrow B$ is truth-functionally equivalent to $\neg(A\lor B)$,[16] and $A\lor B$ is equivalent to $\neg(\neg A\land \neg B)$,[16] the logical NOR suffices to define the set of connectives $\{\land,\lor,\neg\}$,[16] which is shown to be truth-functionally complete by the Disjunctive Normal Form Theorem.[16] This may also be seen from the fact that logical NOR does not possess any of the five qualities (truth-preserving, false-preserving, linear, monotonic, self-dual) required to be absent from at least one member of a set of functionally complete operators.

NOR has the interesting feature that all other logical operators can be expressed by interlaced NOR operations; the logical NAND operator also has this ability. Expressed in terms of NOR $\downarrow$, the usual operators of propositional logic are: $\neg p = p\downarrow p$; $p\lor q = (p\downarrow q)\downarrow (p\downarrow q)$; $p\land q = (p\downarrow p)\downarrow (q\downarrow q)$; and $p\to q = ((p\downarrow p)\downarrow q)\downarrow ((p\downarrow p)\downarrow q)$.
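These identities are easy to machine-check. The following Python sketch (an illustration added here, not from the article) defines NOR and derives the other connectives from it alone, verifying them against Python's built-in operators over all truth assignments.

```python
def NOR(p, q):
    """Joint denial: true exactly when neither p nor q is true."""
    return not (p or q)

# Every other connective, built from NOR alone:
def NOT(p):        return NOR(p, p)
def OR(p, q):      return NOT(NOR(p, q))
def AND(p, q):     return NOR(NOT(p), NOT(q))
def IMPLIES(p, q): return OR(NOT(p), q)

# Verify the derived connectives against the built-ins.
for p in (False, True):
    assert NOT(p) == (not p)
    for q in (False, True):
        assert OR(p, q) == (p or q)
        assert AND(p, q) == (p and q)
        assert IMPLIES(p, q) == ((not p) or q)
print("All connectives recovered from NOR alone.")
```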
https://en.wikipedia.org/wiki/Ampheck
Standard Apple Numerics Environment (SANE) was Apple Computer's software implementation of IEEE 754 floating-point arithmetic. It was available for the 6502-based Apple II and Apple III models and came standard with the 65816-based Apple IIGS and the 680x0-based Macintosh and Lisa models. Later Macintosh models had hardware floating-point arithmetic via 68040 microprocessors or 68881 floating-point coprocessors, but still included SANE for compatibility with existing software. SANE was replaced in the early 1990s by the Floating Point C Extensions, which provided access to hardware floating-point arithmetic, as Apple switched from 680x0 to PowerPC microprocessors.
https://en.wikipedia.org/wiki/Standard_Apple_Numerics_Environment
In mathematics, the Kampé de Fériet function is a two-variable generalization of the generalized hypergeometric series, introduced by Joseph Kampé de Fériet. The Kampé de Fériet function is given by

$$F_{l:m;n}^{p:q;k}\left[{\begin{matrix}a_{1},\dots ,a_{p}:b_{1},\dots ,b_{q};c_{1},\dots ,c_{k}\\\alpha _{1},\dots ,\alpha _{l}:\beta _{1},\dots ,\beta _{m};\gamma _{1},\dots ,\gamma _{n}\end{matrix}};x,y\right]=\sum _{r,s=0}^{\infty }{\frac {\prod _{j=1}^{p}(a_{j})_{r+s}\,\prod _{j=1}^{q}(b_{j})_{r}\,\prod _{j=1}^{k}(c_{j})_{s}}{\prod _{j=1}^{l}(\alpha _{j})_{r+s}\,\prod _{j=1}^{m}(\beta _{j})_{r}\,\prod _{j=1}^{n}(\gamma _{j})_{s}}}\cdot {\frac {x^{r}y^{s}}{r!\,s!}},$$

where $(\cdot)_{n}$ denotes the Pochhammer symbol. The general sextic equation can be solved in terms of Kampé de Fériet functions.[1]
https://en.wikipedia.org/wiki/Kamp%C3%A9_de_F%C3%A9riet_function
In computer science, a single address space operating system (or SASOS) is an operating system that provides only one globally shared address space for all processes. In a single address space operating system, numerically identical (virtual memory) logical addresses in different processes all refer to exactly the same byte of data.[1]

In a traditional OS with private per-process address spaces, memory protection is based on address space boundaries ("address space isolation"). Single address space operating systems instead make translation and protection orthogonal, which in no way weakens protection.[2][3] The core advantage is that pointers (i.e. memory references) have global validity, meaning that a pointer's meaning is independent of the process using it. This allows pointer-connected data structures to be shared across processes, and to be made persistent, i.e. stored on backing store.

Some processor architectures directly support protection independent of translation. On such architectures, a SASOS may be able to perform context switches faster than a traditional OS. Such architectures include Itanium and Version 5 of the Arm architecture, as well as capability architectures such as CHERI.[4]

A SASOS should not be confused with a flat memory model, which provides no address translation and generally no memory protection. In contrast, a SASOS makes protection orthogonal to translation: it may be possible to name a data item (i.e. know its virtual address) while not being able to access it. A number of SASOS projects use hardware-based protection; related are operating systems that provide protection through language-level type safety.
https://en.wikipedia.org/wiki/Single_address_space_operating_system
The language of mathematics or mathematical language is an extension of natural language (for example English) that is used in mathematics and in science for expressing results (scientific laws, theorems, proofs, logical deductions, etc.) with concision, precision and unambiguity. A consequence of its distinctive features is that a mathematical text is generally not understandable without some prerequisite knowledge. For example, the sentence "a free module is a module that has a basis" is perfectly correct, although it appears as nothing more than grammatically correct nonsense to a reader who does not know the definitions of basis, module, and free module.

H. B. Williams, an electrophysiologist, wrote in 1927:

Now mathematics is both a body of truth and a special language, a language more carefully defined and more highly abstracted than our ordinary medium of thought and expression. Also it differs from ordinary languages in this important particular: it is subject to rules of manipulation. Once a statement is cast into mathematical form it may be manipulated in accordance with these rules and every configuration of the symbols will represent facts in harmony with and dependent on those contained in the original statement. Now this comes very close to what we conceive the action of the brain structures to be in performing intellectual acts with the symbols of ordinary language. In a sense, therefore, the mathematician has been able to perfect a device through which a part of the labor of logical thought is carried on outside the central nervous system with only that supervision which is requisite to manipulate the symbols in accordance with the rules.[1]: 291
https://en.wikipedia.org/wiki/Language_of_mathematics
Cabir (also known as Caribe, SymbOS/Cabir, Symbian/Cabir and EPOC.cabir) is the name of a computer worm developed in 2004[1] that is designed to infect mobile phones running Symbian OS. It is believed to be the first computer worm capable of infecting mobile phones.[2] When a phone is infected with Cabir, the message "Caribe" is displayed on the phone's display every time the phone is turned on. The worm then attempts to spread to other phones in the area using wireless Bluetooth signals. Several firms subsequently released tools to remove the worm, the first of which was the Australian business TSG Pacific.[3]

The worm can attack and replicate on Bluetooth-enabled Series 60 phones. It tries to send itself to all Bluetooth-enabled devices that support the "Object Push Profile", which may also be non-Symbian phones, desktop computers, or even printers. The worm spreads as a .sis file installed in the Apps directory. Cabir does not spread if the user does not accept the file transfer or does not agree to the installation, though some older phones would keep displaying pop-ups as Cabir re-sent itself, rendering the UI useless until "yes" was clicked. Cabir is the first mobile malware ever discovered.[4] While the worm is considered harmless because it replicates but performs no other activity, it will shorten the battery life of portable devices because of its constant scanning for other Bluetooth-enabled devices. Cabir was named by the employees of Kaspersky Lab after their colleague Elena Kabirova.[5]

Mabir, a variant of Cabir, is capable of spreading not only via Bluetooth but also via MMS. By sending out copies of itself as a .sis file over cellular networks, it can affect even users who are outside the 10 m range of Bluetooth.
https://en.wikipedia.org/wiki/Cabir_(computer_worm)
Statistical relational learning (SRL) is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure.[1][2] Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming. Significant contributions to the field have been made since the late 1990s.[1]

As is evident from the characterization above, the field is not strictly limited to learning aspects; it is equally concerned with reasoning (specifically probabilistic inference) and knowledge representation. Therefore, alternative terms that reflect the main foci of the field include statistical relational learning and reasoning (emphasizing the importance of reasoning) and first-order probabilistic languages (emphasizing the key properties of the languages with which models are represented). Another term that is sometimes used in the literature is relational machine learning (RML). A number of canonical tasks are associated with statistical relational learning.[3]

One of the fundamental design goals of the representation formalisms developed in SRL is to abstract away from concrete entities and to represent instead general principles that are intended to be universally applicable. Since there are countless ways in which such principles can be represented, many representation formalisms have been proposed in recent years.[1]
https://en.wikipedia.org/wiki/Statistical_relational_learning
Many problems inmathematical programmingcan be formulated asproblems onconvex setsorconvex bodies. Six kinds of problems are particularly important:[1]: Sec.2optimization,violation,validity,separation,membershipandemptiness. Each of these problems has a strong (exact) variant, and a weak (approximate) variant. In all problem descriptions,Kdenotes acompactandconvex setinRn. The strong variants of the problems are:[1]: 47 Closely related to the problems on convex sets is the following problem on aconvex functionf:Rn→R: From the definitions, it is clear that algorithms for some of the problems can be used to solve other problems in oracle-polynomial time: The solvability of a problem crucially depends on the nature ofKand the wayKit is represented. For example: Each of the above problems has a weak variant, in which the answer is given only approximately. To define the approximation, we define the following operations on convex sets:[1]: 6 Using these notions, the weak variants are:[1]: 50 Analogously to the strong variants, algorithms for some of the problems can be used to solve other problems in oracle-polynomial time: Some of these weak variants can be slightly strengthened.[1]: Rem.2.1.5(a)For example, WVAL with inputsc,t' =t+ε/2 andε' =ε/2 does one of the following: Besides these trivial implications, there are highly non-trivial implications, whose proof relies on theellipsoid method. Some of these implications require additional information about theconvex bodyK. In particular, besides the number of dimensionsn, the following information may be needed:[1]: 53 The following can be done in oracle-polynomial time:[1]: Sec.4 The following implications use thepolar setofK- defined asK∗:={y∈Rn:yTx≤1for allx∈K}{\displaystyle K^{*}:=\{y\in \mathbb {R} ^{n}:y^{T}x\leq 1{\text{ for all }}x\in K\}}. Note thatK**=K. Some of the above implications provably do not work without the additional information.[1]: Sec.4.5 Using the above basic problems, one can solve several geometric problems related to convex bodies. In particular, one can find an approximateJohn ellipsoidin oracle-polynomial time:[1]: Sec.4.6 These results imply that it is possible to approximate any norm by anellipsoidal norm. Specifically, suppose a normNis given by aweak norm oracle: for every vectorxinQnand every rationalε>0, it returns a rational numberrsuch that |N(x)-r|<ε. Suppose we also know a constantc1that gives a lower bound on the ratio of N(x) to the Euclidean norm,c1‖x‖≤N(x){\displaystyle c_{1}\|x\|\leq N(x)}Then we can compute in oracle-polynomial time a linear transformationTofRnsuch that, for allxinRn,‖Tx‖≤N(x)≤n(n+1)‖Tx‖{\displaystyle \|Tx\|\leq N(x)\leq {\sqrt {n(n+1)}}\|Tx\|}. It is also possible to approximate thediameterand thewidthofK: Some problems not yet solved (as of 1993) are whether it is possible to compute in polytime the volume, the center of gravity or the surface area of a convex body given by a separation oracle. Some binary operations on convex sets preserve the algorithmic properties of the various problems. In particular, given two convex setsKandL:[1]: Sec.4.7 In some cases, an oracle for a weak problem can be used to solve the corresponding strong problem. 
An algorithm for WMEM, given circumscribed radiusRand inscribe radiusrand interior pointa0, can solve the following slightly stronger membership problem (still weaker than SMEM): given a vectoryinQn, and a rationalε>0, either assert thatyinS(K,ε), or assert thatynot inK.The proof is elementary and uses a single call to the WMEM oracle.[1]: 108 Suppose now thatKis apolyhedron. Then, many oracles to weak problems can be used to solve the corresponding strong problems in oracle-polynomial time. The reductions require an upper bound on the representation complexity (facet complexityorvertex complexity) of the polyhedron:[1]: Sec. 6.3 The proofs use results onsimultaneous diophantine approximation. How essential is the additional information for the above reductions?[1]: 173 Using the previous results, it is possible to prove implications between strong variants. The following can be done in oracle-polynomial time for awell-described polyhedron- a polyhedron for which an upper bound on therepresentation complexityis known:[1]: Sec.6.4 So SSEP, SVIOL and SOPT are all polynomial-time equivalent. This equivalence, in particular, impliesKhachian's proof thatlinear programmingcan be solved in polynomial time,[1]: Thm.6.4.12since when a polyhedron is given by explicit linear inequalities, a SSEP oracle is trivial to implement. Moreover, a basic optimal dual solution can also be found in polytime.[1]: Thm.6.5.14 Note that the above theorems do not require an assumption of full-dimensionality or a lower bound on the volume. Other reductions cannot be made without additional information: Jain[5]extends one of the above theorems to convex sets that are not polyhedra and not well-described. He only requires a guarantee that the convex set contains at leastone point(not necessarily a vertex) with a bounded representation length. He proves that, under this assumption, SNEMPT can be solved (a point in the convex set can be found) in polytime.[5]: Thm.12Moreover, the representation length of the found point is at most P(n) times the given bound, where P is some polynomial function.[5]: Thm.13 Using the above basic problems, one can solve several geometric problems related to nonempty polytopes and polyhedra with a bound on the representation complexity, in oracle-polynomial time, given an oracle to SSEP, SVIOL or SOPT:[1]: Sec.6.5
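As a concrete illustration of the oracle model used throughout this article, the sketch below (added here; the Euclidean ball is merely the simplest convex body for which the oracle has a closed form) implements a strong separation oracle in Python. It also shows the trivial direction noted above: an answer to SSEP immediately answers SMEM.

```python
import numpy as np

def separation_oracle_ball(center, radius, y, tol=1e-9):
    """Strong separation (SSEP) for the ball K = {x : ||x - center|| <= radius}.

    Returns ("member", None) if y lies in K; otherwise returns
    ("separated", c) with a unit vector c such that c.y > max_{x in K} c.x,
    i.e. c defines a hyperplane separating y from K.
    """
    d = np.asarray(y, dtype=float) - np.asarray(center, dtype=float)
    dist = np.linalg.norm(d)
    if dist <= radius + tol:          # tol only guards against rounding
        return "member", None
    c = d / dist
    # For the ball, max over x in K of c.x equals c.center + radius,
    # while c.y = c.center + dist > c.center + radius, so c separates.
    return "separated", c

# Membership (SMEM) is the special case that discards the hyperplane:
def membership_oracle_ball(center, radius, y):
    return separation_oracle_ball(center, radius, y)[0] == "member"

print(separation_oracle_ball(np.zeros(3), 1.0, np.array([0.5, 0.5, 0.5])))
print(separation_oracle_ball(np.zeros(3), 1.0, np.array([2.0, 0.0, 0.0])))
```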
https://en.wikipedia.org/wiki/Algorithmic_problems_on_convex_sets
Inquantum field theory,partition functionsaregenerating functionalsforcorrelation functions, making them key objects of study in thepath integral formalism. They are theimaginary timeversions ofstatistical mechanicspartition functions, giving rise to a close connection between these two areas of physics. Partition functions can rarely be solved for exactly, althoughfree theoriesdo admit such solutions. Instead, aperturbativeapproach is usually implemented, this being equivalent to summing overFeynman diagrams. In ad{\displaystyle d}-dimensional field theory with a realscalar fieldϕ{\displaystyle \phi }andactionS[ϕ]{\displaystyle S[\phi ]}, the partition function is defined in the path integral formalism as thefunctional[1] whereJ(x){\displaystyle J(x)}is a fictitioussource current. It acts as a generating functional for arbitrary n-point correlation functions The derivatives used here arefunctional derivativesrather than regular derivatives since they are acting on functionals rather than regular functions. From this it follows that an equivalent expression for the partition function reminiscent to apower seriesin source currents is given by[2] Incurved spacetimesthere is an added subtlety that must be dealt with due to the fact that the initialvacuum stateneed not be the same as the final vacuum state.[3]Partition functions can also be constructed for composite operators in the same way as they are for fundamental fields. Correlation functions of these operators can then be calculated as functional derivatives of these functionals.[4]For example, the partition function for a composite operatorO(x){\displaystyle {\mathcal {O}}(x)}is given by Knowing the partition function completely solves the theory since it allows for the direct calculation of all of its correlation functions. However, there are very few cases where the partition function can be calculated exactly. While free theories do admit exact solutions, interacting theories generally do not. Instead the partition function can be evaluated at weakcouplingperturbatively, which amounts to regular perturbation theory using Feynman diagrams withJ{\displaystyle J}insertions on the external legs.[5]The symmetry factors for these types of diagrams differ from those of correlation functions since all external legs have identicalJ{\displaystyle J}insertions that can be interchanged, whereas the external legs of correlation functions are all fixed at specific coordinates and are therefore fixed. By performing aWick transformation, the partition function can be expressed inEuclideanspacetime as[6] whereSE{\displaystyle S_{E}}is the Euclidean action andxE{\displaystyle x_{E}}are Euclidean coordinates. This form is closely connected to the partition function in statistical mechanics, especially since the EuclideanLagrangianis usually bounded from below in which case it can be interpreted as anenergydensity. It also allows for the interpretation of the exponential factor as a statistical weight for the field configurations, with larger fluctuations in the gradient or field values leading to greater suppression. This connection with statistical mechanics also lends additional intuition for how correlation functions should behave in a quantum field theory. Most of the same principles of the scalar case hold for more general theories with additional fields. Each field requires the introduction of its own fictitious current, withantiparticlefields requiring their own separate currents. 
Acting on the partition function with a derivative of a current brings down its associated field from the exponential, allowing for the construction of arbitrary correlation functions. After differentiation, the currents are set to zero when correlation functions in a vacuum state are desired, but the currents can also be set to take on particular values to yield correlation functions in non-vanishing background fields. For partition functions withGrassmannvaluedfermion fields, the sources are also Grassmann valued.[7]For example, a theory with a singleDirac fermionψ(x){\displaystyle \psi (x)}requires the introduction of two Grassmann currentsη{\displaystyle \eta }andη¯{\displaystyle {\bar {\eta }}}so that the partition function is Functional derivatives with respect toη¯{\displaystyle {\bar {\eta }}}give fermion fields while derivatives with respect toη{\displaystyle \eta }give anti-fermion fields in the correlation functions. Athermal field theoryattemperatureT{\displaystyle T}is equivalent in Euclidean formalism to a theory with acompactifiedtemporal direction of lengthβ=1/T{\displaystyle \beta =1/T}. Partition functions must be modified appropriately by imposing periodicity conditions on the fields and the Euclidean spacetime integrals This partition function can be taken as the definition of the thermal field theory in imaginary time formalism.[8]Correlation functions are acquired from the partition function through the usual functional derivatives with respect to currents The partition function can be solved exactly in free theories bycompleting the squarein terms of the fields. Since a shift by a constant does not affect the path integralmeasure, this allows for separating the partition function into a constant of proportionalityN{\displaystyle N}arising from the path integral, and a second term that only depends on the current. For the scalar theory this yields whereΔF(x−y){\displaystyle \Delta _{F}(x-y)}is the position space Feynmanpropagator This partition function fully determines the free field theory. In the case of a theory with a single free Dirac fermion, completing the square yields a partition function of the form whereΔD(x−y){\displaystyle \Delta _{D}(x-y)}is the position space Dirac propagator
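For concreteness, the key displays referred to above can be written out. The following is a reconstruction using one common convention (factors of $i$ and overall signs attached to the propagator vary between textbooks), not a quotation of the article's own equations. The generating functional for a real scalar field, and its free-field evaluation after completing the square, are

$$Z[J]=\int {\mathcal {D}}\phi \,\exp \!\left(iS[\phi ]+i\int d^{d}x\,J(x)\,\phi (x)\right),\qquad Z_{\text{free}}[J]=N\exp \!\left(-{\frac {i}{2}}\int d^{d}x\,d^{d}y\,J(x)\,\Delta _{F}(x-y)\,J(y)\right),$$

so that $n$-point correlation functions follow by functional differentiation, $\langle \phi (x_{1})\cdots \phi (x_{n})\rangle =(-i)^{n}\,Z[0]^{-1}\,\delta ^{n}Z[J]/\delta J(x_{1})\cdots \delta J(x_{n})\,\big|_{J=0}$.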
https://en.wikipedia.org/wiki/Partition_function_(quantum_field_theory)
Process mining is a family of techniques for analyzing event data to understand and improve operational processes. Part of the fields of data science and process management, process mining is generally built on logs that contain a case ID (a unique identifier for a particular process instance), an activity (a description of the event that is occurring), a timestamp, and sometimes other information such as resources, costs, and so on.[1][2]

There are three main classes of process mining techniques: process discovery, conformance checking, and process enhancement. In the past, terms like workflow mining and automated business process discovery (ABPD)[3] were used. Process mining techniques are often used when no formal description of the process can be obtained by other approaches, or when the quality of existing documentation is questionable.[4] For example, applying process mining methodology to the audit trails of a workflow management system, the transaction logs of an enterprise resource planning system, or the electronic patient records in a hospital can result in models describing the processes of organizations.[5] Event log analysis can also be used to compare event logs with prior model(s) to understand whether the observations conform to a prescriptive or descriptive model. It is required that the event log data be linked to case IDs, activities, and timestamps.[6][7]

Contemporary management trends such as BAM (business activity monitoring), BOM (business operations management), and BPI (business process intelligence) illustrate the interest in supporting diagnosis functionality in the context of business process management technology (e.g., workflow management systems and other process-aware information systems). Process mining is different from mainstream machine learning, data mining, and artificial intelligence techniques. For example, process discovery techniques in the field of process mining try to discover end-to-end process models that are able to describe sequential, choice, concurrent, and loop behavior. Conformance checking techniques are closer to optimization than to traditional learning approaches. However, process mining can be used to generate machine learning, data mining, and artificial intelligence problems. After discovering a process model and aligning the event log, it is possible to create basic supervised and unsupervised learning problems; for example, one can predict the remaining processing time of a running case, or identify the root causes of compliance problems.

The IEEE Task Force on Process Mining was established in October 2009 as part of the IEEE Computational Intelligence Society.[8] This vendor-neutral organization aims to promote the research, development, education, and understanding of process mining; make end-users, developers, consultants, and researchers aware of the state of the art in process mining; promote the use of process mining techniques and tools and stimulate new applications; play a role in standardization efforts for logging event data (e.g., XES); organize tutorials, special sessions, workshops, competitions, and panels; and develop material (papers, books, online courses, movies, etc.) to inform and guide people new to the field. The IEEE Task Force on Process Mining established the International Process Mining Conference (ICPM) series,[9] led the development of the IEEE XES standard for storing and exchanging event data,[10][11] and wrote the Process Mining Manifesto,[12] which has been translated into 16 languages.
The term "process mining" was coined in a research proposal written by the Dutch computer scientistWil van der Aalst.[13]By 1999, this new field of research emerged under the umbrella of techniques related to data science and process science atEindhoven University. In the early days, process mining techniques were often studied with techniques used forworkflow management. In 2000, the first practical algorithm for process discovery, "Alpha miner"was developed. The next year, research papers introduced "Heuristic miner" a much similar algorithm based on heuristics. More powerful algorithms such asinductive minerwere developed for process discovery. 2004 saw the development of "Token-based replay" forconformance checking. Process mining branched out "performance analysis", "decision mining" and "organizational mining" in 2005 and 2006. In 2007, the first commercial process mining company "Futura Pi" was established. In 2009, theIEEE task force on PMgoverning body was formed to oversee the norms and standards related to process mining. Further techniques for conformance checking led in 2010 toalignment-based conformance checking". In 2011, the first process mining book was published. About 30 commercially available process mining tools were available in 2018[citation needed]. There are three categories of process mining techniques. Process mining software helps organizations analyze and visualize their business processes based on data extracted from various sources, such as transaction logs or event data. This software can identify patterns, bottlenecks, and inefficiencies within a process, enabling organizations to improve their operational efficiency, reduce costs, and enhance their customer experience. In 2025,Gartnerlisted 40 tools in its process mining platform review category.[21]
https://en.wikipedia.org/wiki/Process_mining
Incomputational complexity theory,Yao's principle(also calledYao's minimax principleorYao's lemma) relates the performance ofrandomized algorithmsto deterministic (non-random) algorithms. It states that, for certain classes of algorithms, and certain measures of the performance of the algorithms, the following two quantities are equal: Yao's principle is often used to prove limitations on the performance of randomized algorithms, by finding a probability distribution on inputs that is difficult for deterministic algorithms, and inferring that randomized algorithms have the same limitation on their worst case performance.[1] This principle is named afterAndrew Yao, who first proposed it in a 1977 paper.[2]It is closely related to theminimax theoremin the theory ofzero-sum games, and to theduality theory of linear programs. Consider an arbitrary real valued cost measurec(A,x){\displaystyle c(A,x)}of an algorithmA{\displaystyle A}on an inputx{\displaystyle x}, such as its running time, for which we want to study theexpected valueover randomized algorithms and random inputs. Consider, also, afinite setA{\displaystyle {\mathcal {A}}}of deterministic algorithms (made finite, for instance, by limiting the algorithms to a specific input size), and a finite setX{\displaystyle {\mathcal {X}}}of inputs to these algorithms. LetR{\displaystyle {\mathcal {R}}}denote the class of randomized algorithms obtained from probability distributions over the deterministic behaviors inA{\displaystyle {\mathcal {A}}}, and letD{\displaystyle {\mathcal {D}}}denote the class of probability distributions on inputs inX{\displaystyle {\mathcal {X}}}. Then, Yao's principle states that:[1] maxD∈DminA∈AEx∼D[c(A,x)]=minR∈Rmaxx∈XE[c(R,x)].{\displaystyle \max _{D\in {\mathcal {D}}}\min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]=\min _{R\in {\mathcal {R}}}\max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].} Here,E{\displaystyle \mathbb {E} }is notation for the expected value, andx∼D{\displaystyle x\sim D}means thatx{\displaystyle x}is a random variable distributed according toD{\displaystyle D}. Finiteness ofA{\displaystyle {\mathcal {A}}}andX{\displaystyle {\mathcal {X}}}allowsD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}to be interpreted assimplicesofprobability vectors,[3]whosecompactnessimplies that the minima and maxima in these formulas exist.[4] Another version of Yao's principle weakens it from an equality to an inequality, but at the same time generalizes it by relaxing the requirement that the algorithms and inputs come from a finite set. The direction of the inequality allows it to be used when a specific input distribution has been shown to be hard for deterministic algorithms, converting it into alower boundon the cost of all randomized algorithms. In this version, for every inputdistributionD∈D{\displaystyle D\in {\mathcal {D}}},and for every randomizedalgorithmR{\displaystyle R}inR{\displaystyle {\mathcal {R}}},[1]minA∈AEx∼D[c(A,x)]≤maxx∈XE[c(R,x)].{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}That is, the best possible deterministic performance against distributionD{\displaystyle D}is alower boundfor the performance of each randomized algorithmR{\displaystyle R}against its worst-case input. 
This version of Yao's principle can be proven through the chain of inequalitiesminA∈AEx∼D[c(A,x)]≤Ex∼D[c(R,x)]≤maxx∈XE[c(R,x)],{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \mathbb {E} _{x\sim D}[c(R,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)],}each of which can be shown using onlylinearity of expectationand the principle thatmin≤E≤max{\displaystyle \min \leq \mathbb {E} \leq \max }for all distributions. By avoiding maximization and minimization overD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}, this version of Yao's principle can apply in some cases whereX{\displaystyle {\mathcal {X}}}orA{\displaystyle {\mathcal {A}}}are not finite.[5]Although this direction of inequality is the direction needed for proving lower bounds on randomized algorithms, the equality version of Yao's principle, when it is available, can also be useful in these proofs. The equality of the principle implies that there is no loss of generality in using the principle to prove lower bounds: whatever the actual best randomized algorithm might be, there is some input distribution through which a matching lower bound on its complexity can be proven.[6] When the costc{\displaystyle c}denotes the running time of an algorithm, Yao's principle states that the best possible running time of a deterministic algorithm, on a hard input distribution, gives a lower bound for theexpected timeof anyLas Vegas algorithmon its worst-case input. Here, a Las Vegas algorithm is a randomized algorithm whose runtime may vary, but for which the result is always correct.[7][8]For example, this form of Yao's principle has been used to prove the optimality of certainMonte Carlo tree searchalgorithms for the exact evaluation ofgame trees.[8] The time complexity ofcomparison-based sortingandselection algorithmsis often studied using the number of comparisons between pairs of data elements as a proxy for the total time. When these problems are considered over a fixed set of elements, their inputs can be expressed aspermutationsand a deterministic algorithm can be expressed as adecision tree. In this way both the inputs and the algorithms form finite sets as Yao's principle requires. Asymmetrizationargument identifies the hardest input distributions: they are therandom permutations, the distributions onn{\displaystyle n}distinct elements for which allpermutationsare equally likely. This is because, if any other distribution were hardest, averaging it with all permutations of the same hard distribution would be equally hard, and would produce the distribution for a random permutation. 
Yao's principle extends lower bounds on the average-case number of comparisons made by deterministic algorithms, for random permutations, to the worst-case analysis of randomized comparison algorithms.[2] An example given by Yao is the analysis of algorithms for finding the $k$th largest of a given set of $n$ values, the selection problem.[2] Subsequent to Yao's work, Walter Cunto and Ian Munro showed that, for random permutations, any deterministic algorithm must perform at least $n+\min(k,n-k)-O(1)$ expected comparisons.[9] By Yao's principle, the same number of comparisons must be made by randomized algorithms on their worst-case input.[10] The Floyd–Rivest algorithm comes within $O({\sqrt {n\log n}})$ comparisons of this bound.[11]

Another of the original applications by Yao of his principle was to the evasiveness of graph properties, the number of tests of the adjacency of pairs of vertices needed to determine whether a graph has a given property, when the only access to the graph is through such tests.[2] Richard M. Karp conjectured that every randomized algorithm for every nontrivial monotone graph property (a property that remains true for every subgraph of a graph with the property) requires a quadratic number of tests, but only weaker bounds have been proven.[12] As Yao stated, for graph properties that are true of the empty graph but false for some other graph on $n$ vertices with only a bounded number $s$ of edges, a randomized algorithm must probe a quadratic number of pairs of vertices. For instance, for the property of being a planar graph, $s=9$, because the 9-edge utility graph is non-planar. More precisely, Yao states that for these properties, at least $\left({\tfrac {1}{2}}-p\right){\tfrac {1}{s}}{\tbinom {n}{2}}$ tests are needed for a randomized algorithm to have probability at most $p$ of making a mistake. Yao also used this method to show that quadratically many queries are needed for the properties of containing a given tree or clique as a subgraph, of containing a perfect matching, and of containing a Hamiltonian cycle, for small enough constant error probabilities.[2]

In black-box optimization, the problem is to determine the minimum or maximum value of a function, from a given class of functions, accessible only through calls to the function on arguments from some finite domain. In this case, the cost to be optimized is the number of calls. Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems",[13] and several results for randomized search heuristics have been proven in this way.

In communication complexity, an algorithm describes a communication protocol between two or more parties, and its cost may be the number of bits or messages transmitted between the parties. In this case, Yao's principle describes an equality between the average-case complexity of deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.[6][14] An example described by Avi Wigderson (based on a paper by Manu Viola) is the communication complexity for two parties, each holding $n$-bit input values, to determine which value is larger.
For deterministic communication protocols, nothing better than $n$ bits of communication is possible, easily achieved by one party sending their whole input to the other. However, parties with a shared source of randomness and a fixed error probability can exchange 1-bit hash functions of prefixes of the input to perform a noisy binary search for the first position where their inputs differ, achieving $O(\log n)$ bits of communication. This is within a constant factor of optimal, as can be shown via Yao's principle with an input distribution that chooses the position of the first difference uniformly at random, and then chooses random strings for the shared prefix up to that position and the rest of the inputs after that position.[6][15]

Yao's principle has also been applied to the competitive ratio of online algorithms. An online algorithm must respond to a sequence of requests, without knowledge of future requests, incurring some cost or profit per request depending on its choices. The competitive ratio is the ratio of its cost or profit to the value that could be achieved by an offline algorithm with access to knowledge of all future requests, for a worst-case request sequence that causes this ratio to be as far from one as possible. Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure can be formulated as an expected value rather than as the reciprocal of an expected value.[5]

An example given by Borodin & El-Yaniv (2005) concerns page replacement algorithms, which respond to requests for pages of computer memory by using a cache of $k$ pages, for a given parameter $k$. If a request matches a cached page, it costs nothing; otherwise one of the cached pages must be replaced by the requested page, at a cost of one page fault. A difficult distribution of request sequences for this model can be generated by choosing each request uniformly at random from a pool of $k+1$ pages. Any deterministic online algorithm has ${\tfrac {n}{k+1}}$ expected page faults over $n$ requests. Instead, an offline algorithm can divide the request sequence into phases within which only $k$ pages are used, incurring only one fault at the start of a phase to replace the one page that is unused within the phase. As an instance of the coupon collector's problem, the expected number of requests per phase is $(k+1)H_{k}$, where $H_{k}=1+{\tfrac {1}{2}}+\cdots +{\tfrac {1}{k}}$ is the $k$th harmonic number. By renewal theory, the offline algorithm incurs ${\tfrac {n}{(k+1)H_{k}}}+o(n)$ page faults with high probability, so the competitive ratio of any deterministic algorithm against this input distribution is at least $H_{k}$.
By Yao's principle,Hk{\displaystyle H_{k}}also lower bounds the competitive ratio of any randomized page replacement algorithm against a request sequence chosen by anoblivious adversaryto be a worst case for the algorithm but without knowledge of the algorithm's random choices.[16] For online problems in a general class related to theski rental problem, Seiden has proposed a cookbook method for deriving optimally hard input distributions, based on certain parameters of the problem.[17] Yao's principle may be interpreted ingame theoreticterms, via a two-playerzero-sum gamein which one player,Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithmR{\displaystyle R}may be interpreted as a randomized choice among deterministic algorithms, and thus as amixed strategyfor Alice. Similarly, a non-random algorithm may be thought of as apure strategyfor Alice. In any two-player zero-sum game, if one player chooses a mixed strategy, then the other player has an optimal pure strategy against it. By theminimax theoremofJohn von Neumann, there exists a game valuec{\displaystyle c}, and mixed strategies for each player, such that the players can guarantee expected valuec{\displaystyle c}or better by playing those strategies, and such that the optimal pure strategy against either mixed strategy produces expected value exactlyc{\displaystyle c}. Thus, the minimax mixed strategy for Alice, set against the best opposing pure strategy for Bob, produces the same expected game valuec{\displaystyle c}as the minimax mixed strategy for Bob, set against the best opposing pure strategy for Alice. This equality of expected game values, for the game described above, is Yao's principle in its form as an equality.[5]Yao's 1977 paper, originally formulating Yao's principle, proved it in this way.[2] The optimal mixed strategy for Alice (a randomized algorithm) and the optimal mixed strategy for Bob (a hard input distribution) may each be computed using a linear program that has one player's probabilities as its variables, with a constraint on the game value for each choice of the other player. The two linear programs obtained in this way for each player aredual linear programs, whose equality is an instance of linear programming duality.[3]However, although linear programs may be solved inpolynomial time, the numbers of variables and constraints in these linear programs (numbers of possible algorithms and inputs) are typically too large to list explicitly. Therefore, formulating and solving these programs to find these optimal strategies is often impractical.[13][14] ForMonte Carlo algorithms, algorithms that use a fixed amount of computational resources but that may produce an erroneous result, a form of Yao's principle applies to the probability of an error, the error rate of an algorithm. Choosing the hardest possible input distribution, and the algorithm that achieves the lowest error rate against that distribution, gives the same error rate as choosing an optimal algorithm and its worst case input distribution. However, the hard input distributions found in this way are not robust to changes in the parameters used when applying this principle. If an input distribution requires high complexity to achieve a certain error rate, it may nevertheless have unexpectedly low complexity for a different error rate. 
Ben-David and Blais show that, forBoolean functionsunder many natural measures of computational complexity, there exists an input distribution that is simultaneously hard for all error rates.[18] Variants of Yao's principle have also been considered forquantum computing. In place of randomized algorithms, one may consider quantum algorithms that have a good probability of computing the correct value for every input (probability at least23{\displaystyle {\tfrac {2}{3}}}); this condition together withpolynomial timedefines the complexity classBQP. It does not make sense to ask for deterministic quantum algorithms, but instead one may consider algorithms that, for a given input distribution, have probability 1 of computing a correct answer, either in aweaksense that the inputs for which this is true have probability≥23{\displaystyle \geq {\tfrac {2}{3}}}, or in astrongsense in which, in addition, the algorithm must have probability 0 or 1 of generating any particular answer on the remaining inputs. For any Boolean function, the minimum complexity of a quantum algorithm that is correct with probability≥23{\displaystyle \geq {\tfrac {2}{3}}}against its worst-case input is less than or equal to the minimum complexity that can be attained, for a hard input distribution, by the best weak or strong quantum algorithm against that distribution. The weak form of this inequality is within a constant factor of being an equality, but the strong form is not.[19]
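For a small finite game, both sides of Yao's equality can be computed with the linear programs described above. The following Python sketch (using NumPy and SciPy; the 2×2 cost matrix is an arbitrary example, not taken from any cited paper) solves Alice's program for the best randomized algorithm and Bob's program for the hardest input distribution, and checks that the two values agree.

```python
import numpy as np
from scipy.optimize import linprog

# Cost matrix C[a, x]: deterministic algorithm a on input x.
C = np.array([[3.0, 1.0],
              [0.0, 4.0]])
m, n = C.shape

# Alice: distribution p over algorithms minimizing t with
# (C^T p)_x <= t for every input x.  Variables: (p_0..p_{m-1}, t).
res_alg = linprog(
    c=[0.0] * m + [1.0],
    A_ub=np.hstack([C.T, -np.ones((n, 1))]),   # (C^T p)_x - t <= 0
    b_ub=np.zeros(n),
    A_eq=[[1.0] * m + [0.0]],                  # p is a distribution
    b_eq=[1.0],
    bounds=[(0, None)] * m + [(None, None)],
)

# Bob: distribution q over inputs maximizing s with
# (C q)_a >= s for every algorithm a (maximize s = minimize -s).
res_inp = linprog(
    c=[0.0] * n + [-1.0],
    A_ub=np.hstack([-C, np.ones((m, 1))]),     # s - (C q)_a <= 0
    b_ub=np.zeros(m),
    A_eq=[[1.0] * n + [0.0]],                  # q is a distribution
    b_eq=[1.0],
    bounds=[(0, None)] * n + [(None, None)],
)

# Both equal the game value (2.0 for this matrix), as Yao's
# principle and LP duality require.
print(res_alg.fun, -res_inp.fun)
```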
https://en.wikipedia.org/wiki/Randomized_algorithms_as_zero-sum_games
A virtual private server (VPS) or virtual dedicated server (VDS) is a virtual machine sold as a service by an Internet hosting company.[1]

A virtual private server runs its own copy of an operating system (OS), and customers may have superuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes it is functionally equivalent to a dedicated physical server and, being software-defined, can be created and configured more easily. A virtual server costs less than an equivalent physical server. However, as virtual servers share the underlying physical hardware with other VPSes, performance may be lower, depending on the workload of the other executing virtual machines.[1]

The force driving server virtualization is similar to that which led to the development of time-sharing and multiprogramming in the past. Although the resources are still shared, as under the time-sharing model, virtualization provides a higher level of security, dependent on the type of virtualization used, as the individual virtual servers are mostly isolated from each other and may each run their own full-fledged operating system, which can be independently rebooted as a virtual instance. Partitioning a single server so that it appears as multiple servers has been increasingly common on microcomputers since the release of VMware ESX Server in 2001.[2] VMware later replaced ESX Server with VMware ESXi, a more lightweight hypervisor architecture that eliminated the Linux-based Console Operating System (COS) used in the original ESX.[3] The physical server typically runs a hypervisor, which is tasked with creating, releasing, and managing the resources of "guest" operating systems, or virtual machines. These guest operating systems are allocated a share of the resources of the physical server, typically in such a way that the guest is not aware of any physical resources except those allocated to it by the hypervisor. As a VPS runs its own copy of its operating system, customers have superuser-level access to that operating system instance and can install almost any software that runs on the OS; however, due to the number of virtualization clients typically running on a single machine, a VPS generally has limited processor time, RAM, and disk space.[4]

There are several approaches to virtualization. In hardware virtualization, a hypervisor such as the Kernel-based Virtual Machine allows each virtual machine (VM) to run its own independent kernel, providing greater isolation from the host system. By contrast, container-based virtualization, for example OpenVZ, shares the host kernel among multiple containers. This can improve resource efficiency but usually offers less isolation and fewer customization options for each instance.[5]

Many companies offer virtual private server hosting or virtual dedicated server hosting as an extension of web hosting services. There are several challenges to consider when licensing proprietary software in multi-tenant virtual environments. With unmanaged or self-managed hosting, the customer is left to administer their own server instance. Unmetered hosting is generally offered with no limit on the amount of data transferred on a fixed-bandwidth line. Usually, unmetered hosting is offered with 10 Mbit/s, 100 Mbit/s, or 1000 Mbit/s lines (with some as high as 10 Gbit/s). This means that the customer is theoretically able to use approximately 3 TB on a 10 Mbit/s line, or up to approximately 300 TB on a 1000 Mbit/s line, per month, although in practice the values will be significantly less.
On a virtual private server this bandwidth is shared, and a fair usage policy usually applies. Unlimited hosting is also commonly marketed but is generally limited by acceptable usage policies and terms of service. Offers of unlimited disk space and bandwidth are always false due to cost, carrier capacities, and technological boundaries.
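The monthly-transfer figures quoted above follow directly from the line rate. A minimal sketch in Python, assuming a 30-day month and decimal units (1 TB = 10^12 bytes), which are the conventions hosting providers typically use:

# Rough monthly transfer ceiling for an unmetered line.
# Assumes the link runs flat-out around the clock, which real
# fair-usage policies and shared bandwidth make unrealistic.

def monthly_transfer_tb(line_rate_mbit_s: float, days: int = 30) -> float:
    seconds = days * 24 * 3600
    bits = line_rate_mbit_s * 1e6 * seconds
    return bits / 8 / 1e12  # terabytes, decimal units

for rate in (10, 100, 1000):
    print(f"{rate:>5} Mbit/s -> ~{monthly_transfer_tb(rate):.1f} TB/month")

# 10 Mbit/s gives ~3.2 TB and 1000 Mbit/s gives ~324 TB per month,
# matching the "approximately 3 TB ... 300 TB" figures quoted above.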
https://en.wikipedia.org/wiki/Virtual_private_server
Network throughput (or just throughput, when in context) refers to the rate of message delivery over a communication channel in a communication network, such as Ethernet or packet radio. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second (bit/s, sometimes abbreviated bps), and sometimes in packets per second (p/s or pps) or data packets per time slot. The system throughput or aggregate throughput is the sum of the data rates that are delivered over all channels in a network.[1] Throughput represents digital bandwidth consumption. The throughput of a communication system may be affected by various factors, including the limitations of the underlying physical medium, available processing power of the system components, end-user behavior, etc. When taking various protocol overheads into account, the useful rate of the data transfer can be significantly lower than the maximum achievable throughput; the useful part is usually referred to as goodput. Users of telecommunications devices, systems designers, and researchers into communication theory are often interested in knowing the expected performance of a system. From a user perspective, this is often phrased as either "which device will get my data there most effectively for my needs?", or "which device will deliver the most data per unit cost?". Systems designers often select the most effective architecture or design constraints for a system, which drive its final performance. In most cases, the benchmark of what a system is capable of, or its maximum performance, is what the user or designer is interested in. The term maximum throughput is frequently used when discussing end-user maximum throughput tests. Maximum throughput is essentially synonymous with digital bandwidth capacity. Four different values, all relevant in the context of maximum throughput, are used when comparing the upper-limit conceptual performance of multiple systems: maximum theoretical throughput, maximum achievable throughput, peak measured throughput, and maximum sustained throughput. These values represent different qualities, and care must be taken that the same definitions are used when comparing different maximum throughput values. Each bit must carry the same amount of information if throughput values are to be compared. Data compression can significantly alter throughput calculations, including generating values exceeding 100% in some cases. If the communication is mediated by several links in series with different bit rates, the maximum throughput of the overall link is lower than or equal to the lowest bit rate. The link with the lowest value in the series is referred to as the bottleneck. Maximum theoretical throughput is closely related to the channel capacity of the system,[2] and is the maximum possible quantity of data that can be transmitted under ideal circumstances. In some cases, this number is reported as equal to the channel capacity, though this can be deceptive, as only non-packetized systems technologies can achieve this. Maximum theoretical throughput is more accurately reported taking into account format and specification overhead with best-case assumptions. The asymptotic throughput (less formally, asymptotic bandwidth) for a packet-mode communication network is the value of the maximum throughput function when the incoming network load approaches infinity, either due to the message size[3] or the number of data sources.
As with other bit rates and data bandwidths, the asymptotic throughput is measured in bits per second (bit/s) or (rarely) bytes per second (B/s), where 1 B/s is 8 bit/s. Decimal prefixes are used, meaning that 1 Mbit/s is 1000000 bit/s. Asymptotic throughput is usually estimated by sending or simulating a very large message (sequence of data packets) through the network, using a greedy source and no flow control mechanism (i.e., UDP rather than TCP), and measuring the volume of data received at the destination node. Traffic load between other sources may reduce this maximum network path throughput. Alternatively, a large number of sources and sinks may be modeled, with or without flow control, and the aggregate maximum network throughput measured (the sum of traffic reaching its destinations). In a network simulation model with infinitely large packet queues, the asymptotic throughput occurs when the latency (the packet queuing time) goes to infinity, while if the packet queues are limited, or the network is a multi-drop network with many sources and collisions may occur, the packet-dropping rate approaches 100%. A well-known application of asymptotic throughput is in modeling point-to-point communication, where the message latency T(N) is modeled as a function of the message length N as T(N) = (M + N)/A, where A is the asymptotic bandwidth and M is the half-peak length.[4] As well as its use in general network modeling, asymptotic throughput is used in modeling performance on massively parallel computer systems, where system operation is highly dependent on communication overhead as well as processor performance.[5] In these applications, asymptotic throughput is used in models that include the number of processors, so that both the latency and the asymptotic throughput are functions of the number of processors.[6] Where asymptotic throughput is a theoretical or calculated capacity, peak measured throughput is throughput measured on a real implemented system, or on a simulated system. The value is the throughput measured over a short period of time; mathematically, this is the limit taken with respect to throughput as time approaches zero. This term is synonymous with instantaneous throughput. This number is useful for systems that rely on burst data transmission; however, for systems with a high duty cycle it is less likely to be a useful measure of system performance. Maximum sustained throughput is the throughput averaged or integrated over a long time (sometimes considered infinity). For high duty cycle networks this is likely to be the most accurate indicator of system performance. The maximum throughput is defined as the asymptotic throughput when the load (the amount of incoming data) is large. In packet switched systems where the load and the throughput are always equal (where packet loss does not occur), the maximum throughput may be defined as the minimum load in bit/s that causes the delivery time (the latency) to become unstable and increase towards infinity. This value can also be used deceptively in relation to peak measured throughput to conceal packet shaping. Throughput is sometimes normalized and measured in percentage, but normalization may cause confusion regarding what the percentage is related to. Channel utilization, channel efficiency and packet drop rate in percentage are less ambiguous terms.
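The point-to-point latency model above can be turned into numbers directly. A minimal sketch, with the values of the asymptotic bandwidth A and the half-peak length M chosen purely for illustration:

# Latency model T(N) = (M + N) / A: the effective throughput N / T(N)
# approaches the asymptotic bandwidth A as the message length N grows.

A = 1e9   # asymptotic bandwidth, bytes/s (illustrative)
M = 1e6   # half-peak length, bytes (illustrative)

def latency(n: float) -> float:
    return (M + n) / A

for n in (1e4, 1e6, 1e8, 1e10):
    eff = n / latency(n)   # achieved throughput for an N-byte message
    print(f"N = {n:.0e} B: {eff / A:.1%} of asymptotic bandwidth")

# At N = M the model yields exactly half of A, which is why M is
# called the half-peak length.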
The channel efficiency, also known as bandwidth utilization efficiency, is the percentage of the net bit rate (in bit/s) of a digital communication channel that goes to the actually achieved throughput. For example, if the throughput is 70 Mbit/s in a 100 Mbit/s Ethernet connection, the channel efficiency is 70%. In this example, effectively 70 Mbit of data are transmitted every second. Channel utilization is instead a term related to the use of the channel, disregarding the throughput. It counts not only the data bits, but also the overhead that makes use of the channel. The transmission overhead consists of preamble sequences, frame headers and acknowledgement packets. The definitions assume a noiseless channel. Otherwise, the throughput would be associated not only with the nature (efficiency) of the protocol, but also with retransmissions resulting from the quality of the channel. In a simplistic approach, channel efficiency can be equal to channel utilization, assuming that acknowledgement packets are zero-length and that no bandwidth is lost to retransmissions or headers. Therefore, certain texts mark a difference between channel utilization and protocol efficiency. In a point-to-point or point-to-multipoint communication link, where only one terminal is transmitting, the maximum throughput is often equivalent to or very near the physical data rate (the channel capacity), since the channel utilization can be almost 100% in such a network, except for a small inter-frame gap. For example, the maximum frame size in Ethernet is 1526 bytes: up to 1500 bytes for the payload, eight bytes for the preamble, 14 bytes for the header, and 4 bytes for the trailer. An additional minimum interframe gap corresponding to 12 bytes is inserted after each frame. This corresponds to a maximum channel utilization of 1526 / (1526 + 12) × 100% = 99.22%, or a maximum channel use of 99.22 Mbit/s inclusive of Ethernet datalink layer protocol overhead in a 100 Mbit/s Ethernet connection. The maximum throughput or channel efficiency is then 1500 / (1526 + 12) = 97.5%, exclusive of the Ethernet protocol overhead. The throughput of a communication system is limited by a large number of factors. Some of these are described below. The maximum achievable throughput (the channel capacity) is affected by the bandwidth in hertz and the signal-to-noise ratio of the analog physical medium. Despite the conceptual simplicity of digital information, all electrical signals traveling over wires are analog. The analog limitations of wires or wireless systems inevitably provide an upper bound on the amount of information that can be sent. The dominant equation here is the Shannon–Hartley theorem, and analog limitations of this type can be understood as factors that affect either the analog bandwidth of a signal or the signal-to-noise ratio. The bandwidth of wired systems can in fact be surprisingly[according to whom?] narrow, with the bandwidth of Ethernet wire limited to approximately 1 GHz, and PCB traces limited by a similar amount. Digital systems refer to the 'knee frequency',[7] which is related to the time required for the digital voltage to rise from 10% of a nominal digital '0' to 90% of a nominal digital '1', or vice versa.
The knee frequency is related to the required bandwidth of a channel, and can be related to the 3 dB bandwidth of a system by the equation[8] F_3dB ≈ K/T_r, where T_r is the 10% to 90% rise time, and K is a constant of proportionality related to the pulse shape, equal to 0.35 for an exponential rise and 0.338 for a Gaussian rise. Computational systems have finite processing power and can drive finite current. Limited current drive capability can limit the effective signal-to-noise ratio for high-capacitance links. Large data loads that require processing impose data processing requirements on hardware (such as routers). For example, a gateway router supporting a populated class B subnet, handling 10 × 100 Mbit/s Ethernet channels, must examine 16 bits of address to determine the destination port for each packet. This translates into 81913 packets per second (assuming maximum data payload per packet); with a table of 2^16 addresses, this requires the router to be able to perform 5.368 billion lookup operations per second. In a worst-case scenario, where the payloads of each Ethernet packet are reduced to 100 bytes, this number of operations per second jumps to 520 billion. This router would require a multi-teraflop processing core to be able to handle such a load. Ensuring that multiple users can harmoniously share a single communications link requires some kind of equitable sharing of the link. If a bottleneck communication link offering data rate R is shared by N active users (each with at least one data packet in queue), every user typically achieves a throughput of approximately R/N, if fair queuing best-effort communication is assumed. The maximum throughput is often an unreliable measurement of perceived bandwidth, for example the file transmission data rate in bits per second. As pointed out above, the achieved throughput is often lower than the maximum throughput. Also, the protocol overhead affects the perceived bandwidth. The throughput is not a well-defined metric when it comes to how to deal with protocol overhead. It is typically measured at a reference point below the network layer and above the physical layer. The simplest definition is the number of bits per second that are physically delivered. A typical example where this definition is practiced is an Ethernet network. In this case, the maximum throughput is the gross bit rate or raw bit rate. However, in schemes that include forward error correction codes (channel coding), the redundant error code is normally excluded from the throughput. An example is modem communication, where the throughput is typically measured at the interface between the Point-to-Point Protocol (PPP) and the circuit-switched modem connection. In this case, the maximum throughput is often called the net bit rate or useful bit rate. To determine the actual data rate of a network or connection, the "goodput" measurement definition may be used. For example, in file transmission, the "goodput" corresponds to the file size (in bits) divided by the file transmission time. The "goodput" is the amount of useful information that is delivered per second to the application layer protocol. Dropped packets or packet retransmissions, as well as protocol overhead, are excluded. Because of that, the "goodput" is lower than the throughput. Technical factors that affect the difference are presented in the "goodput" article. Often, a block in a data flow diagram has a single input and a single output, and operates on discrete packets of information.
Examples of such blocks arefast Fourier transformmodules orbinary multipliers. Because the units of throughput are the reciprocal of the unit forpropagation delay, which is 'seconds per message' or 'seconds per output', throughput can be used to relate a computational device performing a dedicated function such as anASICorembedded processorto a communications channel, simplifying system analysis. Inwireless networksorcellular systems, thesystem spectral efficiencyin bit/s/Hz/area unit, bit/s/Hz/site or bit/s/Hz/cell, is the maximum system throughput (aggregate throughput) divided by the analog bandwidth and some measure of the system coverage area. Throughput over analog channels is defined entirely by the modulation scheme, the signal-to-noise ratio, and the available bandwidth. Since throughput is normally defined in terms of quantified digital data, the term 'throughput' is not normally used; the term 'bandwidth' is more often used instead.
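Two of the numeric relations in this article are easy to check directly: the knee-frequency rule F_3dB ≈ K/T_r and the Ethernet channel-utilization arithmetic. A minimal sketch, with the 1 ns rise time chosen purely for illustration:

# 1) Knee-frequency rule: F_3dB ≈ K / T_r, with K ≈ 0.35 for an
#    exponential edge and K ≈ 0.338 for a Gaussian edge.
def f_3db_hz(rise_time_s: float, k: float = 0.35) -> float:
    return k / rise_time_s

t_r = 1e-9  # hypothetical 1 ns 10%-90% rise time
print(f"knee frequency: {f_3db_hz(t_r) / 1e6:.0f} MHz")  # ~350 MHz

# 2) Ethernet utilization and efficiency: 1526-byte frame + 12-byte gap.
payload, preamble, header, trailer, gap = 1500, 8, 14, 4, 12
frame = payload + preamble + header + trailer   # 1526 bytes
slot = frame + gap                              # 1538 bytes on the wire
print(f"channel utilization: {frame / slot:.2%}")    # ~99.22%
print(f"channel efficiency:  {payload / slot:.2%}")  # ~97.53%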
https://en.wikipedia.org/wiki/Throughput
Counting is the process of determining the number of elements of a finite set of objects; that is, determining the size of a set. The traditional way of counting consists of continually increasing a (mental or spoken) counter by a unit for every element of the set, in some order, while marking (or displacing) those elements to avoid visiting the same element more than once, until no unmarked elements are left; if the counter was set to one after the first object, the value after visiting the final object gives the desired number of elements. The related term enumeration refers to uniquely identifying the elements of a finite (combinatorial) set or infinite set by assigning a number to each element. Counting sometimes involves numbers other than one; for example, when counting money, counting out change, "counting by twos" (2, 4, 6, 8, 10, 12, ...), or "counting by fives" (5, 10, 15, 20, 25, ...). There is archaeological evidence suggesting that humans have been counting for at least 50,000 years.[1] Counting was primarily used by ancient cultures to keep track of social and economic data such as the number of group members, prey animals, property, or debts (that is, accountancy). Notched bones were also found in the Border Caves in South Africa, which may suggest that the concept of counting was known to humans as far back as 44,000 BCE.[2] The development of counting led to the development of mathematical notation, numeral systems, and writing. Verbal counting involves speaking sequential numbers aloud or mentally to track progress. Generally such counting is done with base 10 numbers: "1, 2, 3, 4", etc. Verbal counting is often used for objects that are currently present rather than for counting things over time, since following an interruption counting must resume from where it was left off, a number that has to be recorded or remembered. Counting a small set of objects, especially over time, can be accomplished efficiently with tally marks: making a mark for each number and then counting all of the marks when done tallying. Tallying is base 1 counting. Finger counting is convenient and common for small numbers. Children count on fingers to facilitate tallying and for performing simple mathematical operations. Older finger counting methods used the four fingers and the three bones in each finger (phalanges) to count to twelve.[3] Other hand-gesture systems are also in use, for example the Chinese system by which one can count to 10 using only the gestures of one hand. With finger binary it is possible to keep a finger count up to 1023 = 2^10 − 1. Various devices can also be used to facilitate counting, such as tally counters and abacuses. Inclusive and exclusive counting are two different methods of counting. For exclusive counting, unit intervals are counted at the end of each interval. For inclusive counting, unit intervals are counted beginning with the start of the first interval and ending with the end of the last interval. This results in a count which is always greater by one when using inclusive counting, as compared to exclusive counting, for the same set. Apparently, the introduction of the number zero to the number line resolved this difficulty; however, inclusive counting is still useful for some things. See also the fencepost error, which is a type of off-by-one error. Modern mathematical English usage has introduced another difficulty, however. Because exclusive counting is generally tacitly assumed, the term "inclusive" is generally used in reference to a set which is actually counted exclusively.
For example: how many numbers are included in the set that ranges from 3 to 8, inclusive? The set is counted exclusively, once the range of the set has been made certain by the use of the word "inclusive". The answer is 6, that is 8 − 3 + 1, where the +1 range adjustment makes the adjusted exclusive count numerically equivalent to an inclusive count, even though the range of the inclusive count does not include the unit interval of the number eight. So it is necessary to discern the difference in usage between the terms "inclusive counting" and "inclusive" or "inclusively", and one must recognize that it is not uncommon for the former term to be loosely used for the latter process. Inclusive counting is usually encountered when dealing with time in Roman calendars and the Romance languages.[4] In the ancient Roman calendar, the nones (meaning "nine") is 8 days before the ides; more generally, dates are specified as inclusively counted days up to the next named day.[4] In the Christian liturgical calendar, Quinquagesima (meaning 50) is 49 days before Easter Sunday. When counting "inclusively", the Sunday (the start day) will be day 1 and therefore the following Sunday will be the eighth day. For example, the French phrase for "fortnight" is quinzaine (15 [days]), and similar words are present in Greek (δεκαπενθήμερο, dekapenthímero), Spanish (quincena) and Portuguese (quinzena). In contrast, the English word "fortnight" itself derives from "a fourteen-night", as the archaic "sennight" does from "a seven-night"; the English words are not examples of inclusive counting. In exclusive counting languages such as English, when counting eight days "from Sunday", Monday will be day 1, Tuesday day 2, and the following Monday will be the eighth day.[citation needed] For many years it was a standard practice in English law for the phrase "from a date" to mean "beginning on the day after that date": this practice is now deprecated because of the high risk of misunderstanding.[5] Similar counting is involved in East Asian age reckoning, in which newborns are considered to be 1 at birth. Musical terminology also uses inclusive counting of intervals between notes of the standard scale: going up one note is a second interval, going up two notes is a third interval, etc., and going up seven notes is an octave. Learning to count is an important educational and developmental milestone in most cultures of the world. Learning to count is a child's very first step into mathematics, and constitutes the most fundamental idea of that discipline. However, some cultures in Amazonia and the Australian Outback do not count,[6][7] and their languages do not have number words. Many children at just 2 years of age have some skill in reciting the count list (that is, saying "one, two, three, ..."). They can also answer questions of ordinality for small numbers, for example, "What comes after three?". They can even be skilled at pointing to each object in a set and reciting the words one after another. This leads many parents and educators to the conclusion that the child knows how to use counting to determine the size of a set.[8] Research suggests that it takes about a year after learning these skills for a child to understand what they mean and why the procedures are performed.[9][10] In the meantime, children learn how to name cardinalities that they can subitize. In mathematics, the essence of counting a set and finding a result n is that it establishes a one-to-one correspondence (or bijection) of the subject set with the subset of positive integers {1, 2, ..., n}.
A fundamental fact, which can be proved bymathematical induction, is that no bijection can exist between {1, 2, ...,n} and {1, 2, ...,m} unlessn=m; this fact (together with the fact that two bijections can becomposedto give another bijection) ensures that counting the same set in different ways can never result in different numbers (unless an error is made). This is the fundamental mathematical theorem that gives counting its purpose; however you count a (finite) set, the answer is the same. In a broader context, the theorem is an example of a theorem in the mathematical field of (finite)combinatorics—hence (finite) combinatorics is sometimes referred to as "the mathematics of counting." Many sets that arise in mathematics do not allow a bijection to be established with {1, 2, ...,n} foranynatural numbern; these are calledinfinite sets, while those sets for which such a bijection does exist (for somen) are calledfinite sets. Infinite sets cannot be counted in the usual sense; for one thing, the mathematical theorems which underlie this usual sense for finite sets are false for infinite sets. Furthermore, different definitions of the concepts in terms of which these theorems are stated, while equivalent for finite sets, are inequivalent in the context of infinite sets. The notion of counting may be extended to them in the sense of establishing (the existence of) a bijection with some well-understood set. For instance, if a set can be brought into bijection with the set of all natural numbers, then it is called "countably infinite." This kind of counting differs in a fundamental way from counting of finite sets, in that adding new elements to a set does not necessarily increase its size, because the possibility of a bijection with the original set is not excluded. For instance, the set of allintegers(including negative numbers) can be brought into bijection with the set of natural numbers, and even seemingly much larger sets like that of all finite sequences of rational numbers are still (only) countably infinite. Nevertheless, there are sets, such as the set ofreal numbers, that can be shown to be "too large" to admit a bijection with the natural numbers, and these sets are called "uncountable". Sets for which there exists a bijection between them are said to have the samecardinality, and in the most general sense counting a set can be taken to mean determining its cardinality. Beyond the cardinalities given by each of the natural numbers, there is an infinite hierarchy of infinite cardinalities, although only very few such cardinalities occur in ordinary mathematics (that is, outsideset theorythat explicitly studies possible cardinalities). Counting, mostly of finite sets, has various applications in mathematics. One important principle is that if two setsXandYhave the same finite number of elements, and a functionf:X→Yis known to beinjective, then it is alsosurjective, and vice versa. A related fact is known as thepigeonhole principle, which states that if two setsXandYhave finite numbers of elementsnandmwithn>m, then any mapf:X→Yisnotinjective (so there exist two distinct elements ofXthatfsends to the same element ofY); this follows from the former principle, since iffwere injective, then so would itsrestrictionto a strict subsetSofXwithmelements, which restriction would then be surjective, contradicting the fact that forxinXoutsideS,f(x) cannot be in the image of the restriction. 
Similar counting arguments can prove the existence of certain objects without explicitly providing an example. In the case of infinite sets this can even apply in situations where it is impossible to give an example.[citation needed] The domain ofenumerative combinatoricsdeals with computing the number of elements of finite sets, without actually counting them; the latter usually being impossible because infinite families of finite sets are considered at once, such as the set ofpermutationsof {1, 2, ...,n} for any natural numbern.
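Two points above lend themselves to a quick check in code: the 8 − 3 + 1 fencepost adjustment for inclusive counting, and the pigeonhole fact that no injective map exists from a larger finite set into a smaller one. A minimal sketch:

from itertools import product

# Inclusive vs. exclusive counting of the range 3..8.
lo, hi = 3, 8
print(hi - lo)                  # 5: exclusive count (unit intervals)
print(hi - lo + 1)              # 6: inclusive count, the fencepost +1
print(len(range(lo, hi + 1)))   # 6: Python's half-open range needs +1 too

# Pigeonhole: every map f from a 4-element set into a 3-element set
# repeats a value, i.e. no injective map exists.
n, m = 4, 3
assert all(len(set(f)) < n for f in product(range(m), repeat=n))
print("no injection from a 4-set into a 3-set")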
https://en.wikipedia.org/wiki/Counting
Keyword spotting (or more simply, word spotting) is a problem that was historically first defined in the context of speech processing.[1][2] In speech processing, keyword spotting deals with the identification of keywords in utterances. Keyword spotting is also defined as a separate, but related, problem in the context of document image processing.[1] In document image processing, keyword spotting is the problem of finding all instances of a query word that exist in a scanned document image, without fully recognizing it. The first works in keyword spotting appeared in the late 1980s.[2] A special case of keyword spotting is wake word (also called hot word) detection, used by personal digital assistants such as Alexa or Siri to activate the dormant speaker; in other words, the device "wakes up" when its name is spoken. In the United States, the National Security Agency has made use of keyword spotting since at least 2006.[3] This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of suspicious keywords. Recordings can be indexed, and analysts can run queries over the database to find conversations of interest. IARPA funded research into keyword spotting in the Babel program. A variety of algorithms have been applied to this task. Keyword spotting in document image processing can be seen as an instance of the more generic problem of content-based image retrieval (CBIR). Given a query, the goal is to retrieve the most relevant instances of words in a collection of scanned documents.[1] The query may be a text string (query-by-string keyword spotting) or a word image (query-by-example keyword spotting).
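As an illustration of the query-by-example setting, a naive sliding-window matcher over per-frame acoustic features. This is only a hedged sketch: real systems use dynamic time warping, HMMs or neural acoustic models, and the feature extraction is assumed to happen elsewhere:

import numpy as np

def spot_keyword(utterance: np.ndarray, query: np.ndarray,
                 threshold: float = 0.9) -> list:
    """Return frame offsets where the keyword template matches.

    utterance: (T, d) per-frame features of the recording
    query:     (q, d) feature template of the spoken keyword
    """
    q = len(query)
    template = query.ravel()
    template = template / np.linalg.norm(template)
    hits = []
    for t in range(len(utterance) - q + 1):
        window = utterance[t:t + q].ravel()
        score = float(window @ template) / (np.linalg.norm(window) + 1e-12)
        if score >= threshold:  # cosine similarity against the template
            hits.append(t)
    return hits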
https://en.wikipedia.org/wiki/Keyword_spotting
In computer engineering, a logic family is one of two related concepts: a family of monolithic digital integrated circuit devices built with the same circuit technology and mutually compatible logic levels and supply voltages, or a circuit design style used to implement digital logic within integrated circuits. Before the widespread use of integrated circuits, various solid-state and vacuum-tube logic systems were used, but these were never as standardized and interoperable as the integrated-circuit devices. The most common logic family in modern semiconductor devices is metal–oxide–semiconductor (MOS) logic, due to low power consumption, small transistor sizes, and high transistor density. The packaged building-block logic families can be divided into categories, introduced here in roughly chronological order. The families RTL, DTL, and ECL were derived from the logic circuits used in early computers, originally implemented using discrete components. One example is the Philips NORBIT family of logic building blocks. The PMOS and I2L logic families were used for relatively short periods, mostly in special-purpose custom large-scale integration circuits, and are generally considered obsolete. For example, early digital clocks or electronic calculators may have used one or more PMOS devices to provide most of the logic for the finished product. The F-14 Central Air Data Computer, Intel 4004, Intel 4040, and Intel 8008 microprocessors and their support chips were PMOS. Of these families, only ECL, TTL, NMOS, CMOS, and BiCMOS are currently still in widespread use. ECL is used for very high-speed applications despite its price and power demands, while NMOS logic is mainly used in VLSI applications such as CPUs and memory chips, which fall outside the scope of this article. Present-day "building block" logic gate ICs are based on the ECL, TTL, CMOS, and BiCMOS families. Resistor–transistor logic (RTL) is a class of digital circuits built using resistors as the input network and bipolar junction transistors (BJTs) as switching devices. The Atanasoff–Berry Computer used resistor-coupled vacuum tube logic circuits similar to RTL. Several early transistorized computers (e.g., IBM 1620, 1959) used RTL, where it was implemented using discrete components. A family of simple resistor–transistor logic integrated circuits was developed at Fairchild Semiconductor for the Apollo Guidance Computer in 1962. Texas Instruments soon introduced its own family of RTL. A variant with integrated capacitors, RCTL, had increased speed but lower immunity to noise than RTL. This was made by Texas Instruments as their "51XX" series. Diode–transistor logic (DTL) is a class of digital circuits in which the logic gating function (e.g., AND) is performed by a diode network and the amplifying function is performed by a transistor. Diode logic was used with vacuum tubes in the earliest electronic computers in the 1940s, including ENIAC. Diode–transistor logic was used in the IBM 608, which was the first all-transistorized computer. Early transistorized computers were implemented using discrete transistors, resistors, diodes and capacitors. The first diode–transistor logic family of integrated circuits was introduced by Signetics in 1962. DTL was also made by Fairchild and Westinghouse. A family of diode logic and diode–transistor logic integrated circuits was developed by Texas Instruments for the D-37C Minuteman II Guidance Computer in 1962, but these devices were not available to the public. A variant of DTL called "high-threshold logic" incorporated Zener diodes to create a large offset between logic 1 and logic 0 voltage levels.
These devices usually ran off a 15-volt power supply and were found in industrial control, where the high differential was intended to minimize the effect of noise.[3] P-type MOS (PMOS) logic uses p-channel MOSFETs to implement logic gates and other digital circuits. N-type MOS (NMOS) logic uses n-channel MOSFETs to implement logic gates and other digital circuits. For devices of equal current driving capability, n-channel MOSFETs can be made smaller than p-channel MOSFETs, due to p-channel charge carriers (holes) having lower mobility than n-channel charge carriers (electrons); also, producing only one type of MOSFET on a silicon substrate is cheaper and technically simpler. These were the driving principles in the design of NMOS logic, which uses n-channel MOSFETs exclusively. However, neglecting leakage current, NMOS logic consumes power even when no switching is taking place, unlike CMOS logic. The MOSFET, invented at Bell Labs between 1955 and 1960, had both pMOS and nMOS devices with a 20 μm process.[4][5][6][7][8] The original MOSFET devices had a gate length of 20 μm and a gate oxide thickness of 100 nm.[9] However, the nMOS devices were impractical, and only the pMOS type were practical working devices.[8] A more practical NMOS process was developed several years later. NMOS was initially faster than CMOS, thus NMOS was more widely used for computers in the 1970s.[10] With advances in technology, CMOS logic displaced NMOS logic in the mid-1980s to become the preferred process for digital chips. ECL uses an overdriven bipolar junction transistor (BJT) differential amplifier with single-ended input and limited emitter current. The ECL family, also known as current-mode logic (CML), was invented by IBM as current steering logic for use in the transistorized IBM 7030 Stretch computer, where it was implemented using discrete components. The first ECL logic family to be available in integrated circuits was introduced by Motorola as MECL in 1962.[11] In TTL logic, bipolar junction transistors (BJTs) perform the logic and amplifying functions. The first transistor–transistor logic family of integrated circuits was introduced by Sylvania as Sylvania Universal High-Level Logic (SUHL) in 1963. Texas Instruments introduced the 7400 series TTL family in 1964. Transistor–transistor logic uses bipolar transistors to form its integrated circuits.[12] TTL has changed significantly over the years, with newer versions replacing the older types. Since the transistors of a standard TTL gate are saturated switches, minority carrier storage time in each junction limits the switching speed of the device. Variations on the basic TTL design are intended to reduce these effects and improve speed, power consumption, or both. The German physicist Walter H. Schottky formulated a theory predicting the Schottky effect, which led to the Schottky diode and later Schottky transistors. For the same power dissipation, Schottky transistors have a faster switching speed than conventional transistors because the Schottky diode prevents the transistor from saturating and storing charge; see Baker clamp. Logic gates built with Schottky transistors switch faster than TTL gates built with ordinary BJTs but consume more power. With Low-power Schottky (LS), internal resistance values were increased to reduce power consumption and increase switching speed over the original version. The introduction of Advanced Low-power Schottky (ALS) further increased speed and reduced power consumption.
A faster logic family called FAST (Fairchild Advanced Schottky TTL, designator F) was also introduced that was faster than the original Schottky TTL. CMOS logic gates use complementary arrangements of enhancement-mode n-channel and p-channel field-effect transistors. Since the initial devices used oxide-isolated metal gates, they were called CMOS (complementary metal–oxide–semiconductor logic). In contrast to TTL, CMOS uses almost no power in the static state (that is, when inputs are not changing). A CMOS gate draws no current other than leakage when in a steady 1 or 0 state. When the gate switches states, current is drawn from the power supply to charge the capacitance at the output of the gate. This means that the current draw of CMOS devices increases with switching rate (controlled by clock speed, typically). The first CMOS family of logic integrated circuits was introduced by RCA as CD4000 COS/MOS, the 4000 series, in 1968. Initially CMOS logic was slower than LS-TTL. However, because the logic thresholds of CMOS were proportional to the power supply voltage, CMOS devices were well adapted to battery-operated systems with simple power supplies. CMOS gates can also tolerate much wider voltage ranges than TTL gates because the logic thresholds are (approximately) proportional to power supply voltage, rather than the fixed levels required by bipolar circuits. The silicon area required to implement such digital CMOS functions has rapidly shrunk. VLSI technology, incorporating millions of basic logic operations onto one chip, almost exclusively uses CMOS. The extremely small capacitance of the on-chip wiring caused an increase in performance by several orders of magnitude. On-chip clock rates as high as 4 GHz have become common, approximately 1000 times faster than the technology of 1970. CMOS chips often work with a broader range of power supply voltages than other logic families. Early TTL ICs required a power supply voltage of 5 V, but early CMOS could use 3 to 15 V.[13] Lowering the supply voltage reduces the charge stored on any capacitances and consequently reduces the energy required for a logic transition. Reduced energy implies less heat dissipation. The energy stored in a capacitance C switched through V volts is ½CV². By lowering the power supply from 5 V to 3.3 V, switching power was reduced by almost 60 percent (power dissipation is proportional to the square of the supply voltage). Many motherboards have a voltage regulator module to provide the even lower power supply voltages required by many CPUs. Because of the incompatibility of the CD4000 series of chips with the previous TTL family, a new standard emerged which combined the best of the TTL family with the advantages of the CD4000 family: the 74HC family, which used power supplies anywhere from 3.3 V to 5 V (with logic levels relative to the power supply), alongside devices that used 5 V power supplies and TTL logic levels. Interconnecting any two logic families often required special techniques such as additional pull-up resistors, or purpose-built interface circuits, since the logic families may use different voltage levels to represent 1 and 0 states, and may have other interface requirements only met within the logic family. TTL logic levels are different from those of CMOS; generally a TTL output does not rise high enough to be reliably recognized as a logic 1 by a CMOS input. This problem was solved by the invention of the 74HCT family of devices that uses CMOS technology but TTL input logic levels.
These devices only work with a 5 V power supply. They form a replacement for TTL, although HCT is slower than original TTL (HC logic has about the same speed as original TTL). Other CMOS circuit families within integrated circuits include cascode voltage switch logic (CVSL) and pass transistor logic (PTL) of various sorts. These are generally used "on-chip" and are not delivered as building-block medium-scale or small-scale integrated circuits.[14][15] One major improvement was to combine CMOS inputs and TTL drivers to form a new type of logic device called BiCMOS logic, of which the LVT and ALVT logic families are the most important. The BiCMOS family has many members, including ABT logic, ALB logic, ALVT logic, BCT logic and LVT logic. With HC and HCT logic and LS-TTL logic competing in the market, it became clear that further improvements were needed to create the ideal logic device that combined high speed with low power dissipation and compatibility with older logic families. A whole range of newer families has emerged that use CMOS technology. Among the most important family designators of these newer devices are AC/ACT logic, AHC/AHCT logic, ALVC logic, AUC logic, AVC logic, CBT logic, CBTLV logic, FCT logic and LVC logic (LVCMOS). Integrated injection logic (IIL or I2L) uses bipolar transistors in a current-steering arrangement to implement logic functions.[16] It was used in some integrated circuits, but it is now considered obsolete.[17] The following logic families would either have been used to build up systems from functional blocks such as flip-flops, counters, and gates, or else would be used as "glue" logic to interconnect very-large-scale integration devices such as memory and processors. Not shown are some early obscure logic families from the early 1960s, such as DCTL (direct-coupled transistor logic), which did not become widely available. Propagation delay is the time taken for a two-input NAND gate to produce a result after a change of state at its inputs. Toggle speed represents the fastest speed at which a J-K flip-flop can operate. Power per gate is for an individual 2-input NAND gate; usually there would be more than one gate per IC package. Values are very typical and would vary slightly depending on application conditions, manufacturer, temperature, and particular type of logic circuit. Introduction year is when at least some of the devices of the family were available in volume for civilian uses. Some military applications pre-dated civilian use.[18][19] Several techniques and design styles are primarily used in designing large single-chip application-specific integrated circuits (ASICs) and CPUs, rather than generic logic families intended for use in multi-chip applications. These design styles can typically be divided into two main categories, static techniques and clocked dynamic techniques (see static versus dynamic logic for some discussion of the advantages and disadvantages of each category).
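As a worked check of the supply-voltage scaling discussed earlier in this article, with a hypothetical 10 pF of switched capacitance:

# Energy per transition E = 1/2 * C * V^2; dynamic power scales with
# the square of the supply voltage (times the switching frequency).
C = 10e-12  # switched capacitance in farads (illustrative value)

def switching_energy_j(volts: float) -> float:
    return 0.5 * C * volts ** 2

e5 = switching_energy_j(5.0)
e33 = switching_energy_j(3.3)
print(f"5.0 V: {e5 * 1e12:.1f} pJ, 3.3 V: {e33 * 1e12:.1f} pJ")
print(f"reduction: {1 - e33 / e5:.0%}")  # ~56%, the "almost 60 percent" above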
https://en.wikipedia.org/wiki/Logic_family
Locksmithing is the work of creating and bypassing locks. Locksmithing is a traditional trade and in many countries requires completion of an apprenticeship. The level of formal education legally required varies by country, ranging from no formal education to a training certificate awarded by an employer, or a full diploma from an engineering college, along with time spent as an apprentice. A lock is a mechanism that secures buildings, rooms, cabinets, objects, or other storage facilities. A "smith" is a metalworker who shapes metal pieces, often using a forge or mould, into useful objects or parts of a more complex structure. Thus locksmithing, as its name implies, is the assembly and design of locks and their respective keys by hand. Most locksmiths use both automatic and manual cutting tools to mold keys, with many of these tools being powered by batteries or mains electricity. Locks have been constructed for over 2500 years, initially out of wood and later out of metal.[1] Historically, locksmiths would make the entire lock, working for hours hand-cutting screws and doing much file-work. Lock designs became significantly more complicated in the 18th century, and locksmiths often specialized in repairing or designing locks. Although replacing lost keys for automobiles and homes, as well as rekeying locks for security purposes, remains an important part of locksmithing, a 1976 US Government publication noted that modern locksmiths are primarily involved in installing high-quality lock-sets and managing keying and key control systems. Most locksmiths also provide electronic lock services, such as programming smart keys for transponder-equipped vehicles and implementing access control systems to protect individuals and assets for large institutions.[2] Many also specialise in other areas. In Australia, prospective locksmiths are required to take a Technical and Further Education (TAFE) course in locksmithing, completion of which leads to issuance of a Level 3 Australian Qualifications Framework certificate, and to complete an apprenticeship. They must also pass a criminal records check certifying that they are not currently wanted by the police. Apprenticeships can last one to four years. Course requirements are variable: there is a minimal-requirements version that requires fewer total training units, and a fuller version that teaches more advanced skills but takes more time to complete. Apprenticeship and course availability vary by state or territory.[3] In Ireland, licensing for locksmiths was introduced in 2016,[4] with locksmiths having to obtain a Private Security Authority license. The Irish Locksmith Organisation has 50 members, with ongoing training to ensure all members are up to date with knowledge and skills. In the UK, there is no current government regulation for locksmithing, so effectively anyone can trade and operate as a locksmith with no skill or knowledge of the industry.[5] Fifteen states in the United States require licensure for locksmiths. Nassau County and New York City in New York State, and Hillsborough County and Miami-Dade County in Florida, have their own licensing laws.[6] State and local laws are described in the table below.
15 states require locksmith licensing: Alabama, California, Connecticut, Illinois, Louisiana, Maryland, Nebraska, New Jersey, Nevada, North Carolina, Oklahoma, Oregon, Tennessee, Texas and Virginia. Locksmiths may be commercial (working out of a storefront), mobile (working out of a vehicle), institutional (employed by an institution) or investigatory (forensic locksmiths), or may specialize in one aspect of the skill, such as an automotive lock specialist, a master key system specialist or a safe technician.[2] Many locksmiths also work as security consultants, but not all security consultants possess locksmithing skills. Locksmiths are frequently certified in specific skill areas or to a level of skill within the trade. This is separate from certificates of completion of training courses. In determining skill levels, certifications from manufacturers or locksmith associations are usually more valid criteria than certificates of completion. Some locksmiths decide to call themselves "Master Locksmiths" whether they are fully trained or not, and some training certificates appear quite authoritative. The majority of locksmiths also work on any existing door hardware, not just locking mechanisms. This includes door closers, door hinges, electric strikes, frame repairs and other door hardware. The issue of full disclosure was first raised in the context of locksmithing, in a 19th-century controversy regarding whether weaknesses in lock systems should be kept secret within the locksmithing community or revealed to the public. According to A. C. Hobbs: A commercial, and in some respects a social doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery. Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock, let it have been made in whatever country, or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance. It cannot be too earnestly urged that an acquaintance with real facts will, in the end, be better for all parties. Some time ago, when the reading public was alarmed at being told how London milk is adulterated, timid persons deprecated the exposure, on the plea that it would give instructions in the art of adulterating milk; a vain fear, milkmen knew all about it before, whether they practised it or not; and the exposure only taught purchasers the necessity of a little scrutiny and caution, leaving them to obey this necessity or not, as they pleased.
https://en.wikipedia.org/wiki/Locksmith
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution[1]) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form p_i ∝ exp(−ε_i/(kT)), where p_i is the probability of the system being in state i, exp is the exponential function, ε_i is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and the thermodynamic temperature T. The symbol ∝ denotes proportionality (see § The distribution for the proportionality constant). The term system here has a wide meaning; it can range from a collection of a 'sufficient number' of atoms or a single atom[1] to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied. The ratio of probabilities of two states is known as the Boltzmann factor and characteristically depends only on the states' energy difference: p_i/p_j = exp((ε_j − ε_i)/(kT)). The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium.[2] Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium".[3] The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.[4] The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy,[5] while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution. The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied.[6] It is given as p_i = (1/Q) exp(−ε_i/(kT)) = exp(−ε_i/(kT)) / Σ_{j=1}^{M} exp(−ε_j/(kT)), where p_i is the probability of state i, ε_i the energy of state i, k the Boltzmann constant, T the absolute temperature of the system, M the number of states accessible to the system, and Q the normalization denominator, the canonical partition function Q = Σ_{j=1}^{M} exp(−ε_j/(kT)), which results from the constraint that the probabilities of all accessible states must add up to 1. Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy S(p_1, p_2, ..., p_M) = −Σ_{i=1}^{M} p_i log₂ p_i subject to the normalization constraint that Σ p_i = 1 and the constraint that Σ p_i ε_i equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or maximum of the energies ε_i. In these cases, the entropy-maximizing distribution is a limit of Boltzmann distributions where T approaches zero from above or below, respectively.) The partition function can be calculated if we know the energies of the states accessible to the system of interest.
For atoms the partition function values can be found in the NIST Atomic Spectra Database.[7] The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy. It can also give us the quantitative relationship between the probabilities of two states being occupied. The ratio of probabilities for states i and j is given as p_i/p_j = exp((ε_j − ε_i)/(kT)), where p_i and p_j are the probabilities of states i and j, and ε_i and ε_j are their energies. The corresponding ratio of populations of energy levels must also take their degeneracies into account. The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is, the fraction of particles that occupy state i: p_i = N_i/N, where N_i is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is[5] N_i/N = exp(−ε_i/(kT)) / Σ_{j=1}^{M} exp(−ε_j/(kT)). This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another.[5][8] In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state.[9] This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition. The softmax function commonly used in machine learning is related to the Boltzmann distribution: the Boltzmann distribution over states is the softmax of the scaled negative energies −ε_i/(kT). A distribution of a related, more general form is called the generalized Boltzmann distribution by some authors.[10] The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations.[10][11] The generalized Boltzmann distribution has several notable properties. The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble.
Several special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects. Although these cases have strong similarities, it is helpful to distinguish them, as they generalize in different ways when the crucial assumptions are changed. The Boltzmann distribution can be introduced to allocate permits in emissions trading.[13][14] The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries. The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.[15]
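Numerically, the distribution is just a softmax of the scaled negative energies, echoing the machine-learning connection noted above. A minimal sketch for a hypothetical three-level system:

import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann(energies_j: np.ndarray, temperature_k: float) -> np.ndarray:
    """p_i = exp(-e_i / kT) / Q, with Q the partition function."""
    weights = np.exp(-energies_j / (K_B * temperature_k))
    return weights / weights.sum()   # dividing by Q normalizes to 1

# Hypothetical levels at 0, 1 and 2 units of kT (T = 300 K).
kT = K_B * 300.0
p = boltzmann(np.array([0.0, 1.0, 2.0]) * kT, 300.0)
print(p)             # lower-energy states are always more probable
print(p[0] / p[1])   # equals exp((e_1 - e_0)/kT) = e ≈ 2.718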
https://en.wikipedia.org/wiki/Boltzmann_distribution
These datasets are used in machine learning (ML) research and have been cited in peer-reviewed academic journals. Datasets are an integral part of the field of machine learning. Major advances in this field can result from advances in learning algorithms (such as deep learning), computer hardware, and, less intuitively, the availability of high-quality training datasets.[1] High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce.[2][3][4] Many organizations, including governments, publish and share their datasets. The datasets are classified, based on their licenses, as open data and non-open data. The datasets from various governmental bodies are presented in List of open government data sites. The datasets are hosted on open data portals, where they are made available for searching, depositing and accessing through interfaces like Open API. The datasets are made available as various sorted types and subtypes. A data portal is classified based on its type of license: portals based on open source licenses are known as open data portals and are used by many government organizations and academic institutions. A data portal sometimes lists a wide variety of subtypes of datasets pertaining to many machine learning applications. The data portals which are suitable for a specific subtype of machine learning application are listed in the subsequent sections. These datasets consist primarily of text for tasks such as natural language processing, sentiment analysis, translation, and cluster analysis. These datasets consist of sounds and sound features used for tasks such as speech recognition and speech synthesis. Datasets containing electric signal information require some sort of signal processing for further analysis. Datasets from physical systems. Datasets from biological systems. This section includes datasets that deal with structured data. Further details are provided in the project's GitHub repository and the respective Hugging Face dataset card. This section includes datasets that contain multi-turn text with at least two actors, a "user" and an "agent". The user makes requests for the agent, which performs the request. Taskmaster-2: 17,289 dialogs in seven domains (restaurants, food ordering, movies, hotels, flights, music and sports). Taskmaster-3: 23,757 movie ticketing dialogs. Taskmaster-3 fields: conversation id, utterances, vertical, scenario, instructions. For further details check the project's GitHub repository or the Hugging Face dataset cards (taskmaster-1, taskmaster-2, taskmaster-3). Additionally, each task contains a task definition. Further information is provided in the GitHub repository of the project and the Hugging Face data card. The dataset can be downloaded here, and the rejected data here. The scripts to process the data are available in the GitHub repo mentioned in the paper: https://github.com/google-research/FLAN/tree/main/flan. Another FLAN GitHub repo was created as well; this is the one associated with the dataset card on Hugging Face.
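Many of the corpora listed here are mirrored on the Hugging Face Hub and can be pulled with the datasets library. A hedged sketch; "some-org/some-dataset" is a placeholder identifier, not a name taken from this list:

# Requires: pip install datasets
from datasets import load_dataset

# Substitute the id shown on the dataset card you actually want;
# this identifier is hypothetical.
ds = load_dataset("some-org/some-dataset", split="train")
print(ds)       # row count and column names
print(ds[0])    # first example as a plain Python dict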
As datasets come in myriad formats and can sometimes be difficult to use, there has been considerable work put into curating and standardizing the format of datasets to make them easier to use for machine learning research.
https://en.wikipedia.org/wiki/List_of_datasets_for_machine_learning_research
Paraconsistent logic is a type of non-classical logic that allows for the coexistence of contradictory statements without leading to a logical explosion in which anything can be proven true. Specifically, paraconsistent logic is the subfield of logic that is concerned with studying and developing "inconsistency-tolerant" systems of logic, purposefully excluding the principle of explosion. Inconsistency-tolerant logics have been discussed since at least 1910 (and arguably much earlier, for example in the writings of Aristotle);[1] however, the term paraconsistent ("beside the consistent") was first coined in 1976, by the Peruvian philosopher Francisco Miró Quesada Cantuarias.[2] The study of paraconsistent logic has been dubbed paraconsistency,[3] which encompasses the school of dialetheism. In classical logic (as well as intuitionistic logic and most other logics), contradictions entail everything. This feature, known as the principle of explosion or ex contradictione sequitur quodlibet (Latin, "from a contradiction, anything follows"),[4] can be expressed formally as P, ¬P ⊢ A, which means: if P and its negation ¬P are both assumed to be true, then of the two claims P and (some arbitrary) A, at least one is true; therefore, P or A is true. However, if we know that either P or A is true, and also that P is false (that ¬P is true), we can conclude that A, which could be anything, is true. Thus if a theory contains a single inconsistency, the theory is trivial – that is, it has every sentence as a theorem. The characteristic or defining feature of a paraconsistent logic is that it rejects the principle of explosion. As a result, paraconsistent logics, unlike classical and other logics, can be used to formalize inconsistent but non-trivial theories. The entailment relations of paraconsistent logics are propositionally weaker than classical logic; that is, they deem fewer propositional inferences valid. A paraconsistent logic can never be a propositional extension of classical logic, that is, it cannot propositionally validate every entailment that classical logic does. In some sense, then, paraconsistent logic is more conservative or cautious than classical logic. It is due to such conservativeness that paraconsistent languages can be more expressive than their classical counterparts, including the hierarchy of metalanguages due to Alfred Tarski and others. According to Solomon Feferman: "natural language abounds with directly or indirectly self-referential yet apparently harmless expressions—all of which are excluded from the Tarskian framework."[5] This expressive limitation can be overcome in paraconsistent logic. A primary motivation for paraconsistent logic is the conviction that it ought to be possible to reason with inconsistent information in a controlled and discriminating way. The principle of explosion precludes this, and so must be abandoned. In non-paraconsistent logics, there is only one inconsistent theory: the trivial theory that has every sentence as a theorem. Paraconsistent logic makes it possible to distinguish between inconsistent theories and to reason with them. Research into paraconsistent logic has also led to the establishment of the philosophical school of dialetheism (most notably advocated by Graham Priest), which asserts that true contradictions exist in reality, for example groups of people holding opposing views on various moral issues.[6] Being a dialetheist rationally commits one to some form of paraconsistent logic, on pain of otherwise embracing trivialism, i.e.
accepting that all contradictions (and equivalently all statements) are true.[7] However, the study of paraconsistent logics does not necessarily entail a dialetheist viewpoint. For example, one need not commit to either the existence of true theories or true contradictions, but would rather prefer a weaker standard like empirical adequacy, as proposed by Bas van Fraassen.[8] In classical logic, Aristotle's three laws, namely, the excluded middle (p ∨ ¬p), non-contradiction ¬(p ∧ ¬p) and identity (p iff p), are regarded as the same, due to the inter-definition of the connectives. Moreover, traditionally contradictoriness (the presence of contradictions in a theory or in a body of knowledge) and triviality (the fact that such a theory entails all possible consequences) are assumed inseparable, granted that negation is available. These views may be philosophically challenged, precisely on the grounds that they fail to distinguish between contradictoriness and other forms of inconsistency. On the other hand, it is possible to derive triviality from the 'conflict' between consistency and contradictions, once these notions have been properly distinguished. The very notions of consistency and inconsistency may furthermore be internalized at the object language level. Paraconsistency involves tradeoffs. In particular, abandoning the principle of explosion requires one to abandon at least one of the following two principles: disjunction introduction (A ⊢ A ∨ B) and disjunctive syllogism (A ∨ B, ¬A ⊢ B).[9] Both of these principles have been challenged. One approach is to reject disjunction introduction but keep disjunctive syllogism and transitivity. In this approach, rules of natural deduction hold, except for disjunction introduction and excluded middle; moreover, inference A ⊢ B does not necessarily mean entailment A ⇒ B. Also, the following usual Boolean properties hold: double negation as well as associativity, commutativity, distributivity, De Morgan, and idempotence inferences (for conjunction and disjunction). Furthermore, inconsistency-robust proof of negation holds for entailment: (A ⇒ (B ∧ ¬B)) ⊢ ¬A. Another approach is to reject disjunctive syllogism. From the perspective of dialetheism, it makes perfect sense that disjunctive syllogism should fail. The idea behind this syllogism is that, if ¬A, then A is excluded and B can be inferred from A ∨ B. However, if A may hold as well as ¬A, then the argument for the inference is weakened. Yet another approach is to do both simultaneously. In many systems of relevant logic, as well as linear logic, there are two separate disjunctive connectives. One allows disjunction introduction, and one allows disjunctive syllogism. Of course, this has the disadvantages entailed by separate disjunctive connectives, including confusion between them and complexity in relating them. Furthermore, the rule of proof of negation (if A ⊢ B ∧ ¬B, then ⊢ ¬A) just by itself is inconsistency non-robust in the sense that the negation of every proposition can be proved from a contradiction. Strictly speaking, having just the rule above is paraconsistent because it is not the case that every proposition can be proved from a contradiction. However, if the rule double negation elimination (¬¬A ⊢ A) is added as well, then every proposition can be proved from a contradiction. Double negation elimination does not hold for intuitionistic logic.
One example of paraconsistent logic is the system known as LP ("Logic of Paradox"), first proposed by the Argentinian logician Florencio González Asenjo in 1966 and later popularized by Priest and others.[10] One way of presenting the semantics for LP is to replace the usual functional valuation with a relational one.[11] The binary relation V relates a formula to a truth value: V(A, 1) means that A is true, and V(A, 0) means that A is false. A formula must be assigned at least one truth value, but there is no requirement that it be assigned at most one truth value. The semantic clauses for negation and disjunction are given as follows: V(¬A, 1) if and only if V(A, 0); V(¬A, 0) if and only if V(A, 1); V(A ∨ B, 1) if and only if V(A, 1) or V(B, 1); V(A ∨ B, 0) if and only if V(A, 0) and V(B, 0). (The other logical connectives are defined in terms of negation and disjunction as usual.) Or to put the same point less symbolically: a negation is true exactly when the negated formula is false, and false exactly when it is true; a disjunction is true exactly when some disjunct is true, and false exactly when both disjuncts are false. (Semantic) logical consequence is then defined as truth-preservation: a conclusion follows from a set of premises just in case the conclusion is true on every valuation on which all the premises are true. Now consider a valuation V such that V(A, 1) and V(A, 0) but it is not the case that V(B, 1). It is easy to check that this valuation constitutes a counterexample to both explosion and disjunctive syllogism. However, it is also a counterexample to modus ponens for the material conditional of LP. For this reason, proponents of LP usually advocate expanding the system to include a stronger conditional connective that is not definable in terms of negation and disjunction.[12] As one can verify, LP preserves most other inference patterns that one would expect to be valid, such as De Morgan's laws and the usual introduction and elimination rules for negation, conjunction, and disjunction. Surprisingly, the logical truths (or tautologies) of LP are precisely those of classical propositional logic.[13] (LP and classical logic differ only in the inferences they deem valid.) Relaxing the requirement that every formula be either true or false yields the weaker paraconsistent logic commonly known as first-degree entailment (FDE). Unlike LP, FDE contains no logical truths. LP is only one of many paraconsistent logics that have been proposed.[14] It is presented here merely as an illustration of how a paraconsistent logic can work. One important type of paraconsistent logic is relevance logic. A logic is relevant if it satisfies the following condition: if A → B is a theorem, then A and B share a non-logical constant. It follows that a relevance logic cannot have (p ∧ ¬p) → q as a theorem, and thus (on reasonable assumptions) cannot validate the inference from {p, ¬p} to q. Paraconsistent logic has significant overlap with many-valued logic; however, not all paraconsistent logics are many-valued (and, of course, not all many-valued logics are paraconsistent). Dialetheic logics, which are also many-valued, are paraconsistent, but the converse does not hold. The ideal 3-valued paraconsistent logic given below becomes the logic RM3 when the contrapositive is added. Intuitionistic logic allows A ∨ ¬A not to be equivalent to true, while paraconsistent logic allows A ∧ ¬A not to be equivalent to false. Thus it seems natural to regard paraconsistent logic as the "dual" of intuitionistic logic. However, intuitionistic logic is a specific logical system whereas paraconsistent logic encompasses a large class of systems.
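The relational LP semantics described above is small enough to implement directly. The following sketch (the names and encoding are ours) represents each formula's value as the set of classical truth values it relates to, and reproduces the counterexample valuation from the text:

```python
def neg(v):
    """V(¬A,1) iff V(A,0); V(¬A,0) iff V(A,1)."""
    return {1 - x for x in v}

def disj(v, w):
    """True iff some disjunct is true; false iff both disjuncts are false."""
    out = set()
    if 1 in v or 1 in w:
        out.add(1)
    if 0 in v and 0 in w:
        out.add(0)
    return out

def conj(v, w):
    return neg(disj(neg(v), neg(w)))   # A ∧ B := ¬(¬A ∨ ¬B)

A, B = {0, 1}, {0}   # A is both true and false; B is false only

# Explosion fails: the premise A ∧ ¬A is true, yet B is not.
print(1 in conj(A, neg(A)), 1 in B)          # True False
# Disjunctive syllogism fails: A ∨ B and ¬A are true, yet B is not.
print(1 in disj(A, B), 1 in neg(A), 1 in B)  # True True False
```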
Accordingly, the dual notion to paraconsistency is called paracompleteness, and the "dual" of intuitionistic logic (a specific paracomplete logic) is a specific paraconsistent system called anti-intuitionistic or dual-intuitionistic logic (sometimes referred to as Brazilian logic, for historical reasons).[15] The duality between the two systems is best seen within a sequent calculus framework. While in intuitionistic logic the sequent ⊢ A ∨ ¬A is not derivable, in dual-intuitionistic logic the sequent A ∧ ¬A ⊢ is not derivable[citation needed]. Similarly, in intuitionistic logic the sequent ¬¬A ⊢ A is not derivable, while in dual-intuitionistic logic the sequent A ⊢ ¬¬A is not derivable. Dual-intuitionistic logic contains a connective # known as pseudo-difference which is the dual of intuitionistic implication. Very loosely, A # B can be read as "A but not B". However, # is not truth-functional as one might expect a 'but not' operator to be; similarly, the intuitionistic implication operator cannot be treated like "¬(A ∧ ¬B)". Dual-intuitionistic logic also features a basic connective ⊤ which is the dual of intuitionistic ⊥: negation may be defined as ¬A = (⊤ # A). A full account of the duality between paraconsistent and intuitionistic logic, including an explanation of why dual-intuitionistic and paraconsistent logics do not coincide, can be found in Brunner and Carnielli (2005). These other logics avoid explosion: implicational propositional calculus, positive propositional calculus, equivalential calculus and minimal logic. The latter, minimal logic, is both paraconsistent and paracomplete (a subsystem of intuitionistic logic). The other three simply do not allow one to express a contradiction to begin with since they lack the ability to form negations. Here is an example of a three-valued logic which is paraconsistent and ideal as defined in "Ideal Paraconsistent Logics" by O. Arieli, A. Avron, and A. Zamansky, especially pages 22–23.[16] The three truth-values are: t (true only), b (both true and false), and f (false only). A formula is true if its truth-value is either t or b for the valuation being used. A formula is a tautology of paraconsistent logic if it is true in every valuation which maps atomic propositions to {t, b, f}. Every tautology of paraconsistent logic is also a tautology of classical logic. For a valuation, the set of true formulas is closed under modus ponens and the deduction theorem. Any tautology of classical logic which contains no negations is also a tautology of paraconsistent logic (by merging b into t). This logic is sometimes referred to as "Pac" or "LFI1". Some tautologies of paraconsistent logic are: A → A, (A ∧ B) → A, and A → (A ∨ B). Some tautologies of classical logic which are not tautologies of paraconsistent logic are: ¬A → (A → B) and (A ∧ ¬A) → B. Suppose we are faced with a contradictory set of premises Γ and wish to avoid being reduced to triviality. In classical logic, the only method one can use is to reject one or more of the premises in Γ. In paraconsistent logic, we may try to compartmentalize the contradiction. That is, weaken the logic so that Γ → X is no longer a tautology provided the propositional variable X does not appear in Γ. However, we do not want to weaken the logic any more than is necessary for that purpose. So we wish to retain modus ponens and the deduction theorem as well as the axioms which are the introduction and elimination rules for the logical connectives (where possible). To this end, we add a third truth-value b which will be employed within the compartment containing the contradiction. We make b a fixed point of all the logical connectives.
We must make b a kind of truth (in addition to t) because otherwise there would be no tautologies at all. To ensure that modus ponens works, we must have b → f = f; that is, to ensure that a true hypothesis and a true implication lead to a true conclusion, we must have that a not-true (f) conclusion and a true (t or b) hypothesis yield a not-true implication. If all the propositional variables in Γ are assigned the value b, then Γ itself will have the value b. If we give X the value f, then Γ → X will have the value b → f = f. So Γ → X will not be a tautology. Limitations: (1) There must not be constants for the truth values because that would defeat the purpose of paraconsistent logic. Having b would change the language from that of classical logic. Having t or f would allow the explosion again, because the corresponding explosion schemas built from those constants would be tautologies. Note that b is not a fixed point of those constants since b ≠ t and b ≠ f. (2) This logic's ability to contain contradictions applies only to contradictions among particularized premises, not to contradictions among axiom schemas. (3) The loss of disjunctive syllogism may result in insufficient commitment to developing the 'correct' alternative, possibly crippling mathematics. (4) To establish that a formula Γ is equivalent to Δ in the sense that either can be substituted for the other wherever they appear as a subformula, one must show Γ → Δ and Δ → Γ together with their contrapositives. This is more difficult than in classical logic because the contrapositives do not necessarily follow. Paraconsistent logic has been applied as a means of managing inconsistency in numerous domains.[17] Logic, as it is classically understood, rests on three main rules (Laws of Thought): the Law of Identity (LOI), the Law of Non-Contradiction (LNC), and the Law of the Excluded Middle (LEM). Paraconsistent logic deviates from classical logic by refusing to accept the LNC. However, the LNC can be seen as closely interconnected with the LOI as well as the LEM: the LOI states that A is A (A ≡ A). This means that A is distinct from its opposite or negation (not A, or ¬A). In classical logic this distinction is supported by the fact that when A is true, its opposite is not. However, without the LNC, both A and not A can be true (A ∧ ¬A), which blurs their distinction. And without distinction, it becomes challenging to define identity. Dropping the LNC thus runs the risk of also eliminating the LOI. The LEM states that either A or not A is true (A ∨ ¬A). However, without the LNC, both A and not A can be true (A ∧ ¬A). Dropping the LNC thus runs the risk of also eliminating the LEM. Hence, dropping the LNC in a careless manner risks losing both the LOI and the LEM as well. And dropping all three classical laws does not just change the kind of logic – it leaves us without any functional system of logic altogether. Loss of all logic eliminates the possibility of structured reasoning; a careless paraconsistent logic therefore risks permitting no means of thinking other than chaos. Paraconsistent logic aims to evade this danger using careful and precise technical definitions. As a consequence, most criticism of paraconsistent logic also tends to be highly technical in nature (e.g. surrounding questions such as whether a paradox can be true). However, even on a highly technical level, paraconsistent logic can be challenging to argue against. It is obvious that paraconsistent logic leads to contradictions. However, the paraconsistent logician embraces contradictions, including any contradictions that are a part or the result of paraconsistent logic. As a consequence, much of the critique has focused on the applicability and comparative effectiveness of paraconsistent logic.
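The three-valued construction just described can be checked mechanically by enumerating valuations. The sketch below (our own encoding; published presentations of Pac/LFI1 may differ in details) orders the values f < b < t for conjunction and disjunction, makes b a fixed point of negation, and defines implication so that b → f = f; it then verifies that explosion is not a tautology:

```python
from itertools import product

T, B, F = 't', 'b', 'f'
ORDER = {F: 0, B: 1, T: 2}
DESIGNATED = {T, B}                        # "true" means value t or b

def NOT(a): return {T: F, B: B, F: T}[a]   # b is a fixed point
def AND(a, c): return min(a, c, key=ORDER.get)
def OR(a, c):  return max(a, c, key=ORDER.get)
def IMP(a, c): return T if a == F else c   # guarantees b -> f = f

def tautology(formula, nvars):
    return all(formula(*vals) in DESIGNATED
               for vals in product([T, B, F], repeat=nvars))

print(tautology(lambda p: IMP(p, p), 1))                  # True
print(tautology(lambda p, q: IMP(AND(p, NOT(p)), q), 2))  # False: no explosion
```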
This is an important debate since embracing paraconsistent logic comes at the risk of losing a large number of theorems that form the basis of mathematics and physics. Logician Stewart Shapiro aimed to make a case for paraconsistent logic as part of his argument for a pluralistic view of logic (the view that different logics are equally appropriate, or equally correct). He found that a case could be made that either intuitionistic logic as the "One True Logic", or a pluralism of intuitionistic logic and classical logic, is interesting and fruitful. However, when it comes to paraconsistent logic, he found "no examples that are ... compelling (at least to me)".[30] In "Saving Truth from Paradox", Hartry Field examines the value of paraconsistent logic as a solution to paradoxes.[31] Field argues for a view that avoids both truth gluts (where a statement can be both true and false) and truth gaps (where a statement is neither true nor false). One of Field's concerns is the problem of a paraconsistent metatheory: if the logic itself allows contradictions to be true, then the metatheory that describes or governs the logic might also have to be paraconsistent. If the metatheory is paraconsistent, then the justification of the logic (why we should accept it) might be suspect, because any argument made within a paraconsistent framework could potentially be both valid and invalid. This creates a challenge for proponents of paraconsistent logic to explain how their logic can be justified without falling into paradox or losing explanatory power. Stewart Shapiro expressed similar concerns: "there are certain notions and concepts that the dialetheist invokes (informally), but which she cannot adequately express, unless the meta-theory is (completely) consistent. The insistence on a consistent meta-theory would undermine the key aspect of dialetheism"[32] In his book "In Contradiction", which argues in favor of paraconsistent dialetheism, Graham Priest admits to metatheoretic difficulties: "Is there a metatheory for paraconsistent logics that is acceptable in paraconsistent terms? The answer to this question is not at all obvious."[33] Littmann and Keith Simmons argued that dialetheist theory is unintelligible: "Once we realize that the theory includes not only the statement '(L) is both true and false' but also the statement '(L) isn't both true and false' we may feel at a loss."[34] Some philosophers have argued against dialetheism on the grounds that the counterintuitiveness of giving up any of the three principles above outweighs any counterintuitiveness that the principle of explosion might have. Others, such as David Lewis, have objected to paraconsistent logic on the ground that it is simply impossible for a statement and its negation to be jointly true.[35] A related objection is that "negation" in paraconsistent logic is not really negation; it is merely a subcontrary-forming operator.[36] Approaches exist that allow for resolution of inconsistent beliefs without violating any of the intuitive logical principles. Most such systems use multi-valued logic with Bayesian inference and the Dempster-Shafer theory, allowing that no non-tautological belief is completely (100%) irrefutable because it must be based upon incomplete, abstracted, interpreted, likely unconfirmed, potentially uninformed, and possibly incorrect knowledge (of course, this very assumption, if non-tautological, entails its own refutability, if by "refutable" we mean "not completely [100%] irrefutable").
Notable figures in the history and modern development of paraconsistent logic include Florencio González Asenjo, Newton da Costa, Graham Priest, and Francisco Miró Quesada Cantuarias, among others.
https://en.wikipedia.org/wiki/Paraconsistent_logic
In mathematics, and more specifically in homological algebra, the splitting lemma states that in any abelian category, the following statements are equivalent for a short exact sequence 0 ⟶ A ⟶q B ⟶r C ⟶ 0: 1. Left split: there exists a morphism t: B → A such that tq is the identity on A; 2. Right split: there exists a morphism u: C → B such that ru is the identity on C; 3. Direct sum: B is isomorphic to the direct sum of A and C, with q corresponding to the natural injection of A and r corresponding to the natural projection onto C. If any of these statements holds, the sequence is called a split exact sequence, and the sequence is said to split. In the above short exact sequence, where the sequence splits, it allows one to refine the first isomorphism theorem, which states that C ≅ B/ker r = B/q(A), to B ≅ q(A) ⊕ u(C) ≅ A ⊕ C, where the first isomorphism theorem is then just the projection onto C. It is a categorical generalization of the rank–nullity theorem (in the form V ≅ ker T ⊕ im T) in linear algebra. First, to show that 3. implies both 1. and 2., we assume 3. and take as t the natural projection of the direct sum onto A, and take as u the natural injection of C into the direct sum. To prove that 1. implies 3., first note that any member of B is in the set (ker t + im q). This follows since for all b in B, b = (b − qt(b)) + qt(b); qt(b) is in im q, and b − qt(b) is in ker t, since t(b − qt(b)) = t(b) − tq(t(b)) = t(b) − t(b) = 0. Next, the intersection of im q and ker t is 0, since if there exists a in A such that q(a) = b, and t(b) = 0, then 0 = t(b) = tq(a) = a; and therefore, b = 0. This proves that B is the direct sum of im q and ker t. So, for all b in B, b can be uniquely identified by some a in A, k in ker t, such that b = q(a) + k. By exactness ker r = im q. The subsequence B ⟶ C ⟶ 0 implies that r is onto; therefore for any c in C there exists some b = q(a) + k such that c = r(b) = r(q(a) + k) = r(k). Therefore, for any c in C, there exists k in ker t such that c = r(k), and r(ker t) = C. If r(k) = 0, then k is in im q; since the intersection of im q and ker t is 0, then k = 0. Therefore, the restriction r: ker t → C is an isomorphism; and ker t is isomorphic to C. Finally, im q is isomorphic to A due to the exactness of 0 ⟶ A ⟶ B; so B is isomorphic to the direct sum of A and C, which proves (3). To show that 2. implies 3., we follow a similar argument. Any member of B is in the set ker r + im u, since for all b in B, b = (b − ur(b)) + ur(b), which is in ker r + im u. The intersection of ker r and im u is 0, since if r(b) = 0 and u(c) = b, then 0 = ru(c) = c. By exactness, im q = ker r, and since q is an injection, im q is isomorphic to A, so A is isomorphic to ker r. Since ru is a bijection, u is an injection, and thus im u is isomorphic to C. So B is again the direct sum of A and C. An alternative "abstract nonsense" proof of the splitting lemma may be formulated entirely in category theoretic terms. In the form stated here, the splitting lemma does not hold in the full category of groups, which is not an abelian category. It is partially true: if a short exact sequence of groups is left split or a direct sum (1. or 3.), then all of the conditions hold. For a direct sum this is clear, as one can inject from or project to the summands. For a left split sequence, the map t × r: B → A × C gives an isomorphism, so B is a direct sum (3.), and thus inverting the isomorphism and composing with the natural injection C → A × C gives an injection C → B splitting r (2.). However, if a short exact sequence of groups is right split (2.), then it need not be left split or a direct sum (neither 1. nor 3. follows): the problem is that the image of the right splitting need not be normal. What is true in this case is that B is a semidirect product, though not in general a direct product. To form a counterexample, take the smallest non-abelian group B ≅ S3, the symmetric group on three letters. Let A denote the alternating subgroup, and let C = B/A ≅ {±1}. Let q and r denote the inclusion map and the sign map respectively, so that 0 ⟶ A ⟶q B ⟶r C ⟶ 0 is a short exact sequence. 3. fails, because S3 is not abelian, but 2. holds: we may define u: C → B by mapping the generator to any two-cycle. Note for completeness that 1.
fails: any map t: B → A must map every two-cycle to the identity, because the map has to be a group homomorphism: the order of a two-cycle is 2, and 2 does not divide the order of any element of A other than the identity, since A is the alternating subgroup of S3, namely the cyclic group of order 3, whose non-identity elements all have order 3. But every permutation is a product of two-cycles, so t is the trivial map, whence tq: A → A is the trivial map, not the identity.
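The S3 counterexample can also be verified computationally. The following sketch (our own, with permutations encoded as tuples) confirms that the sign map is right split by a two-cycle, and that no homomorphism t: S3 → A3 restricts to the identity on A3, so the sequence is not left split:

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))                  # permutations as tuples
def compose(p, q): return tuple(p[q[i]] for i in range(3))
def sign(p):
    inversions = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

A3 = [p for p in S3 if sign(p) == 1]               # the alternating subgroup
e = (0, 1, 2)

# 2. holds: u sends the generator -1 to a two-cycle, and sign(u(c)) = c.
u = {1: e, -1: (1, 0, 2)}
assert compose(u[-1], u[-1]) == e                  # u is a homomorphism
assert all(sign(u[c]) == c for c in (1, -1))       # r∘u = id

# 1. fails: brute-force search for a homomorphism t: S3 -> A3 with t∘q = id.
found = False
for images in product(A3, repeat=len(S3)):
    t = dict(zip(S3, images))
    if any(t[a] != a for a in A3):
        continue                                   # t must fix A3 pointwise
    if all(t[compose(g, h)] == compose(t[g], t[h]) for g in S3 for h in S3):
        found = True
print(found)                                       # False: no left splitting
```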
https://en.wikipedia.org/wiki/Splitting_lemma
In statistics, multicollinearity or collinearity is a situation where the predictors in a regression model are linearly dependent. Perfect multicollinearity refers to a situation where the predictive variables have an exact linear relationship. When there is perfect collinearity, the design matrix X has less than full rank, and therefore the moment matrix XᵀX cannot be inverted. In this situation, the parameter estimates of the regression are not well-defined, as the system of equations has infinitely many solutions. Imperfect multicollinearity refers to a situation where the predictive variables have a nearly exact linear relationship. Contrary to popular belief, neither the Gauss–Markov theorem nor the more common maximum likelihood justification for ordinary least squares relies on any kind of correlation structure between dependent predictors[1][2][3] (although perfect collinearity can cause problems with some software). There is no justification for the practice of removing collinear variables as part of regression analysis,[1][4][5][6][7] and doing so may constitute scientific misconduct. Including collinear variables does not reduce the predictive power or reliability of the model as a whole,[6] and does not reduce the accuracy of coefficient estimates.[1] High collinearity indicates that it is exceptionally important to include all collinear variables, as excluding any will cause worse coefficient estimates, strong confounding, and downward-biased estimates of standard errors.[2] To address the high collinearity of a dataset, the variance inflation factor can be used to identify the collinearity of the predictor variables. Perfect multicollinearity refers to a situation where the predictors are linearly dependent (one can be written as an exact linear function of the others).[8] Ordinary least squares requires inverting the matrix XᵀX, where X is an N × (k + 1) matrix, N is the number of observations, k is the number of explanatory variables, and N ≥ k + 1. If there is an exact linear relationship among the independent variables, then at least one of the columns of X is a linear combination of the others, and so the rank of X (and therefore of XᵀX) is less than k + 1, and the matrix XᵀX will not be invertible. Perfect collinearity is typically caused by including redundant variables in a regression. For example, a dataset may include variables for income, expenses, and savings. However, because income is equal to expenses plus savings by definition, it is incorrect to include all 3 variables in a regression simultaneously. Similarly, including a dummy variable for every category (e.g., summer, autumn, winter, and spring) as well as an intercept term will result in perfect collinearity. This is known as the dummy variable trap.[9] The other common cause of perfect collinearity is attempting to use ordinary least squares when working with very wide datasets (those with more variables than observations). These require more advanced data analysis techniques like Bayesian hierarchical modeling to produce meaningful results.[citation needed] Sometimes, the variables Xⱼ are nearly collinear. In this case, the matrix XᵀX has an inverse, but it is ill-conditioned.
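A quick numerical illustration of the dummy variable trap (our own sketch, using NumPy): with an intercept plus a dummy column for every season, the design matrix loses full rank and the moment matrix XᵀX becomes singular:

```python
import numpy as np

n = 12
season = np.arange(n) % 4                       # four seasons, all present
dummies = np.eye(4)[season]                     # one dummy column per season
X_trap = np.column_stack([np.ones(n), dummies])

print(np.linalg.matrix_rank(X_trap), X_trap.shape[1])  # 4 < 5: rank deficient
print(np.linalg.det(X_trap.T @ X_trap))                # ~0: X'X is singular

X_ok = np.column_stack([np.ones(n), dummies[:, 1:]])   # drop one dummy
print(np.linalg.matrix_rank(X_ok), X_ok.shape[1])      # 4 == 4: full rank
```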
A computer algorithm may or may not be able to compute an approximate inverse; even if it can, the resulting inverse may have large rounding errors. The standard measure of ill-conditioning in a matrix is the condition index. This determines whether the inversion of the matrix is numerically unstable with finite-precision numbers, indicating the potential sensitivity of the computed inverse to small changes in the original matrix. The condition number is computed by finding the maximum singular value divided by the minimum singular value of the design matrix.[10] In the context of collinear variables, the variance inflation factor is the condition number for a particular coefficient. Numerical problems in estimation can be solved by applying standard techniques from linear algebra to estimate the equations more precisely. In addition to causing numerical problems, imperfect collinearity makes precise estimation of variables difficult. In other words, highly correlated variables lead to poor estimates and large standard errors. As an example, say that we notice Alice wears her boots whenever it is raining and that there are only puddles when it rains. Then, we cannot tell whether she wears boots to keep the rain from landing on her feet, or to keep her feet dry if she steps in a puddle. The problem with trying to identify how much each of the two variables matters is that they are confounded with each other: our observations are explained equally well by either variable, so we do not know which one of them causes the observed correlations. There are two ways to discover this information: observing cases where the variables come apart (for example, rain without puddles), or bringing in prior knowledge from outside the dataset. This confounding becomes substantially worse when researchers attempt to ignore or suppress it by excluding these variables from the regression (see #Misuse). Excluding multicollinear variables from regressions will invalidate causal inference and produce worse estimates by removing important confounders. There are many ways to prevent multicollinearity from affecting results by planning ahead of time. However, these methods all require a researcher to decide on a procedure and analysis before data has been collected (see post hoc analysis and Multicollinearity § Misuse). Many regression methods are naturally "robust" to multicollinearity and generally perform better than ordinary least squares regression, even when variables are independent. Regularized regression techniques such as ridge regression, LASSO, elastic net regression, or spike-and-slab regression are less sensitive to including "useless" predictors, a common cause of collinearity. These techniques can detect and remove these predictors automatically to avoid problems. Bayesian hierarchical models (provided by software like BRMS) can perform such regularization automatically, learning informative priors from the data. Often, problems caused by the use of frequentist estimation are misunderstood or misdiagnosed as being related to multicollinearity.[3] Researchers are often frustrated not by multicollinearity, but by their inability to incorporate relevant prior information in regressions. For example, complaints that coefficients have "wrong signs" or confidence intervals that "include unrealistic values" indicate there is important prior information that is not being incorporated into the model.
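The condition number and a variance inflation factor can be computed directly from the design matrix, as in the following sketch (our own construction of a nearly collinear pair of predictors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)        # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])

s = np.linalg.svd(X, compute_uv=False)
print("condition number:", s.max() / s.min())   # very large: ill-conditioned

# VIF for x1: regress x1 on the remaining predictors, then 1 / (1 - R^2).
others = np.column_stack([np.ones(n), x2])
_, ssr, *_ = np.linalg.lstsq(others, x1, rcond=None)
r2 = 1 - ssr[0] / np.sum((x1 - x1.mean()) ** 2)
print("VIF(x1):", 1 / (1 - r2))                 # far above common thresholds
```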
When this information is available, it should be incorporated into the prior using Bayesian regression techniques.[3] Stepwise regression (the procedure of excluding "collinear" or "insignificant" variables) is especially vulnerable to multicollinearity, and is one of the few procedures wholly invalidated by it (with any collinearity resulting in heavily biased estimates and invalidated p-values).[2] When conducting experiments where researchers have control over the predictive variables, researchers can often avoid collinearity by choosing an optimal experimental design in consultation with a statistician. While the above strategies work in some situations, estimates using advanced techniques may still produce large standard errors. In such cases, the correct response to multicollinearity is to "do nothing".[1] The scientific process often involves null or inconclusive results; not every experiment will be "successful" in the sense of decisively confirming the researcher's original hypothesis. Edward Leamer notes that "The solution to the weak evidence problem is more and better data. Within the confines of the given data set there is nothing that can be done about weak evidence".[3] Leamer notes that "bad" regression results that are often misattributed to multicollinearity instead indicate the researcher has chosen an unrealistic prior probability (generally the flat prior used in OLS).[3] Damodar Gujarati writes that "we should rightly accept [our data] are sometimes not very informative about parameters of interest".[1] Olivier Blanchard quips that "multicollinearity is God's will, not a problem with OLS";[7] in other words, when working with observational data, researchers cannot "fix" multicollinearity, only accept it. Variance inflation factors are often misused as criteria in stepwise regression (i.e. for variable inclusion/exclusion), a use that "lacks any logical basis but also is fundamentally misleading as a rule-of-thumb".[2] Excluding collinear variables leads to artificially small estimates for standard errors, but does not reduce the true (not estimated) standard errors for regression coefficients.[1] Excluding variables with a high variance inflation factor also invalidates the calculated standard errors and p-values, by turning the results of the regression into a post hoc analysis.[14] Because collinearity leads to large standard errors and p-values, which can make publishing articles more difficult, some researchers will try to suppress inconvenient data by removing strongly-correlated variables from their regression. This procedure falls into the broader categories of p-hacking, data dredging, and post hoc analysis. Dropping (useful) collinear predictors will generally worsen the accuracy of the model and coefficient estimates. Similarly, trying many different models or estimation procedures (e.g. ordinary least squares, ridge regression, etc.) until finding one that can "deal with" the collinearity creates a forking paths problem. P-values and confidence intervals derived from post hoc analyses are invalidated by ignoring the uncertainty in the model selection procedure. It is reasonable to exclude unimportant predictors if they are known ahead of time to have little or no effect on the outcome; for example, local cheese production should not be used to predict the height of skyscrapers. However, this must be done when first specifying the model, prior to observing any data, and potentially-informative variables should always be included.
https://en.wikipedia.org/wiki/Multicollinearity#Degenerate_features
Civic intelligence is an "intelligence" that is devoted to addressing public or civic issues. The term has been applied to individuals and, more commonly, to collective bodies, like organizations, institutions, or societies.[1] Civic intelligence can be used in politics by groups of people who are trying to achieve a common goal.[2] Social movements and political engagement in history might have been partly involved with collective thinking and civic intelligence. Education, in its multiple forms, has helped some countries to increase political awareness and engagement by amplifying the civic intelligence of collaborative groups.[3] Increasingly, artificial intelligence and social media, modern innovations of society, are being used by many political entities and societies to tackle problems in politics, the economy, and society at large. Like the term social capital, civic intelligence has been used independently by several people since the beginning of the 20th century. Although there has been little or no direct contact between the various authors, the different meanings associated with the term are generally complementary to each other. The first usage identified was made in 1902 by Samuel T. Dutton,[4] Superintendent of Teachers College Schools, on the occasion of the dedication of the Horace Mann School, when he noted that "increasing civic intelligence" is a "true purpose of education in this country." More recently, in 1985, David Matthews, president of the Kettering Foundation, wrote an article entitled Civic Intelligence in which he discussed the decline of civic engagement in the United States. A still more recent version is Douglas Schuler's "Cultivating Society's Civic Intelligence: Patterns for a New 'World Brain'".[5] In Schuler's version, civic intelligence is applied to groups of people because that is the level where public opinion is formed and decisions are made or at least influenced. It applies to groups, formal or informal, who are working towards civic goals such as environmental amelioration or non-violence among people. This version is related to many other concepts that are currently[when?] receiving a great deal of attention including collective intelligence, civic engagement, participatory democracy, emergence, new social movements, collaborative problem-solving, and Web 2.0. When Schuler developed the Liberating Voices[6] pattern language for communication revolution, he made civic intelligence the first of 136 patterns.[7] Civic intelligence is similar[1] to John Dewey's "cooperative intelligence" or the "democratic faith" that asserts that "each individual has something to contribute, and the value of each contribution can be assessed only as it entered into the final pooled intelligence constituted by the contributions of all".[8] Civic intelligence is implicitly invoked by the subtitle of Jared Diamond's 2004 book, Collapse: Why Some Societies Choose to Fail or Succeed,[9] and by the question posed in Thomas Homer-Dixon's 2000 book Ingenuity Gap: How Can We Solve the Problems of the Future?,[10] which suggests civic intelligence will be needed if humankind is to stave off problems related to climate change and other potentially catastrophic occurrences. With these meanings, civic intelligence is less a phenomenon to be studied and more of a dynamic process or tool to be shaped and wielded by individuals or groups.[1] Civic intelligence, according to this logic, can affect how society is built and how groups or individuals can utilize it as a tool for collective thinking or action.
Civic intelligence sometimes involves large groups of people, but other times it involves only a few individuals. Civic intelligence might be more evident in smaller groups than in bigger ones, due to more intimate interactions and group dynamics.[2] Robert Putnam, who is largely responsible for the widespread consideration of "social capital", has written that social innovation often occurs in response to social needs.[11] This resonates with George Basalla's findings related to technological innovation,[12] which simultaneously facilitates and responds to social innovation. The concept of "civic intelligence," an example of social innovation, is a response to a perceived need. The reception it receives will be in proportion to how strongly others perceive that need. Thus, social needs serve as causes for social innovation and collective civic intelligence.[12] Civic intelligence focuses on the role of civil society and the public for several reasons. At a minimum, the public's input is necessary to ratify important decisions made by business or government. Beyond that, however, civil society has originated and provided the leadership for a number of vital social movements. Any inquiry into the nature of civic intelligence is also collaborative and participatory. Civic intelligence is inherently multi-disciplinary and open-ended. Cognitive scientists address some of these issues in the study of "distributed cognition." Social scientists study aspects of it with their work on group dynamics, democratic theory, social systems, and many other subfields. The concept is important in business literature ("organizational learning") and in the study of "epistemic communities" (scientific research communities, notably). Politically, civic intelligence brings people together to form collective thoughts or ideas to solve political problems. Historically, Jane Addams was an activist who reformed Chicago in terms of housing immigrants, hosting lecture events on current issues, building the first public playground, and conducting research on cultural and political elements of the communities around her.[2] She is just one example of how civic intelligence can influence society. Historical movements in America such as those related to human rights, the environment, and economic equity have been started by ordinary citizens, not by governments or businesses.[2] To achieve changes in these areas, people of different backgrounds come together to solve both local and global issues. Another example of civic intelligence is how governments in 2015 came together in Paris to formulate a plan to curb greenhouse gas emissions and alleviate some effects of global warming.[2] Politically, no atlas of civic intelligence exists, yet the quantity and quality of examples worldwide are enormous. While a comprehensive "atlas" is not necessarily a goal, people are currently developing online resources to record at least some small percentage of these efforts. The rise in the number of transnational advocacy networks,[13] the coordinated worldwide demonstrations protesting the invasion of Iraq,[14] and the World Social Forums that provided "free space" for thousands of activists from around the world,[15] all support the idea that civic intelligence is growing. Although smaller in scope, efforts like the work of the Friends of Nature group to create a "Green Map" of Beijing are also notable.
Political engagement of citizens sometimes comes from the collective intelligence of engaging local communities through political education.[2] Traditional examples of political engagement include voting, discussing issues with neighbors and friends, working for a political campaign, attending rallies, forming political action groups, etc. Today, social and economic scientists such as Jason Corburn and Elinor Ostrom continue to analyze how people come together to achieve collective goals such as sharing natural resources, combating diseases, formulating political action plans, and preserving the natural environment.[16] The author of one study suggests that it might be helpful for educational facilities such as colleges or even high schools to educate students on the importance of civic intelligence in politics so that better choices could be made when tackling societal issues through a collective citizen intelligence.[17] Harry C. Boyte, in an article he wrote, argues that schools serve as a sort of "free space" for students to take part in the community engagement efforts described above.[17] Schools, according to Boyte, empower people to take action in their communities, thus rallying an increasing number of people to learn about politics and form political opinions. He argues that this chain reaction is what then leads to civic intelligence and the collective effort to solve specific problems in local communities. One study shows that citizens who are more informed and more attentive to the world of politics around them are more politically engaged at both the local and national level.[18] One study, aggregating the results of 70 articles about political awareness, finds that political awareness is important in the onset of citizen participation and the voicing of opinion.[18] In recent years, there has been a shift in how citizens stay informed and become attentive to the political world. Although traditional political engagement methods are still being used by most individuals, particularly older people, there is a trend shifting towards social media and the internet in terms of political engagement and civic intelligence.[19]
The report argues that increasing citizen engagement makes governments more legitimate through increased public confidence, stakeholder engagement, and government political commitment.[21] Ideas such as creating citizen juries, citizen reference panels, and the devolution process of policymaking are explored in more depth in the report. Collective civic intelligence is seen by the RSA as a tool to improve economic issues in society.[21] Globally, civic participation and intelligence interact with the needs of businesses and governments. One study finds that increased local economic concentration is correlated with decreased levels of civic engagement because citizens' voices are drowned out by the needs of corporations.[22] In this situation, governments overvalue the needs of big corporations when compared to the needs of groups of individual citizens. This study points out that corporations can negatively impact civic intelligence if citizens are not given enough freedom to voice their opinions regarding economic issues. The study shows that the US has faced civic disengagement in the past three decades due to the monopolization of opinion by corporations.[22] On the other hand, if a government supports local capitalism and civic engagement equally, there might be beneficial socioeconomic outcomes such as more income equality, less poverty, and less unemployment.[23] The article adds that in a period of global development, local forces of civic intelligence and innovation will likely benefit citizens' lives and distinguish one region from another in terms of socioeconomic status.[23] The concept of civic health is introduced by one study as a key component of the wellbeing of a local or national economy. According to the article, civic engagement can increase citizens' professional employment skills, foster a sense of trust in communities, and allow a greater amount of community investment from citizens themselves.[24] One recent prominent example of civic intelligence in the modern world is the creation and improvement of artificial intelligence. According to one article, AI enables people to propose solutions, communicate with each other more effectively, obtain data for planning, and tackle societal issues from across the world.[25] In 2018, at the second annual AI for Good Global summit, industry leaders, policymakers, research scientists, and AI enthusiasts all came together to formulate plans and ideas regarding how to use artificial intelligence to solve modern societal issues, including political problems in countries of different backgrounds.[25] The summit proposed ideas regarding how AI can benefit safety, health, and city governance. The article mentions that in order for artificial intelligence to achieve effective use in society, researchers, policymakers, community members, and technology companies all need to work together to improve artificial intelligence. With this logic, it takes coordinated civic intelligence to make artificial intelligence work. There are some shortcomings to artificial intelligence. According to one report, AI is increasingly being used by governments to limit the civil freedom of citizens through authoritarian regimes and restrictive regulations.[26] Technology and the use of automated systems are used by powerful governments to dismiss civic intelligence.
There is also concern about losing civic intelligence and human jobs if AI were to replace many sectors of the economy and political landscapes around the world.[27] AI carries the dangerous possibility of getting out of control and self-replicating destructive behaviors that might be detrimental to society.[27] However, according to one article, if world communities work together to form international standards, improve AI regulation policies, and educate people about AI, political and civil freedom might be more easily achieved.[28] Recent shifts towards modern technology, social media, and the internet influence how civic intelligence interacts with politics in the world.[29] New technologies expand the reach of data and information to more people, and citizens can engage with each other or the government more openly through the internet.[30] Civic intelligence can take the form of an increased presence among groups of individuals, and the speed of civic intelligence onset is intensified as well.[29] The internet and social media play roles in civic intelligence. Social media platforms like Facebook, Twitter, and Reddit have become popular sites for political discovery, and many people, especially younger adults, choose to engage with politics online.[19] There are positive effects of social media on civic engagement. According to one article, social media has connected people in unprecedented ways. People now find it easier to form democratic movements, engage with each other and politicians, voice opinions, and take action virtually.[3] Social media has been incorporated into people's lives, and many people obtain news and other political ideas from online sources.[3] One study explains that social media increases political participation through more direct forms of democracy and a bottom-up approach to solving political, social, or economic issues.[29] The idea is that social media will lead people to participate politically in novel ways other than the traditional actions of voting, attending rallies, and supporting candidates in real life. The study argues that this leads to new ways of enacting civic intelligence and political participation.[29] Thus, the study points out that social media is designed to gather civic intelligence in one place, the internet. A third article featuring an Italian case study finds that civic collaboration is important in helping a healthy government function in both local and national communities.[30] The article explains that there seem to be more individualized political actions and efforts when people choose to innovate new ways of political participation. Thus, one group's actions of political engagement might be entirely different from those of another group. However, social media also has some negative effects on civic intelligence in politics or economics. One study explains that even though social media might have increased direct citizen participation in politics and economics, it might have also opened more room for misinformation and echo chambers.[31] More specifically, trolling, the spread of false political information, the theft of personal data, and the use of bots to spread propaganda are all examples of negative consequences of the internet and social media.[3] These negative results, according to the article, influence civic intelligence negatively because citizens have trouble telling lies from truths in the political arena.
Thus, civic intelligence would either be misleading or vanish altogether if a group relies on false sources or misleading information.[3] A second article points out that a filter bubble is created through group isolation as a result of group polarization.[31] False information and deliberate deception about political agendas play a major role in forming the filter bubbles of citizens. People are conditioned to believe what they want to believe, so citizens who focus more on one-sided political news might form their own filter bubbles.[31] Next, one research journal found that Twitter increases the political knowledge of users while Facebook decreases it.[19] The journal points out that different social media platforms can affect users differently in terms of political awareness and civic intelligence. Thus, social media might have uncertain political effects on civic intelligence.[19]
https://en.wikipedia.org/wiki/Civic_intelligence
In control theory, a state observer, state estimator, or Luenberger observer is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications. Knowing the system state is necessary to solve many control theory problems; for example, stabilizing a system using state feedback. In most practical cases, the physical state of the system cannot be determined by direct observation. Instead, indirect effects of the internal state are observed by way of the system outputs. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state inside the tunnel can only be estimated. If a system is observable, it is possible to fully reconstruct the system state from its output measurements using the state observer. Linear, delayed, sliding mode, high gain, Tau, homogeneity-based, extended and cubic observers are among several observer structures used for state estimation of linear and nonlinear systems. A linear observer structure is described in the following sections. The state of a linear, time-invariant discrete-time system is assumed to satisfy x(k + 1) = A x(k) + B u(k), with output y(k) = C x(k) + D u(k), where, at time k, x(k) is the plant's state, u(k) is its inputs, and y(k) is its outputs. These equations simply say that the plant's current outputs and its future state are both determined solely by its current states and the current inputs. (Although these equations are expressed in terms of discrete time steps, very similar equations hold for continuous systems). If this system is observable then the output of the plant, y(k), can be used to steer the state of the state observer. The observer model of the physical system is then typically derived from the above equations. Additional terms may be included in order to ensure that, on receiving successive measured values of the plant's inputs and outputs, the model's state converges to that of the plant. In particular, the output of the observer may be subtracted from the output of the plant and then multiplied by a matrix L; this is then added to the equations for the state of the observer to produce a so-called Luenberger observer, defined by the equations x̂(k + 1) = A x̂(k) + B u(k) + L(y(k) − ŷ(k)), with ŷ(k) = C x̂(k) + D u(k). Note that the variables of a state observer are commonly denoted by a "hat", x̂(k) and ŷ(k), to distinguish them from the variables of the equations satisfied by the physical system. The observer is called asymptotically stable if the observer error e(k) = x̂(k) − x(k) converges to zero when k → ∞. For a Luenberger observer, the observer error satisfies e(k + 1) = (A − LC) e(k). The Luenberger observer for this discrete-time system is therefore asymptotically stable when the matrix A − LC has all the eigenvalues inside the unit circle. For control purposes the output of the observer system is fed back to the input of both the observer and the plant through the gains matrix K, so that u(k) = −K x̂(k). The observer equations then become x̂(k + 1) = A x̂(k) − B K x̂(k) + L(y(k) − ŷ(k)), or, more simply, x̂(k + 1) = (A − B K) x̂(k) + L(y(k) − ŷ(k)). Due to the separation principle we know that we can choose K and L independently without harm to the overall stability of the systems.
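A minimal simulation of the discrete-time Luenberger observer described above (our own sketch; the plant matrices are illustrative, D is taken as zero, and the gain L is chosen by hand so that A − LC is stable):

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # discretized double integrator
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])        # only the first state is measured
L = np.array([[0.5], [0.8]])      # hand-picked observer gain

print(np.abs(np.linalg.eigvals(A - L @ C)))   # all < 1: error dynamics stable

x = np.array([[1.0], [-0.5]])     # true (unknown) plant state
xh = np.zeros((2, 1))             # observer starts from zero
for k in range(100):
    u = np.array([[0.1]])
    y = C @ x                                  # plant measurement
    xh = A @ xh + B @ u + L @ (y - C @ xh)     # Luenberger update
    x = A @ x + B @ u                          # plant update
print(np.linalg.norm(x - xh))     # estimation error is essentially zero
```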
As a rule of thumb, the poles of the observer A − LC are usually chosen to converge 10 times faster than the poles of the system A − BK.

The previous example was for an observer implemented in a discrete-time LTI system. However, the process is similar for the continuous-time case; the observer gains L are chosen to make the continuous-time error dynamics converge to zero asymptotically (i.e., when A − LC is a Hurwitz matrix). For a continuous-time linear system

dx/dt = A x + B u,
y = C x,

where x ∈ ℝⁿ, u ∈ ℝᵐ, y ∈ ℝʳ, the observer looks similar to the discrete-time case described above:

dx̂/dt = A x̂ + B u + L (y − C x̂).

The observer error e = x − x̂ satisfies the equation

de/dt = (A − LC) e.

The eigenvalues of the matrix A − LC can be chosen arbitrarily by appropriate choice of the observer gain L when the pair [A, C] is observable, i.e. the observability condition holds. In particular, A − LC can be made Hurwitz, so the observer error e(t) → 0 when t → ∞.

When the observer gain L is high, the linear Luenberger observer converges to the system states very quickly. However, high observer gain leads to a peaking phenomenon in which the initial estimator error can be prohibitively large (i.e., impractical or unsafe to use).[1] As a consequence, nonlinear high-gain observer methods are available that converge quickly without the peaking phenomenon. For example, sliding mode control can be used to design an observer that brings one estimated state's error to zero in finite time even in the presence of measurement error; the other states have error that behaves similarly to the error in a Luenberger observer after peaking has subsided. Sliding mode observers also have attractive noise resilience properties that are similar to a Kalman filter.[2][3] Another approach is to apply a multi-observer, which significantly improves transients and reduces observer overshoot. A multi-observer can be adapted to every system where a high-gain observer is applicable.[4]

High gain, sliding mode and extended observers are the most common observers for nonlinear systems. To illustrate the application of sliding mode observers for nonlinear systems, first consider the no-input nonlinear system

dx/dt = f(x),

where x ∈ ℝⁿ. Also assume that there is a measurable output y ∈ ℝ given by

y = h(x).

There are several non-approximate approaches for designing an observer. The two observers given below also apply to the case when the system has an input, that is,

dx/dt = f(x) + B(x) u.

One suggestion by Krener and Isidori[5] and Krener and Respondek[6] can be applied in a situation when there exists a linearizing transformation (i.e., a diffeomorphism, like the one used in feedback linearization) z = Φ(x) such that in the new variables the system equations read

dz/dt = A z + ϕ(y),
y = C z.

The Luenberger observer is then designed as

dẑ/dt = A ẑ + ϕ(y) + L (y − C ẑ).

The observer error for the transformed variable e = ẑ − z satisfies the same equation as in the classical linear case, de/dt = (A − LC) e.

As shown by Gauthier, Hammouri, and Othman[7] and Hammouri and Kinnaert,[8] if there exists a transformation z = Φ(x) such that the system can be transformed into the form

dz/dt = A z + ϕ(z, u),
y = C z,

then the observer is designed as

dẑ/dt = A ẑ + ϕ(ẑ, u) − L(t) (C ẑ − y),

where L(t) is a time-varying observer gain.
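Returning to the linear cases above, the gains K and L are in practice often computed by pole placement, exploiting the duality between A − LC and Aᵀ − CᵀLᵀ. A hedged sketch using SciPy's place_poles; the matrices are assumed for the example, and the factor-of-10 pole choice follows the rule of thumb mentioned earlier:

```python
import numpy as np
from scipy.signal import place_poles

# Sketch: observer gain L via pole placement on the dual pair (A.T, C.T).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

controller_poles = np.array([-1.0, -2.0])            # desired poles of A - B K
K = place_poles(A, B, controller_poles).gain_matrix

# rule of thumb from the text: observer poles roughly 10 times faster
observer_poles = 10 * controller_poles
L = place_poles(A.T, C.T, observer_poles).gain_matrix.T

print("eig(A - B K):", np.linalg.eigvals(A - B @ K))   # ~ -1, -2
print("eig(A - L C):", np.linalg.eigvals(A - L @ C))   # ~ -10, -20
```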
Ciccarella, Dalla Mora, and Germani[9] obtained more advanced and general results, removing the need for a nonlinear transform and proving global asymptotic convergence of the estimated state to the true state using only simple assumptions on regularity.

As discussed for the linear case above, the peaking phenomenon present in Luenberger observers justifies the use of switched observers. A switched observer encompasses a relay or binary switch that acts upon detecting minute changes in the measured output. Some common types of switched observers include the sliding mode observer, nonlinear extended state observer,[10] fixed time observer,[11] switched high gain observer[12] and uniting observer.[13] The sliding mode observer uses nonlinear high-gain feedback to drive estimated states to a hypersurface where there is no difference between the estimated output and the measured output. The nonlinear gain used in the observer is typically implemented with a scaled switching function, like the signum (i.e., sgn) of the difference between the estimated and the measured output. Hence, due to this high-gain feedback, the vector field of the observer has a crease in it so that observer trajectories slide along a curve where the estimated output matches the measured output exactly. So, if the system is observable from its output, the observer states will all be driven to the actual system states. Additionally, by using the sign of the error to drive the sliding mode observer, the observer trajectories become insensitive to many forms of noise. Hence, some sliding mode observers have attractive properties similar to the Kalman filter but with simpler implementation.[2][3]

As suggested by Drakunov,[14] a sliding mode observer can also be designed for a class of nonlinear systems. Such an observer can be written in terms of the original variable estimate x̂, with injection terms of the form sgn(v_i(t) − h_i(x̂(t))).

The idea can be briefly explained as follows. According to the theory of sliding modes, in order to describe the system behavior once sliding mode starts, the function sgn(v_i(t) − h_i(x̂(t))) should be replaced by its equivalent value (see equivalent control in the theory of sliding modes). In practice, it switches (chatters) with high frequency, with its slow component being equal to the equivalent value. Applying an appropriate lowpass filter to get rid of the high-frequency component, one can obtain the value of the equivalent control, which contains more information about the state of the estimated system. The observer described above uses this method several times to obtain the state of the nonlinear system ideally in finite time.

The modified observation error can be written in the transformed states e = H(x) − H(x̂). For sufficiently large gains m_i, all observer estimated states reach the actual states in finite time. In fact, increasing m_i allows for convergence in any desired finite time so long as each |h_i(x(0))| function can be bounded with certainty. Hence, the requirement that the map H: ℝⁿ → ℝⁿ is a diffeomorphism (i.e., that its Jacobian linearization is invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition.
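As a loose illustration of the signum-injection idea (not the exact construction from the text), the following sketch estimates the unmeasured velocity of a pendulum-like system from position measurements; the model f and the switching gains m1, m2 are assumptions made for the example:

```python
import numpy as np

# Toy sliding-mode-style observer: estimate velocity x2 from position y = x1.
def f(x1, x2):
    return -np.sin(x1) - 0.1 * x2      # assumed acceleration model

dt = 1e-4
x1, x2 = 0.5, 0.0                      # true state
x1h, x2h = 0.0, 0.0                    # observer state
m1, m2 = 2.0, 5.0                      # switching gains, assumed large enough

for _ in range(200_000):               # 20 s of simulated time
    s = np.sign(x1 - x1h)              # signum of the output error
    # observer: copy of the model driven by the switching term
    dx1h = x2h + m1 * s
    dx2h = f(x1h, x2h) + m2 * s
    # true plant
    dx1, dx2 = x2, f(x1, x2)
    x1h += dt * dx1h; x2h += dt * dx2h
    x1 += dt * dx1;   x2 += dt * dx2

print("velocity estimation error:", x2 - x2h)   # small once sliding starts
```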
In the case of the sliding mode observer for a system with an input, additional conditions are needed for the observation error to be independent of the input.

The multi-observer extends the high-gain observer structure from a single observer to many models working simultaneously. It has two layers: the first consists of multiple high-gain observers with different estimation states, and the second determines the importance weights of the first-layer observers. The algorithm is simple to implement and does not contain any risky operations like differentiation.[4] The idea of multiple models was previously applied to obtain information in adaptive control.[15]

Assume that the number of high-gain observers equals n + 1, where k = 1, …, n + 1 is the observer index. The first-layer observers share the same gain L but differ in their initial states x_k(0). In the second layer, all x_k(t) for k = 1, …, n + 1 are combined into a single state vector estimate

x̂(t) = Σ_{k=1}^{n+1} α_k(t) x_k(t),

where α_k ∈ ℝ are weight factors. These factors are changed to provide the estimation in the second layer and to improve the observation process.

Assume that ξ_k ∈ ℝ^{n×1} is some vector that depends on the kth observer error e_k(t). A suitable transformation yields a linear regression problem, which makes it possible to estimate α_k(t). To construct the manifold, we need a mapping m: ℝⁿ → ℝⁿ with ξ_k(t) = m(e_k(t)) and the assurance that ξ_k(t) can be calculated from measurable signals. The first step is to eliminate the peaking phenomenon for α_k(t) from the observer error. Calculating the derivative n times on η_k(t) = ŷ_k(t) − y(t) to find the mapping m leads to ξ_k(t) being defined in terms of η_k(t) and its integrals, with some time constant t_d > 0. Note that ξ_k(t) relies on both η_k(t) and its integrals, hence it is easily available in the control system. Furthermore, α_k(t) is specified by an estimation law, which shows that the manifold is measurable.

In the second layer, α̂_k(t) for k = 1, …, n + 1 are introduced as estimates of the α_k(t) coefficients, and the mapping error e_ξ(t) ∈ ℝ^{n×1} is specified in terms of these estimates α̂_k(t) ∈ ℝ. If the coefficients α̂_k(t) are equal to α_k(t), then the mapping error e_ξ(t) = 0. It is then possible to calculate x̂ from the above equation, and the peaking phenomenon is reduced thanks to the properties of the manifold. The created mapping gives a lot of flexibility in the estimation process.
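A toy stand-in can make the two-layer structure concrete. The sketch below runs n + 1 = 3 identical-gain observers that differ only in their initial states and combines them with weights favouring small recent output error; the softmax weighting rule here is a simple heuristic substitute, not the manifold-based estimation law described above, and all matrices and gains are assumed for the example:

```python
import numpy as np

# First layer: three Luenberger observers differing only in initial state.
# Second layer: heuristic weights from a leaky accumulator of output error.
A = np.array([[0.9, 0.2], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])           # eig(A - L C) = 0.7, 0.6 (stable)

x = np.array([[1.0], [-1.0]])          # true state
inits = [np.array([[5.0], [5.0]]),
         np.array([[-5.0], [0.0]]),
         np.array([[0.0], [-5.0]])]    # first-layer initial states
x_hats = [x0.copy() for x0 in inits]
score = np.zeros(len(x_hats))          # leaky accumulator of output error

for k in range(100):
    y = C @ x
    for i, xh in enumerate(x_hats):
        score[i] = 0.9 * score[i] + abs(float(C @ xh - y))
        x_hats[i] = A @ xh + L @ (y - C @ xh)
    x = A @ x

alpha = np.exp(-score)
alpha /= alpha.sum()                   # second-layer weights
x_combined = sum(a * xh for a, xh in zip(alpha, x_hats))
print("combined estimate:", x_combined.ravel(), "true state:", x.ravel())
```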
It is even possible to estimate the value of x(t) in the second layer and to calculate the state x.[4]

Bounding[16] or interval observers[17][18] constitute a class of observers that provide two estimations of the state simultaneously: one of the estimations provides an upper bound on the real value of the state, whereas the second one provides a lower bound. The real value of the state is then known to be always within these two estimations. These bounds are very important in practical applications,[19][20] as they make it possible to know at each time the precision of the estimation.

Mathematically, two Luenberger observers can be used, if L is properly selected, using, for example, positive systems properties:[21] one for the upper bound x̂_U(k) (that ensures that e(k) = x̂_U(k) − x(k) converges to zero from above when k → ∞, in the absence of noise and uncertainty), and a lower bound x̂_L(k) (that ensures that e(k) = x̂_L(k) − x(k) converges to zero from below). That is, always x̂_U(k) ≥ x(k) ≥ x̂_L(k).
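A minimal discrete-time sketch of the interval idea: two Luenberger observers whose common error matrix A − LC is elementwise nonnegative, so the ordering of the upper and lower estimates is preserved under a bounded disturbance. The matrices and the bound w_bar are assumptions made for illustration:

```python
import numpy as np

# Interval observer sketch: positivity of A - L C keeps lower <= x <= upper.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.3], [0.1]])
w_bar = 0.05                                # known bound: |w_i| <= w_bar

assert np.all(A - L @ C >= 0)               # positivity condition

x = np.array([[0.3], [-0.2]])               # true state
x_up = np.array([[1.0], [1.0]])             # initial upper bound (>= x)
x_lo = np.array([[-1.0], [-1.0]])           # initial lower bound (<= x)

rng = np.random.default_rng(0)
for k in range(60):
    w = rng.uniform(-w_bar, w_bar, size=(2, 1))   # unknown disturbance
    y = C @ x
    x_up = A @ x_up + L @ (y - C @ x_up) + w_bar  # upper observer
    x_lo = A @ x_lo + L @ (y - C @ x_lo) - w_bar  # lower observer
    x = A @ x + w

print(bool(np.all(x_lo <= x) and np.all(x <= x_up)))   # True
```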
https://en.wikipedia.org/wiki/State_estimator
Deductive reasoningis the process of drawing validinferences. An inference isvalidif its conclusion followslogicallyfrom itspremises, meaning that it is impossible for the premises to be true and the conclusion to be false. For example, the inference from the premises "all men are mortal" and "Socratesis a man" to the conclusion "Socrates is mortal" is deductively valid. Anargumentissoundif it is validandall its premises are true. One approach defines deduction in terms of the intentions of the author: they have to intend for the premises to offer deductive support to the conclusion. With the help of this modification, it is possible to distinguish valid from invalid deductive reasoning: it is invalid if the author's belief about the deductive support is false, but even invalid deductive reasoning is a form of deductive reasoning. Deductive logicstudies under what conditions an argument is valid. According to thesemanticapproach, an argument is valid if there is no possibleinterpretationof the argument whereby its premises are true and its conclusion is false. Thesyntacticapproach, by contrast, focuses onrules of inference, that is, schemas of drawing a conclusion from a set of premises based only on theirlogical form. There are various rules of inference, such asmodus ponensandmodus tollens. Invalid deductive arguments, which do not follow a rule of inference, are calledformal fallacies. Rules of inference are definitory rules and contrast with strategic rules, which specify what inferences one needs to draw in order to arrive at an intended conclusion. Deductive reasoning contrasts with non-deductive orampliativereasoning. For ampliative arguments, such asinductiveorabductive arguments, the premises offer weaker support to their conclusion: they indicate that it is most likely, but they do not guarantee its truth. They make up for this drawback with their ability to provide genuinely new information (that is, information not already found in the premises), unlike deductive arguments. Cognitive psychologyinvestigates the mental processes responsible for deductive reasoning. One of its topics concerns the factors determining whether people draw valid or invalid deductive inferences. One such factor is the form of the argument: for example, people draw valid inferences more successfully for arguments of the form modus ponens than of the form modus tollens. Another factor is the content of the arguments: people are more likely to believe that an argument is valid if the claim made in its conclusion is plausible. A general finding is that people tend to perform better for realistic and concrete cases than for abstract cases. Psychological theories of deductive reasoning aim to explain these findings by providing an account of the underlying psychological processes.Mental logic theorieshold that deductive reasoning is a language-like process that happens through the manipulation of representations using rules of inference.Mental model theories, on the other hand, claim that deductive reasoning involves models of possible states of the world without the medium of language or rules of inference. According todual-process theoriesof reasoning, there are two qualitatively different cognitive systems responsible for reasoning. 
The problem of deduction is relevant to various fields and issues.Epistemologytries to understand howjustificationis transferred from thebeliefin the premises to the belief in the conclusion in the process of deductive reasoning.Probability logicstudies how the probability of the premises of an inference affects the probability of its conclusion. The controversial thesis ofdeductivismdenies that there are other correct forms of inference besides deduction.Natural deductionis a type of proof system based on simple and self-evident rules of inference. In philosophy, the geometrical method is a way of philosophizing that starts from a small set of self-evident axioms and tries to build a comprehensive logical system using deductive reasoning. Deductive reasoning is the psychological process of drawing deductiveinferences. An inference is a set ofpremisestogether with a conclusion. This psychological process starts from the premises andreasonsto a conclusion based on and supported by these premises. If the reasoning was done correctly, it results in avaliddeduction: the truth of the premises ensures the truth of the conclusion.[1][2][3][4]For example, in thesyllogisticargument "all frogs are amphibians; no cats are amphibians; therefore, no cats are frogs" the conclusion is true because its two premises are true. But even arguments with wrong premises can be deductively valid if they obey this principle, as in "all frogs are mammals; no cats are mammals; therefore, no cats are frogs". If the premises of a validargumentare true, then it is called asoundargument.[5] The relation between the premises and the conclusion of a deductive argument is usually referred to as "logical consequence". According toAlfred Tarski, logical consequence has 3 essential features: it is necessary, formal, and knowablea priori.[6][7]It is necessary in the sense that the premises of valid deductive arguments necessitate the conclusion: it is impossible for the premises to be true and the conclusion to be false, independent of any other circumstances.[6][7]Logical consequence is formal in the sense that it depends only on the form or the syntax of the premises and the conclusion. This means that the validity of a particular argument does not depend on the specific contents of this argument. If it is valid, then any argument with the same logical form is also valid, no matter how different it is on the level of its contents.[6][7]Logical consequence is knowable a priori in the sense that noempiricalknowledge of the world is necessary to determine whether a deduction is valid. So it is not necessary to engage in any form of empirical investigation.[6][7]Some logicians define deduction in terms ofpossible worlds: A deductive inference is valid if and only if, there is no possible world in which its conclusion is false while its premises are true. This means that there are no counterexamples: the conclusion is true inallsuch cases, not just inmostcases.[1] It has been argued against this and similar definitions that they fail to distinguish between valid and invalid deductive reasoning, i.e. they leave it open whether there are invalid deductive inferences and how to define them.[8][9]Some authors define deductive reasoning in psychological terms in order to avoid this problem. 
According to Mark Vorobey, whether an argument is deductive depends on the psychological state of the person making the argument: "An argument is deductive if, and only if, the author of the argument believes that the truth of the premises necessitates (guarantees) the truth of the conclusion".[8]A similar formulation holds that the speakerclaimsorintendsthat the premises offer deductive support for their conclusion.[10][11]This is sometimes categorized as aspeaker-determineddefinition of deduction since it depends also on the speaker whether the argument in question is deductive or not. Forspeakerlessdefinitions, on the other hand, only the argument itself matters independent of the speaker.[9]One advantage of this type of formulation is that it makes it possible to distinguish between good or valid and bad or invalid deductive arguments: the argument is good if the author'sbeliefconcerning the relation between the premises and the conclusion is true, otherwise it is bad.[8]One consequence of this approach is that deductive arguments cannot be identified by the law of inference they use. For example, an argument of the formmodus ponensmay be non-deductive if the author's beliefs are sufficiently confused. That brings with it an important drawback of this definition: it is difficult to apply to concrete cases since the intentions of the author are usually not explicitly stated.[8] Deductive reasoning is studied inlogic,psychology, and thecognitive sciences.[3][1]Some theorists emphasize in their definition the difference between these fields. On this view, psychology studies deductive reasoning as an empirical mental process, i.e. what happens when humans engage in reasoning.[3][1]But the descriptive question of how actual reasoning happens is different from thenormativequestion of how itshouldhappen or what constitutescorrectdeductive reasoning, which is studied by logic.[3][12][6]This is sometimes expressed by stating that, strictly speaking, logic does not study deductive reasoning but the deductive relation between premises and a conclusion known aslogical consequence. But this distinction is not always precisely observed in the academic literature.[3]One important aspect of this difference is that logic is not interested in whether the conclusion of an argument is sensible.[1]So from the premise "the printer has ink" one may draw the unhelpful conclusion "the printer has ink and the printer has ink and the printer has ink", which has little relevance from a psychological point of view. Instead, actual reasoners usually try to remove redundant or irrelevant information and make the relevant information more explicit.[1]The psychological study of deductive reasoning is also concerned with how good people are at drawing deductive inferences and with the factors determining their performance.[3][5]Deductive inferences are found both innatural languageand informal logical systems, such aspropositional logic.[1][13] Deductive arguments differ from non-deductive arguments in that the truth of their premises ensures the truth of their conclusion.[14][15][6]There are two important conceptions of what this exactly means. They are referred to as thesyntacticand thesemanticapproach.[13][6][5]According to the syntactic approach, whether an argument is deductively valid depends only on its form, syntax, or structure. 
Two arguments have the same form if they use the same logical vocabulary in the same arrangement, even if their contents differ.[13][6][5]For example, the arguments "if it rains then the street will be wet; it rains; therefore, the street will be wet" and "if the meat is not cooled then it will spoil; the meat is not cooled; therefore, it will spoil" have the same logical form: they follow themodus ponens. Their form can be expressed more abstractly as "if A then B; A; therefore B" in order to make the common syntax explicit.[5]There are various other valid logical forms orrules of inference, likemodus tollensor thedisjunction elimination. The syntactic approach then holds that an argument is deductively valid if and only if its conclusion can be deduced from its premises using a valid rule of inference.[13][6][5]One difficulty for the syntactic approach is that it is usually necessary to express the argument in aformal languagein order to assess whether it is valid. This often brings with it the difficulty of translating thenatural languageargument into a formal language, a process that comes with various problems of its own.[13]Another difficulty is due to the fact that the syntactic approach depends on the distinction between formal and non-formal features. While there is a wide agreement concerning the paradigmatic cases, there are also various controversial cases where it is not clear how this distinction is to be drawn.[16][12] The semantic approach suggests an alternative definition of deductive validity. It is based on the idea that the sentences constituting the premises and conclusions have to beinterpretedin order to determine whether the argument is valid.[13][6][5]This means that one ascribes semantic values to the expressions used in the sentences, such as the reference to an object forsingular termsor to atruth-valuefor atomic sentences. The semantic approach is also referred to as the model-theoretic approach since the branch of mathematics known asmodel theoryis often used to interpret these sentences.[13][6]Usually, many different interpretations are possible, such as whether a singular term refers to one object or to another. According to the semantic approach, an argument is deductively valid if and only if there is no possible interpretation where its premises are true and its conclusion is false.[13][6][5]Some objections to the semantic approach are based on the claim that the semantics of a language cannot be expressed in the same language, i.e. that a richermetalanguageis necessary. This would imply that the semantic approach cannot provide a universal account of deduction for language as an all-encompassing medium.[13][12] Deductive reasoning usually happens by applyingrules of inference. A rule of inference is a way or schema of drawing a conclusion from a set of premises.[17]This happens usually based only on thelogical formof the premises. A rule of inference is valid if, when applied to true premises, the conclusion cannot be false. A particular argument is valid if it follows a valid rule of inference. Deductive arguments that do not follow a valid rule of inference are calledformal fallacies: the truth of their premises does not ensure the truth of their conclusion.[18][14] In some cases, whether a rule of inference is valid depends on the logical system one is using. The dominant logical system isclassical logicand the rules of inference listed here are all valid in classical logic. 
But so-called deviant logics provide a different account of which inferences are valid. For example, the rule of inference known as double negation elimination, i.e. that if a proposition is not not true then it is also true, is accepted in classical logic but rejected in intuitionistic logic.[19][20]

Modus ponens (also known as "affirming the antecedent" or "the law of detachment") is the primary deductive rule of inference. It applies to arguments that have as first premise a conditional statement (P → Q) and as second premise the antecedent (P) of the conditional statement. It obtains the consequent (Q) of the conditional statement as its conclusion. The argument form is:

P → Q.
P.
Therefore, Q.

In this form of deductive reasoning, the consequent (Q) obtains as the conclusion from the premises of a conditional statement (P → Q) and its antecedent (P). However, the antecedent (P) cannot be similarly obtained as the conclusion from the premises of the conditional statement (P → Q) and the consequent (Q). Such an argument commits the logical fallacy of affirming the consequent. An example of an argument using modus ponens: "If it is raining, then there are clouds in the sky. It is raining. Therefore, there are clouds in the sky."

Modus tollens (also known as "the law of contrapositive") is a deductive rule of inference. It validates an argument that has as premises a conditional statement (P → Q) and the negation of the consequent (¬Q), and as conclusion the negation of the antecedent (¬P). In contrast to modus ponens, reasoning with modus tollens goes in the opposite direction to that of the conditional. The general expression for modus tollens is:

P → Q.
¬Q.
Therefore, ¬P.

An example of an argument using modus tollens: "If it is raining, then there are clouds in the sky. There are no clouds in the sky. Therefore, it is not raining."

A hypothetical syllogism is an inference that takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. The general form is:

P → Q.
Q → R.
Therefore, P → R.

Because the two premises share a subformula that does not occur in the conclusion, this resembles syllogisms in term logic, although it differs in that this common element is a proposition, whereas in Aristotelian logic the common element is a term and not a proposition. An example of an argument using a hypothetical syllogism: "If I do not wake up, then I cannot go to work. If I cannot go to work, then I will not get paid. Therefore, if I do not wake up, then I will not get paid."

Various formal fallacies have been described. They are invalid forms of deductive reasoning.[18][14] An additional aspect of them is that they appear to be valid on some occasions or on the first impression. They may thereby seduce people into accepting and committing them.[22] One type of formal fallacy is affirming the consequent, as in "if John is a bachelor, then he is male; John is male; therefore, John is a bachelor".[23] This is similar to the valid rule of inference named modus ponens, but the second premise and the conclusion are switched around, which is why it is invalid. A similar formal fallacy is denying the antecedent, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore, Othello is not male".[24][25] This is similar to the valid rule of inference called modus tollens, the difference being that the second premise and the conclusion are switched around. Other formal fallacies include affirming a disjunct, denying a conjunct, and the fallacy of the undistributed middle.
All of them have in common that the truth of their premises does not ensure the truth of their conclusion. But it may still happen by coincidence that both the premises and the conclusion of formal fallacies are true.[18][14] Rules of inferences are definitory rules: they determine whether an argument is deductively valid or not. But reasoners are usually not just interested in making any kind of valid argument. Instead, they often have a specific point or conclusion that they wish to prove or refute. So given a set of premises, they are faced with the problem of choosing the relevant rules of inference for their deduction to arrive at their intended conclusion.[13][26][27]This issue belongs to the field of strategic rules: the question of which inferences need to be drawn to support one's conclusion. The distinction between definitory and strategic rules is not exclusive to logic: it is also found in various games.[13][26][27]Inchess, for example, the definitory rules state thatbishopsmay only move diagonally while the strategic rules recommend that one should control the center and protect one'skingif one intends to win. In this sense, definitory rules determine whether one plays chess or something else whereas strategic rules determine whether one is a good or a bad chess player.[13][26]The same applies to deductive reasoning: to be an effective reasoner involves mastering both definitory and strategic rules.[13] Deductive arguments are evaluated in terms of theirvalidityandsoundness. An argument isvalidif it is impossible for itspremisesto be true while its conclusion is false. In other words, the conclusion must be true if the premises are true. An argument can be “valid” even if one or more of its premises are false. An argument issoundif it isvalidand the premises are true. It is possible to have a deductive argument that is logicallyvalidbut is notsound. Fallacious arguments often take that form. The following is an example of an argument that is “valid”, but not “sound”: The example's first premise is false – there are people who eat carrots who are not quarterbacks – but the conclusion would necessarily be true, if the premises were true. In other words, it is impossible for the premises to be true and the conclusion false. Therefore, the argument is “valid”, but not “sound”. False generalizations – such as "Everyone who eats carrots is a quarterback" – are often used to make unsound arguments. The fact that there are some people who eat carrots but are not quarterbacks proves the flaw of the argument. In this example, the first statement usescategorical reasoning, saying that all carrot-eaters are definitely quarterbacks. This theory of deductive reasoning – also known asterm logic– was developed byAristotle, but was superseded bypropositional (sentential) logicandpredicate logic.[citation needed] Deductive reasoning can be contrasted withinductive reasoning, in regards to validity and soundness. In cases of inductive reasoning, even though the premises are true and the argument is “valid”, it is possible for the conclusion to be false (determined to be false with a counterexample or other means). Deductive reasoning is usually contrasted with non-deductive orampliativereasoning.[13][28][29]The hallmark of valid deductive inferences is that it is impossible for their premises to be true and their conclusion to be false. 
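For propositional argument forms, this semantic test of validity (no assignment of truth values makes every premise true and the conclusion false) can be checked mechanically by enumerating interpretations, as in the following sketch:

```python
from itertools import product

# Brute-force check of validity for two-variable propositional forms:
# valid iff no truth assignment makes all premises true and conclusion false.
def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    for p, q in product([False, True], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False       # counterexample interpretation found
    return True

# modus ponens: P -> Q, P, therefore Q
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))          # True (valid)

# modus tollens: P -> Q, not Q, therefore not P
print(valid([lambda p, q: implies(p, q), lambda p, q: not q],
            lambda p, q: not p))      # True (valid)

# affirming the consequent: P -> Q, Q, therefore P
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))          # False (formal fallacy)
```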
In this way, the premises provide the strongest possible support to their conclusion.[13][28][29]The premises of ampliative inferences also support their conclusion. But this support is weaker: they are not necessarily truth-preserving. So even for correct ampliative arguments, it is possible that their premises are true and their conclusion is false.[11]Two important forms of ampliative reasoning areinductiveandabductive reasoning.[30]Sometimes the term "inductive reasoning" is used in a very wide sense to cover all forms of ampliative reasoning.[11]However, in a more strict usage, inductive reasoning is just one form of ampliative reasoning.[30]In the narrow sense, inductive inferences are forms of statistical generalization. They are usually based on many individualobservationsthat all show a certain pattern. These observations are then used to form a conclusion either about a yet unobserved entity or about a general law.[31][32][33]For abductive inferences, the premises support the conclusion because the conclusion is the best explanation of why the premises are true.[30][34] The support ampliative arguments provide for their conclusion comes in degrees: some ampliative arguments are stronger than others.[11][35][30]This is often explained in terms ofprobability: the premises make it more likely that the conclusion is true.[13][28][29]Strong ampliative arguments make their conclusion very likely, but not absolutely certain. An example of ampliative reasoning is the inference from the premise "every raven in a random sample of 3200 ravens is black" to the conclusion "all ravens are black": the extensive random sample makes the conclusion very likely, but it does not exclude that there are rare exceptions.[35]In this sense, ampliative reasoning is defeasible: it may become necessary to retract an earlier conclusion upon receiving new related information.[12][30]Ampliative reasoning is very common in everyday discourse and thesciences.[13][36] An important drawback of deductive reasoning is that it does not lead to genuinely new information.[5]This means that the conclusion only repeats information already found in the premises. Ampliative reasoning, on the other hand, goes beyond the premises by arriving at genuinely new information.[13][28][29]One difficulty for this characterization is that it makes deductive reasoning appear useless: if deduction is uninformative, it is not clear why people would engage in it and study it.[13][37]It has been suggested that this problem can be solved by distinguishing between surface and depth information. On this view, deductive reasoning is uninformative on the depth level, in contrast to ampliative reasoning. But it may still be valuable on the surface level by presenting the information in the premises in a new and sometimes surprising way.[13][5] A popular misconception of the relation between deduction and induction identifies their difference on the level of particular and general claims.[2][9][38]On this view, deductive inferences start from general premises and draw particular conclusions, while inductive inferences start from particular premises and draw general conclusions. This idea is often motivated by seeing deduction and induction as two inverse processes that complement each other: deduction istop-downwhile induction isbottom-up. 
But this is a misconception that does not reflect how valid deduction is defined in the field oflogic: a deduction is valid if it is impossible for its premises to be true while its conclusion is false, independent of whether the premises or the conclusion are particular or general.[2][9][1][5][3]Because of this, some deductive inferences have a general conclusion and some also have particular premises.[2] Cognitive psychologystudies the psychological processes responsible for deductive reasoning.[3][5]It is concerned, among other things, with how good people are at drawing valid deductive inferences. This includes the study of the factors affecting their performance, their tendency to commitfallacies, and the underlyingbiasesinvolved.[3][5]A notable finding in this field is that the type of deductive inference has a significant impact on whether the correct conclusion is drawn.[3][5][39][40]In a meta-analysis of 65 studies, for example, 97% of the subjects evaluatedmodus ponensinferences correctly, while the success rate formodus tollenswas only 72%. On the other hand, even some fallacies likeaffirming the consequentordenying the antecedentwere regarded as valid arguments by the majority of the subjects.[3]An important factor for these mistakes is whether the conclusion seems initially plausible: the more believable the conclusion is, the higher the chance that a subject will mistake a fallacy for a valid argument.[3][5] An important bias is thematching bias, which is often illustrated using theWason selection task.[5][3][41][42]In an often-cited experiment byPeter Wason, 4 cards are presented to the participant. In one case, the visible sides show the symbols D, K, 3, and 7 on the different cards. The participant is told that every card has a letter on one side and a number on the other side, and that "[e]very card which has a D on one side has a 3 on the other side". Their task is to identify which cards need to be turned around in order to confirm or refute this conditional claim. The correct answer, only given by about 10%, is the cards D and 7. Many select card 3 instead, even though the conditional claim does not involve any requirements on what symbols can be found on the opposite side of card 3.[3][5]But this result can be drastically changed if different symbols are used: the visible sides show "drinking a beer", "drinking a coke", "16 years of age", and "22 years of age" and the participants are asked to evaluate the claim "[i]f a person is drinking beer, then the person must be over 19 years of age". In this case, 74% of the participants identified correctly that the cards "drinking a beer" and "16 years of age" have to be turned around.[3][5]These findings suggest that the deductive reasoning ability is heavily influenced by the content of the involved claims and not just by the abstract logical form of the task: the more realistic and concrete the cases are, the better the subjects tend to perform.[3][5] Another bias is called the "negative conclusion bias", which happens when one of the premises has the form of a negativematerial conditional,[5][43][44]as in "If the card does not have an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. Therefore, the card has an A on the left". The increased tendency to misjudge the validity of this type of argument is not present for positive material conditionals, as in "If the card has an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. 
Therefore, the card does not have an A on the left".[5] Various psychological theories of deductive reasoning have been proposed. These theories aim to explain how deductive reasoning works in relation to the underlying psychological processes responsible. They are often used to explain the empirical findings, such as why human reasoners are more susceptible to some types of fallacies than to others.[3][1][45] An important distinction is betweenmental logic theories, sometimes also referred to asrule theories, andmental model theories.Mental logic theoriessee deductive reasoning as alanguage-like process that happens through the manipulation of representations.[3][1][46][45]This is done by applying syntactic rules of inference in a way very similar to how systems ofnatural deductiontransform their premises to arrive at a conclusion.[45]On this view, some deductions are simpler than others since they involve fewer inferential steps.[3]This idea can be used, for example, to explain why humans have more difficulties with some deductions, like themodus tollens, than with others, like themodus ponens: because the more error-prone forms do not have a native rule of inference but need to be calculated by combining several inferential steps with other rules of inference. In such cases, the additional cognitive labor makes the inferences more open to error.[3] Mental model theories, on the other hand, hold that deductive reasoning involves models ormental representationsof possible states of the world without the medium of language or rules of inference.[3][1][45]In order to assess whether a deductive inference is valid, the reasoner mentally constructs models that are compatible with the premises of the inference. The conclusion is then tested by looking at these models and trying to find a counterexample in which the conclusion is false. The inference is valid if no such counterexample can be found.[3][1][45]In order to reduce cognitive labor, only such models are represented in which the premises are true. Because of this, the evaluation of some forms of inference only requires the construction of very few models while for others, many different models are necessary. In the latter case, the additional cognitive labor required makes deductive reasoning more error-prone, thereby explaining the increased rate of error observed.[3][1]This theory can also explain why some errors depend on the content rather than the form of the argument. For example, when the conclusion of an argument is very plausible, the subjects may lack the motivation to search for counterexamples among the constructed models.[3] Both mental logic theories and mental model theories assume that there is one general-purpose reasoning mechanism that applies to all forms of deductive reasoning.[3][46][47]But there are also alternative accounts that posit various different special-purpose reasoning mechanisms for different contents and contexts. In this sense, it has been claimed that humans possess a special mechanism for permissions and obligations, specifically for detecting cheating in social exchanges. This can be used to explain why humans are often more successful in drawing valid inferences if the contents involve human behavior in relation to social norms.[3]Another example is the so-calleddual-process theory.[5][3]This theory posits that there are two distinct cognitive systems responsible for reasoning. Their interrelation can be used to explain commonly observed biases in deductive reasoning. 
System 1 is the older system in terms of evolution. It is based on associative learning and happens fast and automatically without demanding many cognitive resources.[5][3]System 2, on the other hand, is of more recent evolutionary origin. It is slow and cognitively demanding, but also more flexible and under deliberate control.[5][3]The dual-process theory posits that system 1 is the default system guiding most of our everyday reasoning in a pragmatic way. But for particularly difficult problems on the logical level, system 2 is employed. System 2 is mostly responsible for deductive reasoning.[5][3] Theabilityof deductive reasoning is an important aspect ofintelligenceand manytests of intelligenceinclude problems that call for deductive inferences.[1]Because of this relation to intelligence, deduction is highly relevant to psychology and the cognitive sciences.[5]But the subject of deductive reasoning is also pertinent to thecomputer sciences, for example, in the creation ofartificial intelligence.[1] Deductive reasoning plays an important role inepistemology. Epistemology is concerned with the question ofjustification, i.e. to point out which beliefs are justified and why.[48][49]Deductive inferences are able to transfer the justification of the premises onto the conclusion.[3]So while logic is interested in the truth-preserving nature of deduction, epistemology is interested in the justification-preserving nature of deduction. There are different theories trying to explain why deductive reasoning is justification-preserving.[3]According toreliabilism, this is the case because deductions are truth-preserving: they are reliable processes that ensure a true conclusion given the premises are true.[3][50][51]Some theorists hold that the thinker has to have explicit awareness of the truth-preserving nature of the inference for the justification to be transferred from the premises to the conclusion. One consequence of such a view is that, for young children, this deductive transference does not take place since they lack this specific awareness.[3] Probability logicis interested in how the probability of the premises of an argument affects the probability of its conclusion. It differs from classical logic, which assumes that propositions are either true or false but does not take into consideration the probability or certainty that a proposition is true or false.[52][53] Aristotle, aGreek philosopher, started documenting deductive reasoning in the 4th century BC.[54]René Descartes, in his bookDiscourse on Method, refined the idea for theScientific Revolution. Developing four rules to follow for proving an idea deductively, Descartes laid the foundation for the deductive portion of thescientific method. Descartes' background in geometry and mathematics influenced his ideas on the truth and reasoning, causing him to develop a system of general reasoning now used for most mathematical reasoning. Similar to postulates, Descartes believed that ideas could be self-evident and that reasoning alone must prove that observations are reliable. These ideas also lay the foundations for the ideas ofrationalism.[55] Deductivism is a philosophical position that gives primacy to deductive reasoning or arguments over their non-deductive counterparts.[56][57]It is often understood as the evaluative claim that only deductive inferences aregoodorcorrectinferences. 
This theory would have wide-reaching consequences for various fields since it implies that the rules of deduction are "the only acceptable standard of evidence".[56] This way, the rationality or correctness of the different forms of inductive reasoning is denied.[57][58] Some forms of deductivism express this in terms of degrees of reasonableness or probability. Inductive inferences are usually seen as providing a certain degree of support for their conclusion: they make it more likely that their conclusion is true. Deductivism states that such inferences are not rational: the premises either ensure their conclusion, as in deductive reasoning, or they do not provide any support at all.[59]

One motivation for deductivism is the problem of induction introduced by David Hume. It consists in the challenge of explaining how or whether inductive inferences based on past experiences support conclusions about future events.[57][60][59] For example, a chicken comes to expect, based on all its past experiences, that the person entering its coop is going to feed it, until one day the person "at last wrings its neck instead".[61] According to Karl Popper's falsificationism, deductive reasoning alone is sufficient. This is due to its truth-preserving nature: a theory can be falsified if one of its deductive consequences is false.[62][63] So while inductive reasoning does not offer positive evidence for a theory, the theory still remains a viable competitor until falsified by empirical observation. In this sense, deduction alone is sufficient for discriminating between competing hypotheses about what is the case.[57] Hypothetico-deductivism is a closely related scientific method, according to which science progresses by formulating hypotheses and then aims to falsify them by trying to make observations that run counter to their deductive consequences.[64][65]

The term "natural deduction" refers to a class of proof systems based on self-evident rules of inference.[66][67] The first systems of natural deduction were developed by Gerhard Gentzen and Stanislaw Jaskowski in the 1930s. The core motivation was to give a simple presentation of deductive reasoning that closely mirrors how reasoning actually takes place.[68] In this sense, natural deduction stands in contrast to other less intuitive proof systems, such as Hilbert-style deductive systems, which employ axiom schemes to express logical truths.[66] Natural deduction, on the other hand, avoids axiom schemes by including many different rules of inference that can be used to formulate proofs. These rules of inference express how logical constants behave. They are often divided into introduction rules and elimination rules. Introduction rules specify under which conditions a logical constant may be introduced into a new sentence of the proof.[66][67] For example, the introduction rule for the logical constant "∧" (and) is "A, B / (A ∧ B)". It expresses that, given the premises "A" and "B" individually, one may draw the conclusion "A ∧ B" and thereby include it in one's proof. This way, the symbol "∧" is introduced into the proof. The removal of this symbol is governed by other rules of inference, such as the elimination rule "(A ∧ B) / A", which states that one may deduce the sentence "A" from the premise "(A ∧ B)".
Similar introduction and elimination rules are given for other logical constants, such as the propositional operator "¬", the propositional connectives "∨" and "→", and the quantifiers "∃" and "∀".[66][67]

The focus on rules of inference instead of axiom schemes is an important feature of natural deduction.[66][67] But there is no general agreement on how natural deduction is to be defined. Some theorists hold that all proof systems with this feature are forms of natural deduction. This would include various forms of sequent calculi[a] or tableau calculi. But other theorists use the term in a more narrow sense, for example, to refer to the proof systems developed by Gentzen and Jaskowski. Because of its simplicity, natural deduction is often used for teaching logic to students.[66]

The geometrical method is a method of philosophy based on deductive reasoning. It starts from a small set of self-evident axioms and tries to build a comprehensive logical system based only on deductive inferences from these first axioms.[69] It was initially formulated by Baruch Spinoza and came to prominence in various rationalist philosophical systems in the modern era.[70] It gets its name from the forms of mathematical demonstration found in traditional geometry, which are usually based on axioms, definitions, and inferred theorems.[71][72] An important motivation of the geometrical method is to repudiate philosophical skepticism by grounding one's philosophical system on absolutely certain axioms. Deductive reasoning is central to this endeavor because of its necessarily truth-preserving nature. This way, the certainty initially invested only in the axioms is transferred to all parts of the philosophical system.[69]

One recurrent criticism of philosophical systems built using the geometrical method is that their initial axioms are not as self-evident or certain as their defenders proclaim.[69] This problem lies beyond the deductive reasoning itself, which only ensures that the conclusion is true if the premises are true, but not that the premises themselves are true. For example, Spinoza's philosophical system has been criticized this way based on objections raised against the causal axiom, i.e. that "the knowledge of an effect depends on and involves knowledge of its cause".[73] A different criticism targets not the premises but the reasoning itself, which may at times implicitly assume premises that are themselves not self-evident.[69]
https://en.wikipedia.org/wiki/Deductive_reasoning
Interim Standard 95 (IS-95) was the first digital cellular technology that used code-division multiple access (CDMA). It was developed by Qualcomm and later adopted as a standard by the Telecommunications Industry Association in the TIA/EIA/IS-95 release published in 1995. The proprietary name for IS-95 is cdmaOne.

It is a 2G mobile telecommunications standard that uses CDMA, a multiple access scheme for digital radio, to send voice, data and signaling data (such as a dialed telephone number) between mobile telephones and cell sites. CDMA transmits streams of bits (PN codes). CDMA permits several radios to share the same frequencies. Unlike time-division multiple access (TDMA), a competing system used in 2G GSM, all radios can be active all the time, because network capacity does not directly limit the number of active radios. Since larger numbers of phones can be served by smaller numbers of cell sites, CDMA-based standards have a significant economic advantage over TDMA-based standards,[citation needed] or the oldest cellular standards that used frequency-division multiplexing.

In North America, the technology competed with Digital AMPS (IS-136), a TDMA-based standard, as well as with the TDMA-based GSM. It was supplanted by IS-2000 (CDMA2000), a later CDMA-based standard.

cdmaOne's technical history is reflective of both its birth as a Qualcomm internal project, and the world of then-unproven competing digital cellular standards under which it was developed. The term IS-95 generically applies to the earlier set of protocol revisions, namely P_REVs one through five.

P_REV=1 was developed under an ANSI standards process with documentation reference J-STD-008. J-STD-008, published in 1995, was only defined for the then-new North American PCS band (Band Class 1, 1900 MHz). The term IS-95 properly refers to P_REV=1, developed under the Telecommunications Industry Association (TIA) standards process, for the North American cellular band (Band Class 0, 800 MHz) under roughly the same time frame. IS-95 offered interoperation (including handoff) with the analog cellular network. For digital operation, IS-95 and J-STD-008 have most technical details in common. The immature style and structure of both documents are highly reflective of the "standardizing" of Qualcomm's internal project.

P_REV=2 is termed Interim Standard 95A (IS-95A). IS-95A was developed for Band Class 0 only, as an incremental improvement over IS-95 in the TIA standards process.

P_REV=3 is termed Technical Services Bulletin 74 (TSB-74). TSB-74 was the next incremental improvement over IS-95A in the TIA standards process.

P_REV=4 is termed Interim Standard 95B (IS-95B) Phase I, and P_REV=5 is termed Interim Standard 95B (IS-95B) Phase II. The IS-95B standards track provided for a merging of the TIA and ANSI standards tracks under the TIA, and was the first document that provided for interoperation of IS-95 mobile handsets in both band classes (dual-band operation). P_REV=4 was by far the most popular variant of IS-95, with P_REV=5 only seeing minimal uptake in South Korea.

P_REV=6 and beyond fall under the CDMA2000 umbrella. Besides technical improvements, the IS-2000 documents are much more mature in terms of layout and content. They also provide backwards compatibility to IS-95.

The IS-95 standards describe an air interface,[1] a set of protocols used between mobile units and the network.
IS-95 is widely described as a three-layer stack, where L1 corresponds to the physical (PHY) layer, L2 refers to the Media Access Control (MAC) and Link-Access Control (LAC) sublayers, and L3 to the call-processing state machine. IS-95 defines the transmission of signals in both the forward (network-to-mobile) and reverse (mobile-to-network) directions.

In the forward direction, radio signals are transmitted by base stations (BTSs). Every BTS is synchronized with a GPS receiver so transmissions are tightly controlled in time. All forward transmissions are QPSK with a chip rate of 1,228,800 per second. Each signal is spread with a Walsh code of length 64 and a pseudo-random noise code (PN code) of length 2^15, yielding a PN roll-over period of 80/3 ms.

For the reverse direction, radio signals are transmitted by the mobile. Reverse link transmissions are OQPSK in order to operate in the optimal range of the mobile's power amplifier. Like the forward link, the chip rate is 1,228,800 per second and signals are spread with Walsh codes and the pseudo-random noise code, which is also known as a Short Code.

Every BTS dedicates a significant amount of output power to a pilot channel, which is an unmodulated PN sequence (in other words, spread with Walsh code 0). Each BTS sector in the network is assigned a PN offset in steps of 64 chips. There is no data carried on the forward pilot. With its strong autocorrelation function, the forward pilot allows mobiles to determine system timing and distinguish different BTSs for handoff. When a mobile is "searching", it is attempting to find pilot signals on the network by tuning to particular radio frequencies and performing a cross-correlation across all possible PN phases. A strong correlation peak result indicates the proximity of a BTS.

Other forward channels, selected by their Walsh code, carry data from the network to the mobiles. Data consists of network signaling and user traffic. Generally, data to be transmitted is divided into frames of bits. A frame of bits is passed through a convolutional encoder, adding forward error correction redundancy, generating a frame of symbols. These symbols are then spread with the Walsh and PN sequences and transmitted.

BTSs transmit a sync channel spread with Walsh code 32. The sync channel frame is 80/3 ms long, and its frame boundary is aligned to the pilot. The sync channel continually transmits a single message, the Sync Channel Message, which has a length and content dependent on the P_REV. The message is transmitted at 32 bits per frame, encoded to 128 symbols, yielding a rate of 1200 bit/s. The Sync Channel Message contains information about the network, including the PN offset used by the BTS sector. Once a mobile has found a strong pilot channel, it listens to the sync channel and decodes a Sync Channel Message to develop a highly accurate synchronization to system time. At this point the mobile knows whether it is roaming, and that it is "in service".

BTSs transmit at least one, and as many as seven, paging channels starting with Walsh code 1. The paging channel frame time is 20 ms, and is time-aligned to the IS-95 system (i.e. GPS) 2-second roll-over. There are two possible rates used on the paging channel: 4800 bit/s or 9600 bit/s. Both rates are encoded to 19200 symbols per second. The paging channel contains signaling messages transmitted from the network to all idle mobiles.
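The Walsh-code channelization described above can be illustrated in a few lines: Walsh codes are rows of a Hadamard matrix, and their mutual orthogonality is what lets a receiver despread one forward channel while the others cancel. A simplified, noise-free sketch with one data symbol per channel and ±1 chips:

```python
import numpy as np

# Length-64 Walsh codes as rows of a Hadamard matrix (Sylvester construction).
def hadamard(n):
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

W = hadamard(64)                       # 64 Walsh codes of length 64

s_sync, s_page = -1, +1                # symbols on Walsh codes 32 and 1
composite = s_sync * W[32] + s_page * W[1]   # superposed forward signal

# despread: correlate with the wanted code and normalize by the length
print(np.dot(composite, W[32]) / 64)   # -1.0, the sync symbol
print(np.dot(composite, W[1]) / 64)    # +1.0, the paging symbol
print(np.dot(composite, W[5]) / 64)    # 0.0, an unused code sees nothing
```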
On the paging channel, a set of messages communicates detailed network overhead information to the mobiles, circulating this information while the paging channel is free. The paging channel also carries higher-priority messages dedicated to setting up calls to and from the mobiles. When a mobile is idle, it is mostly listening to a paging channel. Once a mobile has parsed all the network overhead information, it registers with the network, then optionally enters slotted mode. Both of these processes are described in more detail below.

The Walsh space not dedicated to broadcast channels on the BTS sector is available for traffic channels. These channels carry the individual voice and data calls supported by IS-95. Like the paging channel, traffic channels have a frame time of 20 ms. Since voice and user data are intermittent, the traffic channels support variable-rate operation. Every 20 ms frame may be transmitted at a different rate, as determined by the service in use (voice or data). P_REV=1 and P_REV=2 supported rate set 1, providing a rate of 1200, 2400, 4800, or 9600 bit/s. P_REV=3 and beyond also provided rate set 2, yielding rates of 1800, 3600, 7200, or 14400 bit/s.

For voice calls, the traffic channel carries frames of vocoder data. A number of different vocoders are defined under IS-95; the earlier ones were limited to rate set 1 and were responsible for some user complaints of poor voice quality. More sophisticated vocoders, taking advantage of modern DSPs and rate set 2, remedied the voice quality situation and were still in wide use as of 2005. The mobile receiving a variable-rate traffic frame does not know the rate at which the frame was transmitted. Typically, the frame is decoded at each possible rate, and the correct result is chosen using the quality metrics of the Viterbi decoder.

Traffic channels may also carry circuit-switched data calls in IS-95. The variable-rate traffic frames are generated using the IS-95 Radio Link Protocol (RLP). RLP provides a mechanism to improve the performance of the wireless link for data. Where voice calls might tolerate the dropping of occasional 20 ms frames, a data call would have unacceptable performance without RLP. Under IS-95B P_REV=5, it was possible for a user to use up to seven supplemental "code" (traffic) channels simultaneously to increase the throughput of a data call. Very few mobiles or networks ever provided this feature, which could in theory offer 115,200 bit/s to a user. After convolutional coding and repetition, symbols are sent to a 20 ms block interleaver, which is a 24 by 16 array.

IS-95 and its use of CDMA techniques, like any other communications system, have their throughput limited according to Shannon's theorem: capacity improves with SNR and bandwidth. IS-95 has a fixed bandwidth, but fares well in the digital world because it takes active steps to improve SNR. With CDMA, signals that are not correlated with the channel of interest (such as other PN offsets from adjacent cellular base stations) appear as noise, and signals carried on other Walsh codes (that are properly time aligned) are essentially removed in the de-spreading process. The variable-rate nature of the traffic channels allows lower-rate frames to be transmitted at lower power, causing less noise for the other signals that must still be correctly received. These factors produce an inherently lower noise level than other cellular technologies, allowing the IS-95 network to squeeze more users into the same radio spectrum.
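As a rough illustration of the Shannon bound just mentioned, the sketch below evaluates C = B log2(1 + SNR) for IS-95's 1.2288 MHz spreading bandwidth at a few illustrative signal-to-noise ratios. Actual IS-95 capacity is interference-limited and depends on coding, vocoder rates, and power control, none of which this bound captures:

    import math

    # Shannon capacity C = B * log2(1 + SNR) at IS-95's spreading bandwidth.
    # The SNR values are illustrative, not operating points of the standard.
    B = 1.2288e6  # Hz

    for snr_db in (-10, 0, 10):
        snr = 10 ** (snr_db / 10)
        c = B * math.log2(1 + snr)
        print(f"SNR {snr_db:+3d} dB -> capacity ~ {c / 1e6:.2f} Mbit/s")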
Active (slow) power control is also used on the forward traffic channels: during a call, the mobile sends signaling messages to the network indicating the quality of the signal. The network controls the transmitted power of the traffic channel to keep the signal quality just good enough, thereby keeping the noise level seen by all other users to a minimum. The receiver also uses the techniques of the rake receiver to improve SNR, as well as to perform soft handoff.

Once a call is established, a mobile is restricted to using the traffic channel. A frame format is defined in the MAC for the traffic channel that allows the regular voice (vocoder) or data (RLP) bits to be multiplexed with signaling message fragments. The signaling message fragments are pieced together in the LAC, where complete signaling messages are passed on to Layer 3.
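The slow power-control loop described above amounts to a one-bit feedback controller: measured quality is compared against a target and the power is stepped up or down. The following sketch is a caricature under invented assumptions; the step size, target, and channel model are illustrative, not values from the standard:

    import random

    # Much-simplified closed-loop power control: one up/down step per frame.
    target_snr_db = 7.0
    tx_power_db = 0.0
    step_db = 0.5

    for frame in range(8):
        path_loss_db = random.uniform(5.0, 9.0)        # toy fading channel
        measured_snr = tx_power_db + 10.0 - path_loss_db
        # The mobile's quality report drives a one-bit up/down decision.
        tx_power_db += step_db if measured_snr < target_snr_db else -step_db
        print(f"frame {frame}: SNR {measured_snr:5.2f} dB "
              f"-> power {tx_power_db:+.2f} dB")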
https://en.wikipedia.org/wiki/CdmaOne
In the mathematical discipline of matrix theory, a Jordan matrix, named after Camille Jordan, is a block diagonal matrix over a ring R (whose identities are the zero 0 and one 1), where each block along the diagonal, called a Jordan block, has the following form:

{\displaystyle {\begin{bmatrix}\lambda &1&0&\cdots &0\\0&\lambda &1&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\lambda &1\\0&0&0&0&\lambda \end{bmatrix}}.}

Every Jordan block is specified by its dimension n and its eigenvalue λ ∈ R, and is denoted as J_{λ,n}. It is an n × n matrix of zeroes everywhere except for the diagonal, which is filled with λ, and for the superdiagonal, which is composed of ones. Any block diagonal matrix whose blocks are Jordan blocks is called a Jordan matrix. This (n1 + ⋯ + nr) × (n1 + ⋯ + nr) square matrix, consisting of r diagonal blocks, can be compactly indicated as J_{λ1,n1} ⊕ ⋯ ⊕ J_{λr,nr} or diag(J_{λ1,n1}, …, J_{λr,nr}), where the i-th Jordan block is J_{λi,ni}. For example, the matrix

{\displaystyle J=\left[{\begin{array}{ccc|cc|cc|ccc}0&1&0&0&0&0&0&0&0&0\\0&0&1&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0\\\hline 0&0&0&i&1&0&0&0&0&0\\0&0&0&0&i&0&0&0&0&0\\\hline 0&0&0&0&0&i&1&0&0&0\\0&0&0&0&0&0&i&0&0&0\\\hline 0&0&0&0&0&0&0&7&1&0\\0&0&0&0&0&0&0&0&7&1\\0&0&0&0&0&0&0&0&0&7\end{array}}\right]}

is a 10 × 10 Jordan matrix with a 3 × 3 block with eigenvalue 0, two 2 × 2 blocks with eigenvalue the imaginary unit i, and a 3 × 3 block with eigenvalue 7. Its Jordan-block structure is written as either J_{0,3} ⊕ J_{i,2} ⊕ J_{i,2} ⊕ J_{7,3} or diag(J_{0,3}, J_{i,2}, J_{i,2}, J_{7,3}).

Any n × n square matrix A whose elements are in an algebraically closed field K is similar to a Jordan matrix J, also in M_n(K), which is unique up to a permutation of its diagonal blocks. J is called the Jordan normal form of A and corresponds to a generalization of the diagonalization procedure.[1][2][3] A diagonalizable matrix is similar, in fact, to a special case of Jordan matrix: the matrix whose blocks are all 1 × 1.[4][5][6]

More generally, given a Jordan matrix J = J_{λ1,m1} ⊕ J_{λ2,m2} ⊕ ⋯ ⊕ J_{λN,mN}, that is, whose k-th diagonal block, 1 ≤ k ≤ N, is the Jordan block J_{λk,mk} and whose diagonal elements λk may not all be distinct, the geometric multiplicity of λ ∈ K for the matrix J, indicated as gmul_J λ, corresponds to the number of Jordan blocks whose eigenvalue is λ. The index of an eigenvalue λ for J, indicated as idx_J λ, is defined as the dimension of the largest Jordan block associated to that eigenvalue. The same goes for all the matrices A similar to J, so idx_A λ can be defined accordingly with respect to the Jordan normal form of A for any of its eigenvalues λ ∈ spec A.
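As a concrete check of this notation, here is a small numpy sketch (an illustration, not part of the original article) that assembles the 10 × 10 example above as a direct sum of Jordan blocks:

    import numpy as np

    def jordan_block(lam, n):
        """J_{lam,n}: lam on the diagonal, ones on the superdiagonal."""
        return lam * np.eye(n, dtype=complex) + np.eye(n, k=1, dtype=complex)

    def direct_sum(*blocks):
        """Block-diagonal (direct sum) assembly of the given square blocks."""
        size = sum(b.shape[0] for b in blocks)
        out = np.zeros((size, size), dtype=complex)
        pos = 0
        for b in blocks:
            k = b.shape[0]
            out[pos:pos + k, pos:pos + k] = b
            pos += k
        return out

    # The 10 x 10 example: J_{0,3} (+) J_{i,2} (+) J_{i,2} (+) J_{7,3}
    J = direct_sum(jordan_block(0, 3), jordan_block(1j, 2),
                   jordan_block(1j, 2), jordan_block(7, 3))
    print(J.shape)       # (10, 10)
    print(np.diag(J))    # eigenvalues 0,0,0,i,i,i,i,7,7,7 along the diagonal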
For a matrix A similar to a Jordan matrix J, one can check that the index of λ for A is equal to its multiplicity as a root of the minimal polynomial of A (whereas, by definition, its algebraic multiplicity for A, mul_A λ, is its multiplicity as a root of the characteristic polynomial of A; that is, det(A − xI) ∈ K[x]). An equivalent necessary and sufficient condition for A to be diagonalizable in K is that all of its eigenvalues have index equal to 1; that is, its minimal polynomial has only simple roots.

Note that knowing a matrix's spectrum with all of its algebraic/geometric multiplicities and indexes does not always allow for the computation of its Jordan normal form (this may be a sufficient condition only for spectrally simple, usually low-dimensional matrices). Indeed, determining the Jordan normal form is generally a computationally challenging task. From the vector space point of view, the Jordan normal form is equivalent to finding a decomposition of the domain into a direct sum of the subspaces represented by the Jordan blocks, for which the associated generalized eigenvectors form a basis.

Let A ∈ M_n(C) (that is, an n × n complex matrix) and C ∈ GL_n(C) be the change-of-basis matrix to the Jordan normal form of A; that is, A = C⁻¹JC. Now let f(z) be a holomorphic function on an open set Ω such that spec A ⊂ Ω ⊆ C; that is, the spectrum of the matrix is contained inside the domain of holomorphy of f. Let

{\displaystyle f(z)=\sum _{h=0}^{\infty }a_{h}(z-z_{0})^{h}}

be the power series expansion of f around z0 ∈ Ω ∖ spec A, which will hereinafter be supposed to be 0 for simplicity's sake. The matrix f(A) is then defined via the following formal power series

{\displaystyle f(A)=\sum _{h=0}^{\infty }a_{h}A^{h}}

and is absolutely convergent with respect to the Euclidean norm of M_n(C). To put it another way, f(A) converges absolutely for every square matrix whose spectral radius is less than the radius of convergence of f around 0, and it converges uniformly on any compact subset of M_n(C) satisfying this property in the matrix Lie group topology.

The Jordan normal form allows the computation of functions of matrices without explicitly computing an infinite series, which is one of the main achievements of Jordan matrices. Using the facts that the k-th power (k ∈ N0) of a diagonal block matrix is the diagonal block matrix whose blocks are the k-th powers of the respective blocks, that is,

{\displaystyle \left(A_{1}\oplus A_{2}\oplus A_{3}\oplus \cdots \right)^{k}=A_{1}^{k}\oplus A_{2}^{k}\oplus A_{3}^{k}\oplus \cdots ,}

and that A^k = C⁻¹J^kC, the above matrix power series becomes

{\displaystyle f(A)=C^{-1}f(J)C=C^{-1}\left(\bigoplus _{k=1}^{N}f\left(J_{\lambda _{k},m_{k}}\right)\right)C}

where the last series need not be computed explicitly via power series of every Jordan block. In fact, if λ ∈ Ω, any holomorphic function of a Jordan block f(J_{λ,n}) = f(λI + Z) has a finite power series around λI because Z^n = 0.
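The relationship between block sizes and the index of an eigenvalue can be experimented with in sympy. In the sketch below the matrix is chosen purely for illustration: the eigenvalue 5 has algebraic multiplicity 3, geometric multiplicity 2 (two blocks), and index 2 (largest block), matching the multiplicity of (x − 5) in the minimal polynomial:

    from sympy import Matrix, eye

    # Illustrative matrix: Jordan blocks J_{5,2}, J_{5,1}, J_{2,1}.
    A = Matrix([[5, 1, 0, 0],
                [0, 5, 0, 0],
                [0, 0, 5, 0],
                [0, 0, 0, 2]])

    P, J = A.jordan_form()   # A = P * J * P**-1
    print(J)                 # shows the block structure

    # The index of 5 equals the size of its largest block (here 2): powers
    # of (A - 5I) keep losing rank until the second power, then stabilize.
    N = A - 5 * eye(4)
    print((N ** 1).rank(), (N ** 2).rank(), (N ** 3).rank())   # 2, 1, 1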
Here Z = J_{λ,n} − λI is the nilpotent part of the Jordan block, and Z^k has all 0's except 1's along the k-th superdiagonal. Thus f(J_{λ,n}) is the following upper triangular matrix:

{\displaystyle f(J_{\lambda ,n})=\sum _{k=0}^{n-1}{\frac {f^{(k)}(\lambda )Z^{k}}{k!}}={\begin{bmatrix}f(\lambda )&f^{\prime }(\lambda )&{\frac {f^{\prime \prime }(\lambda )}{2}}&\cdots &{\frac {f^{(n-2)}(\lambda )}{(n-2)!}}&{\frac {f^{(n-1)}(\lambda )}{(n-1)!}}\\0&f(\lambda )&f^{\prime }(\lambda )&\cdots &{\frac {f^{(n-3)}(\lambda )}{(n-3)!}}&{\frac {f^{(n-2)}(\lambda )}{(n-2)!}}\\0&0&f(\lambda )&\cdots &{\frac {f^{(n-4)}(\lambda )}{(n-4)!}}&{\frac {f^{(n-3)}(\lambda )}{(n-3)!}}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &f(\lambda )&f^{\prime }(\lambda )\\0&0&0&\cdots &0&f(\lambda )\\\end{bmatrix}}.}

As a consequence of this, the computation of any function of a matrix is straightforward whenever its Jordan normal form and its change-of-basis matrix are known. For example, using f(z) = 1/z, the inverse of J_{λ,n} is:

{\displaystyle J_{\lambda ,n}^{-1}=\sum _{k=0}^{n-1}{\frac {(-Z)^{k}}{\lambda ^{k+1}}}={\begin{bmatrix}\lambda ^{-1}&-\lambda ^{-2}&\lambda ^{-3}&\cdots &-(-\lambda )^{1-n}&-(-\lambda )^{-n}\\0&\lambda ^{-1}&-\lambda ^{-2}&\cdots &-(-\lambda )^{2-n}&-(-\lambda )^{1-n}\\0&0&\lambda ^{-1}&\cdots &-(-\lambda )^{3-n}&-(-\lambda )^{2-n}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &\lambda ^{-1}&-\lambda ^{-2}\\0&0&0&\cdots &0&\lambda ^{-1}\\\end{bmatrix}}.}

Also, spec f(A) = f(spec A); that is, every eigenvalue λ ∈ spec A corresponds to the eigenvalue f(λ) ∈ spec f(A), but it has, in general, different algebraic multiplicity, geometric multiplicity and index. However, the algebraic multiplicity may be computed as follows:

{\displaystyle {\text{mul}}_{f(A)}f(\lambda )=\sum _{\mu \in {\text{spec}}A\cap f^{-1}(f(\lambda ))}~{\text{mul}}_{A}\mu .}

The function f(T) of a linear transformation T between vector spaces can be defined in a similar way according to the holomorphic functional calculus, where Banach space and Riemann surface theories play a fundamental role. In the case of finite-dimensional spaces, both theories perfectly match.

Now suppose a (complex) dynamical system is simply defined by the equation

{\displaystyle {\begin{aligned}{\dot {\mathbf {z} }}(t)&=A(\mathbf {c} )\mathbf {z} (t),\\\mathbf {z} (0)&=\mathbf {z} _{0}\in \mathbb {C} ^{n},\end{aligned}}}

where z : R₊ → 𝓡 is the (n-dimensional) curve parametrization of an orbit on the Riemann surface 𝓡 of the dynamical system, whereas A(c) is an n × n complex matrix whose elements are complex functions of a d-dimensional parameter c ∈ C^d.
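The finite series for f(J_{λ,n}) above can be verified numerically for f = exp, all of whose derivatives are again exp, by comparing it with scipy's general-purpose expm on a single Jordan block (the values of λ and n are arbitrary):

    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    # Check f(J_{lam,n}) = sum_k f^(k)(lam) Z^k / k! for f = exp, n = 4.
    lam, n = 0.5, 4
    J = lam * np.eye(n) + np.eye(n, k=1)   # Jordan block J_{lam,n}
    Z = np.eye(n, k=1)                     # nilpotent part, Z**n == 0

    f_J = sum(np.exp(lam) * np.linalg.matrix_power(Z, k) / factorial(k)
              for k in range(n))

    print(np.allclose(f_J, expm(J)))       # True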
Even if A ∈ M_n(C⁰(C^d)) (that is, A depends continuously on the parameter c), the Jordan normal form of the matrix is deformed continuously almost everywhere on C^d, but, in general, not everywhere: there is some critical submanifold of C^d on which the Jordan form abruptly changes its structure whenever the parameter crosses it or simply "travels" around it (monodromy). Such changes mean that several Jordan blocks (whether belonging to different eigenvalues or not) join into a unique Jordan block, or vice versa (that is, one Jordan block splits into two or more different ones). Many aspects of bifurcation theory for both continuous and discrete dynamical systems can be interpreted through the analysis of functional Jordan matrices.

From the tangent space dynamics, this means that the decomposition of the dynamical system's phase space changes and, for example, different orbits gain periodicity, or lose it, or shift from a certain kind of periodicity to another (such as period-doubling, cf. the logistic map). In a sentence, the qualitative behaviour of such a dynamical system may substantially change under the versal deformation of the Jordan normal form of A(c).

The simplest example of a dynamical system is a system of linear, constant-coefficient, ordinary differential equations; that is, let A ∈ M_n(C) and z₀ ∈ C^n:

{\displaystyle {\begin{aligned}{\dot {\mathbf {z} }}(t)&=A\mathbf {z} (t),\\\mathbf {z} (0)&=\mathbf {z} _{0},\end{aligned}}}

whose direct closed-form solution involves computation of the matrix exponential:

{\displaystyle \mathbf {z} (t)=e^{tA}\mathbf {z} _{0}.}

Another way, provided the solution is restricted to the local Lebesgue space of n-dimensional vector fields z ∈ L¹_loc(R₊)ⁿ, is to use its Laplace transform Z(s) = L[z](s). In this case

{\displaystyle \mathbf {Z} (s)=\left(sI-A\right)^{-1}\mathbf {z} _{0}.}

The matrix function (A − sI)⁻¹ is called the resolvent matrix of the differential operator d/dt − A. It is meromorphic with respect to the complex parameter s ∈ C, since its matrix elements are rational functions whose denominator is equal for all to det(A − sI). Its polar singularities are the eigenvalues of A, whose order equals their index for it; that is, ord_{(A−sI)⁻¹} λ = idx_A λ.
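As a numerical illustration of the closed-form solution z(t) = e^{tA} z₀ given above, here is a minimal sketch using an arbitrary stable 2 × 2 system:

    import numpy as np
    from scipy.linalg import expm

    # Solve z'(t) = A z(t), z(0) = z0 via z(t) = expm(t*A) @ z0.
    # A and z0 are arbitrary illustrative values.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # eigenvalues -1 and -2: a stable system
    z0 = np.array([1.0, 0.0])

    for t in (0.0, 0.5, 1.0, 2.0):
        z = expm(t * A) @ z0
        print(f"t = {t:3.1f}: z = {z}")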
https://en.wikipedia.org/wiki/Jordan_matrix
A regular verb is any verb whose conjugation follows the typical pattern, or one of the typical patterns, of the language to which it belongs. A verb whose conjugation follows a different pattern is called an irregular verb. This is one instance of the distinction between regular and irregular inflection, which can also apply to other word classes, such as nouns and adjectives. In English, for example, verbs such as play, enter, and like are regular since they form their inflected parts by adding the typical endings -s, -ing and -ed to give forms such as plays, entering, and liked. On the other hand, verbs such as drink, hit and have are irregular since some of their parts are not made according to the typical pattern: drank and drunk (not "drinked"); hit (as past tense and past participle, not "hitted") and has and had (not "haves" and "haved").

The classification of verbs as regular or irregular is to some extent a subjective matter. If some conjugational paradigm in a language is followed by a limited number of verbs, or if it requires the specification of more than one principal part (as with the German strong verbs), views may differ as to whether the verbs in question should be considered irregular. Most inflectional irregularities arise as a result of a series of fairly uniform historical changes, so forms that appear to be irregular from a synchronic (contemporary) point of view may be seen as following more regular patterns when the verbs are analyzed from a diachronic (historical linguistic) viewpoint.

When a language develops some type of inflection, such as verb conjugation, it normally produces certain typical (regular) patterns by which words in the given class come to make their inflected forms. The language may develop a number of different regular patterns, either as a result of conditional sound changes which cause differentiation within a single pattern, or through patterns with different derivations coming to be used for the same purpose. An example of the latter is provided by the strong and weak verbs of the Germanic languages; the strong verbs inherited their method of making past forms (vowel ablaut) from Proto-Indo-European, while for the weak verbs a different method (the addition of dental suffixes) developed.

Irregularities in verb conjugation (and other inflectional irregularities) may arise in various ways. Sometimes the result of multiple conditional and selective historical sound changes is to leave certain words following a practically unpredictable pattern. This has happened with the strong verbs (and some groups of weak verbs) in English; patterns such as sing–sang–sung and stand–stood–stood, although they derive from what were more or less regular patterns in older languages, are now peculiar to a single verb or small group of verbs in each case, and are viewed as irregular. Irregularities may also arise from suppletion: forms of one verb may be taken over and used as forms of another. This has happened in the case of the English word went, which was originally the past tense of wend, but has come to be used instead as the past tense of go. The verb be also has a number of suppletive forms (be, is, was, etc., with various different origins); this is common for copular verbs in Indo-European languages. The regularity and irregularity of verbs is affected by changes taking place by way of analogy: there is often a tendency for verbs to switch to a different, usually more regular, pattern under the influence of other verbs.
This is less likely when the existing forms are very familiar through common use – hence among the most common verbs in a language (likebe,have,go, etc.) there is often a greater incidence of irregularity. (Analogy can occasionally work the other way, too – someirregular English verb formssuch asshown,caughtandspathave arisen through the influence of existing strong or irregular verbs.)[citation needed] The most straightforward type of regular verb conjugation pattern involves a single class of verbs, a singleprincipal part(therootor one particular conjugated form), and a set of exact rules which produce, from that principal part, each of the remaining forms in the verb'sparadigm. This is generally considered to be the situation with regularEnglish verbs– from the one principal part, namely the plain form of a regular verb (the bareinfinitive, such asplay,happen,skim,interchange, etc.), all the other inflected forms (which in English are not numerous; they consist of the third person singularpresent tense, thepast tenseandpast participle, and thepresent participle/gerundform) can be derived by way of consistent rules. These rules involve the addition of inflectional endings (-s,-[e]d,-ing), together with certainmorphophonologicalrules about how those endings are pronounced, and certain rules of spelling (such as the doubling of certain consonants). Verbs which in any way deviate from these rules (there arearound 200such verbs in the language) are classed as irregular. A language may have more than one regular conjugation pattern.French verbs, for example, follow different patterns depending on whether their infinitive ends in-er,-iror-re(complicated slightly by certain rules of spelling). A verb which does not follow the expected pattern based on the form of its infinitive is considered irregular. In some languages, however, verbs may be considered regular even if the specification of one of their forms is not sufficient to predict all of the rest; they have more than one principal part. InLatin, for example, verbs are considered to have four principal parts (seeLatin conjugationfor details). Specification of all of these four forms for a given verb is sufficient to predict all of the other forms of that verb – except in a few cases, when the verb is irregular. To some extent it may be a matter of convention or subjective preference to state whether a verb is regular or irregular. In English, for example, if a verb is allowed to have three principal parts specified (the bare infinitive, past tense and past participle), then the number of irregular verbs will be drastically reduced (this is not the conventional approach, however). The situation is similar with the strong verbs inGerman(these may or may not be described as irregular). In French, what are traditionally called the "regular-reverbs" (those that conjugate likevendre) are not in fact particularly numerous, and may alternatively be considered to be just another group of similarly behaving irregular verbs. The most unambiguously irregular verbs are often very commonly used verbs such as thecopular verbbein English and its equivalents in other languages, which frequently have a variety ofsuppletiveforms and thus follow an exceptionally unpredictable pattern of conjugation. It is possible for a verb to be regular in pronunciation, but irregular inspelling. Examples of this are the English verbslayandpay. In terms of pronunciation, these make their past forms in the regular way, by adding the/d/sound. 
However, their spelling deviates from the regular pattern: they are not spelt (spelled) "layed" and "payed" (although the latter form does occur in certain contexts, for example the nautical "the sailor payed out the anchor chain"), but laid and paid. This contrasts with fully regular verbs such as sway and stay, which have the regularly spelt past forms swayed and stayed. The English present participle is never irregular in pronunciation, with the exception that singeing irregularly retains the e to distinguish it from singing.

In linguistic analysis, the concept of regular and irregular verbs (and other types of regular and irregular inflection) commonly arises in psycholinguistics, and in particular in work related to language acquisition. In studies of first language acquisition (where the aim is to establish how the human brain processes its native language), one debate among 20th-century linguists revolved around whether small children learn all verb forms as separate pieces of vocabulary or whether they deduce forms by the application of rules.[1] Since a child can hear a regular verb for the first time and immediately reuse it correctly in a different conjugated form which he or she has never heard, it is clear that the brain does work with rules; but irregular verbs must be processed differently. A common error for small children is to conjugate irregular verbs as though they were regular, which is taken as evidence that we learn and process our native language partly by the application of rules, rather than, as some earlier scholarship had postulated, solely by learning the forms. In fact, children often use the most common irregular verbs correctly in their earliest utterances but then switch to incorrect regular forms for a time when they begin to operate systematically. That allows a fairly precise analysis of the phases of this aspect of first language acquisition.

Regular and irregular verbs are also of significance in second language acquisition, and in particular in language teaching and formal learning, where rules such as verb paradigms are defined, and exceptions (such as irregular verbs) need to be listed and learned explicitly. The importance of irregular verbs is enhanced by the fact that they often include the most commonly used verbs in the language (including verbs such as be and have in English, their equivalents être and avoir in French, sein and haben in German, etc.).

In historical linguistics the concept of irregular verbs is not so commonly referenced. Since most irregularities can be explained by processes of historical language development, these verbs are only irregular when viewed synchronically; they often appear regular when seen in their historical context. In the study of Germanic verbs, for example, historical linguists generally distinguish between strong and weak verbs, rather than irregular and regular (although occasional irregularities still arise even in this approach).

When languages are being compared informally, one of the few quantitative statistics which are sometimes cited is the number of irregular verbs. These counts are not particularly accurate for a wide variety of reasons, and academic linguists are reluctant to cite them. But it does seem that some languages have a greater tolerance for paradigm irregularity than others.
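The regular spelling rules discussed above (add -ed, drop a silent e, change -y to -i after a consonant, double a final consonant) are simple enough to caricature in code. The toy function below is an illustration only; real English orthography also needs stress information and has further exceptions:

    import re

    # Toy regular past-tense formation with crude spelling rules.
    VOWELS = "aeiou"

    def regular_past(verb: str) -> str:
        if verb.endswith("e"):                      # like -> liked
            return verb + "d"
        if verb.endswith("y") and verb[-2] not in VOWELS:
            return verb[:-1] + "ied"                # try -> tried
        # Crude consonant-vowel-consonant test for doubling: stop -> stopped
        # (this ignores stress, so e.g. "visit" would be handled wrongly).
        if re.fullmatch(rf".*[^{VOWELS}][{VOWELS}][^{VOWELS}wxy]", verb):
            return verb + verb[-1] + "ed"
        return verb + "ed"                          # play -> played

    for v in ("play", "like", "try", "stop", "sway"):
        print(v, "->", regular_past(v))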
With the exception of the highly irregular verb be, an English verb can have up to five forms: its plain form (or bare infinitive), a third person singular present tense, a past tense (or preterite), a past participle, and the -ing form that serves as both a present participle and gerund. The rules for the formation of the inflected parts of regular verbs are given in detail in the article on English verbs. The irregular verbs of English are described and listed in the article English irregular verbs (for a more extensive list, see List of English irregular verbs). Some examples of common irregular verbs in English, other than modals, include verbs such as sing, drink, hit, have, go, and be.[3]

For regular and irregular verbs in other languages, see the articles on the grammars of those languages; some grammatical information relating to specific verbs in various languages can also be found in Wiktionary.

Most natural languages, to different extents, have a number of irregular verbs. Artificial auxiliary languages usually have a single regular pattern for all verbs (as well as other parts of speech) as a matter of design, because inflectional irregularities are considered to increase the difficulty of learning and using a language. Other constructed languages, however, need not show such regularity, especially if they are designed to look similar to natural ones. The auxiliary language Interlingua has some irregular verbs, principally esser "to be", which has an irregular present tense form es "is" (instead of the expected esse), an optional plural son "are", an optional irregular past tense era "was/were" (alongside the regular esseva), and a unique subjunctive form sia (which can also function as an imperative). Other common verbs also have irregular present tense forms, namely vader "to go" (va), ir "to go" (va, also shared by the present tense of vader), and haber "to have" (ha).
https://en.wikipedia.org/wiki/Irregular_verb
Kinectis a discontinued line ofmotion sensinginput devicesproduced byMicrosoftand first released in 2010. The devices generally containRGBcameras, andinfraredprojectors and detectors that map depth through eitherstructured lightortime of flightcalculations, which can in turn be used to perform real-timegesture recognitionand body skeletal detection, among other capabilities. They also contain microphones that can be used forspeech recognitionandvoice control. Kinect was originally developed as amotion controllerperipheral forXboxvideo game consoles, distinguished from competitors (such as Nintendo'sWii Remoteand Sony'sPlayStation Move) by not requiring physical controllers. The first-generation Kinect was based on technology from Israeli companyPrimeSense, and unveiled atE32009 as a peripheral forXbox 360codenamed "Project Natal". It was first released on November 4, 2010, and would go on to sell eight million units in its first 60 days of availability. The majority of the games developed for Kinect werecasual, family-oriented titles, which helped to attract new audiences to Xbox 360, but did not result in wide adoption by the console's existing, overall userbase. As part of the 2013 unveiling of Xbox 360's successor,Xbox One, Microsoft unveiled a second-generation version of Kinect with improved tracking capabilities. Microsoft also announced that Kinect would be a required component of the console, and that it would not function unless the peripheral is connected. The requirement proved controversial among users and critics due to privacy concerns, prompting Microsoft to backtrack on the decision. However, Microsoft still bundled the new Kinect with Xbox One consoles upon their launch in November 2013. A market for Kinect-based games still did not emerge after the Xbox One's launch; Microsoft would later offer Xbox One hardware bundles without Kinect included, and later revisions of the console removed the dedicated ports used to connect it (requiring a powered USB adapter instead). Microsoft ended production of Kinect for Xbox One in October 2017. Kinect has also been used as part of non-game applications in academic and commercial environments, as it was cheaper and more robust than other depth-sensing technologies at the time. While Microsoft initially objected to such applications, it later releasedsoftware development kits(SDKs) for the development ofMicrosoft Windowsapplications that use Kinect. In 2020, Microsoft releasedAzure Kinectas a continuation of the technology integrated with theMicrosoft Azurecloud computing platform. Part of the Kinect technology was also used within Microsoft'sHoloLensproject. Microsoft discontinued the Azure Kinect developer kits in October 2023.[12][13] The origins of the Kinect started around 2005, at a point where technology vendors were starting to developdepth-sensing cameras. Microsoft had been interested in a 3D camera for the Xbox line earlier but because the technology had not been refined, had placed it in the "Boneyard", a collection of possible technology they could not immediately work on.[14] In 2005,PrimeSensewas founded by tech-savvy mathematicians and engineers from Israel to develop the "next big thing" for video games, incorporating cameras that were capable of mapping a human body in front of them and sensing hand motions. They showed off their system at the 2006Game Developers Conference, where Microsoft'sAlex Kipman, the general manager of hardware incubation, saw the potential in PrimeSense's technology for the Xbox system. 
Microsoft began discussions with PrimeSense about what would need to be done to make their product more consumer-friendly: not only improvements in the capabilities of depth-sensing cameras, but also a reduction in size and cost, and a means to manufacture the units at scale. PrimeSense spent the next few years working on these improvements.[14]

Nintendo released the Wii in November 2006. The Wii's central feature was the Wii Remote, a handheld device that was detected by the Wii through a motion sensor bar mounted onto a television screen to enable motion-controlled games. Microsoft felt pressure from the Wii, and began looking into depth sensing in more detail with PrimeSense's hardware, but could not get to the level of motion tracking it desired. While they could determine hand gestures, and sense the general shape of a body, they could not do skeletal tracking. A separate path within Microsoft looked to create an equivalent of the Wii Remote, considering that this type of unit might become standardized, much as two-thumbstick controllers had become a standard feature.[14] However, it was still ultimately Microsoft's goal to remove any device between the player and the Xbox.[14]

Kudo Tsunoda and Darren Bennett joined Microsoft in 2008, and began working with Kipman on a new approach to depth sensing aided by machine learning to improve skeletal tracking. They demonstrated this internally and established where they believed the technology could be in a few years, which led to strong interest in funding further development of the technology; this also occurred at a time when Microsoft executives wanted to abandon the Wii-like motion tracking approach, and favored the depth-sensing solution to present a product that went beyond the Wii's capabilities. The project was greenlit by late 2008, with work started in 2009.[14]

The project was codenamed "Project Natal" after the Brazilian city Natal, Kipman's birthplace. Additionally, Kipman recognized the Latin origins of the word "natal", meaning "to be born", reflecting the new types of audiences they hoped to draw with the technology.[15] Much of the initial work was related to ethnographic research into how video game players' home environments were laid out and lit, and how those with Wiis used the system, in order to plan how Kinect units would be used. The Microsoft team discovered from this research that the up-and-down angle of the depth-sensing camera would either need to be adjusted manually, or would require an expensive motor to move automatically. Upper management at Microsoft opted to include the motor despite the increased cost, to avoid breaking game immersion. Kinect project work also involved packaging the system for mass production and optimizing its performance. Hardware development took around 22 months.[14]

During hardware development, Microsoft engaged with software developers to use Kinect. Microsoft wanted to make games that would be playable by families, since Kinect could sense multiple bodies in front of it. One of the first internal titles developed for the device was the pack-in game Kinect Adventures, developed by Good Science Studio, a part of Microsoft Studios. One of the game modes of Kinect Adventures was "Reflex Ridge", based on the Japanese Brain Wall game, in which players attempt to contort their bodies in a short time to match cutouts in a wall moving toward them.
This type of game was a key example of the type of interactivity Microsoft wanted with Kinect, and its development helped feed into the hardware improvements.[14]

Nearing the planned release, there remained the problem of testing Kinect widely, across various room types and with different bodies, accounting for age, gender, and race among other factors, while keeping the details of the unit confidential. Microsoft ran a company-wide program inviting employees to take Kinect units home and test them. Microsoft also brought in other non-gaming divisions, including its Microsoft Research, Microsoft Windows, and Bing teams, to help complete the system. Microsoft established its own large-scale manufacturing facility to bulk-produce Kinect units and test them.[14]

Kinect was first announced to the public as "Project Natal" on June 1, 2009, during Microsoft's press conference at E3 2009; film director Steven Spielberg joined Microsoft's Don Mattrick to introduce the technology and its potential.[14][16] Three demos were presented during the conference: Microsoft's Ricochet and Paint Party, and Lionhead Studios' Milo & Kate, created by Peter Molyneux. A Project Natal-enabled version of Criterion Games' Burnout Paradise was also shown during the E3 exhibition.[17][18] By E3 2009, the skeletal mapping technology was capable of simultaneously tracking four people,[19][20][21][22] with a feature extraction of 48 skeletal points on a human body at 30 Hz.[22][23] Microsoft had not committed to a release date for Project Natal at E3 2009, but affirmed it would be after 2009, and likely in 2010, to stay competitive with the Wii and the PlayStation Move (Sony Interactive Entertainment's own motion-sensing system using hand-held devices).[24]

In the months following E3 2009, rumors emerged of a new Xbox 360 console associated with Project Natal, either a retail configuration that incorporated the peripheral,[25][26] or a hardware revision or upgrade to support the peripheral.[27][28] Microsoft dismissed the reports in public and repeatedly emphasized that Project Natal would be fully compatible with all Xbox 360 consoles. Microsoft indicated that the company considered Project Natal to be a significant initiative, as fundamental to the Xbox brand as Xbox Live,[24] and with a planned launch akin to that of a new Xbox console platform.[29] Microsoft's vice president Shane Kim said the company did not expect Project Natal to extend the anticipated lifetime of the Xbox 360, which had been planned to last ten years through 2015, nor to delay the launch of the successor to the Xbox 360.[20][30]

Following the E3 2009 show and through 2010, the Project Natal team members experimentally adapted numerous games to Kinect-based control schemes to help evaluate usability. Among these games were Beautiful Katamari and Space Invaders Extreme, which were demonstrated at Tokyo Game Show in September 2009.[31] According to Tsunoda, adding Project Natal-based control to pre-existing games involved significant code alterations, which made it unlikely that existing games could be patched through software updates to support the unit.[32] Microsoft also broadened its outreach to third-party developers to encourage them to develop Project Natal games.
Companies like Harmonix and Double Fine quickly took to Project Natal, saw the potential in it, and committed to developing games for the unit, such as the launch title Dance Central from Harmonix.[14]

Although its sensor unit was originally planned to contain a microprocessor that would perform operations such as the system's skeletal mapping, Microsoft reported in January 2010 that the sensor would no longer feature a dedicated processor. Instead, processing would be handled by one of the processor cores of Xbox 360's Xenon CPU.[33] Around this time, Kipman estimated that the Kinect would only take about 10 to 15% of the Xbox 360's processing power.[34] While this was a small fraction of the Xbox 360's capabilities, industry observers believed this further pointed to difficulties in adapting pre-existing games to use Kinect, as the motion tracking would add to a game's already high computational load and exceed the Xbox 360's capabilities. These observers believed that the industry would instead develop games specific to the Kinect features.[33]

During Microsoft's E3 2010 press conference, it was announced that Project Natal would be officially branded as Kinect, and be released in North America on November 4, 2010.[35] Xbox Live director Stephen Toulouse stated that the name was a portmanteau of the words "kinetic" and "connection", key aspects of the Kinect initiative.[36][37] Microsoft and third-party studios exhibited Kinect-compatible games during the E3 exhibition.[38] A new slim revision of the Xbox 360 was also unveiled to coincide with Kinect's launch, which added a dedicated port for attaching the peripheral;[39] Kinect would be sold at launch as a standalone accessory for existing Xbox 360 owners, and as part of bundles with the new slim Xbox 360. All units included Kinect Adventures as a pack-in game.[40][41]

Microsoft continued to refine the Kinect technology in the months leading up to the Kinect launch in November 2010. By launch, Kipman reported they had been able to reduce the Kinect's use of the Xbox 360's processor from the 10 to 15% reported in January 2010 to a "single-digit percentage".[42] Xbox product director Aaron Greenberg stated that Microsoft's marketing campaign for Kinect would be on a scale similar to a console launch;[41] the company was reported to have budgeted $500 million on advertising for the peripheral, such as television and print ads, campaigns with Burger King[43] and Pepsi,[44] and a launch event in New York City's Times Square on November 3 featuring a performance by Ne-Yo.[45] Kinect was launched in North America on November 4, 2010;[2] in Europe on November 10, 2010;[1] in Australia, New Zealand, and Singapore on November 18, 2010;[4][46][47] and in Japan on November 20, 2010.[48]

The Kinect release for the Xbox 360 was estimated to have sold eight million units in the first sixty days of release, earning the hardware the Guinness World Record for the "Fastest-Selling Consumer Electronics Device".[14] Over 10 million had been sold by March 2011.[14] While seemingly successful, its launch titles were primarily family-oriented games (which could be designed around Kinect's functionality and limitations); these may have drawn new audiences, but did not have the selling power of major franchises like Battlefield and Call of Duty, which were primarily designed around the Xbox 360 controller.
Only an estimated 20% of the 55 million Xbox 360 owners had purchased the Kinect.[14] The Kinect team recognized some of the downsides of pairing Kinect with more traditional games, and continued development of the unit toward a second-generation release, including reducing the latency of motion detection and improving speech recognition. Microsoft provided news of these changes to third-party developers to help them anticipate how the improvements could be integrated into their games.[14]

Concurrent with the Kinect improvements, Microsoft's Xbox hardware team had started planning for the Xbox One around mid-2011. Part of the early Xbox One specification was that the new Kinect hardware would be automatically included with the console, so that developers would know that Kinect hardware would be available for any Xbox One, in the hope of encouraging developers to take advantage of it.[14] The Xbox One was first formally announced on May 23, 2013, and shown in more detail at E3 2013 in June. Microsoft stated at these events that the Xbox One would include the updated Kinect hardware and that the peripheral would be required to be plugged in at all times for the Xbox One to function. This raised concerns across the video game media: privacy advocates argued that Kinect sensor data could be used for targeted advertising, and to perform unauthorized surveillance on users. In response to these claims, Microsoft reiterated that Kinect voice recognition and motion tracking could be disabled by users, that Kinect data could not be used for advertising per its privacy policy, and that the console would not redistribute user-generated content without permission.[49][50][51][52][53][54] Several other issues with the Xbox One's original feature set also came up, such as the requirement to be always connected to the Internet, creating a wave of consumer backlash against Microsoft.[14]

Microsoft announced in August 2013 that it had made several changes to the planned Xbox One release in response to the backlash. Among these was that the system would no longer require a Kinect unit to be plugged in to work, though it was still planned to package the Kinect with all Xbox One systems. However, this also required Microsoft to establish a US$500 price point for the Xbox One/Kinect system at its November 2013 launch, US$100 more than the competing PlayStation 4 launched in the same time frame, which did not include any motion-sensing hardware.[14] In the months after the Xbox One release, Microsoft decided to launch a Kinect-less Xbox One system in March 2014 at the same price as the PlayStation 4, after considering that the Kinect for Xbox One had not gotten the developer support it needed, and that sales of the Xbox One were lagging due to the higher price tag of the Kinect-bundled system. Richard Irving, a program group manager who oversaw Kinect, said that Microsoft had felt it was more important to give developers and consumers the option of developing for or purchasing the Kinect rather than forcing the unit on them.[14]

The removal of Kinect from the Xbox One retail package was the start of the rapid decline and phase-out of the unit within Microsoft.
Developers like Harmonix that had originally been targeting games to use the Kinect on Xbox One put these games on hold until they knew there was enough of a Kinect install base to justify release, which resulted in a lack of games for the Kinect and reduced any consumer drive to buy the separate unit.[14] Microsoft became bearish on the Kinect, making no mention of the unit at E3 2015 and announcing at E3 2016 that the upcoming Xbox One hardware revision, the Xbox One S, would not have a dedicated Kinect port; Microsoft offered a USB adapter for the Kinect, provided free during an initial promotional period after the console's launch.[55] The more powerful Xbox One X also lacked the Kinect port and required this adapter.[56] Even though developers still released Kinect-enabled games for the Xbox One, Microsoft's lack of statements related to the Kinect during this period led to claims that the Kinect was a dead project at Microsoft.[57][58]

Microsoft formally announced it would stop manufacturing Kinect for Xbox One on October 25, 2017.[10] Microsoft eventually discontinued the adapter in January 2018, stating that it was shifting to manufacturing other accessories for the Xbox One and personal computers that were more in demand. This is considered by the media to be the point where Microsoft ceased work on the Kinect for the Xbox platform.[14][56]

While the Kinect unit for the Xbox platform had petered out, the Kinect had been used in academia and other applications since around 2011. The functionality of the unit, along with its low US$150 cost, was seen as an inexpensive means to add depth sensing to existing applications, offsetting the high cost and unreliability of other 3D camera options at the time. In robotics, Kinect's depth sensing would enable robots to determine the shape of, and approximate distances to, obstacles and maneuver around them.[59] Within the medical field, the Kinect could be used to monitor the shape and posture of a body in a quantifiable manner to enable improved health-care decisions.[60]

Around November 2010, after the Kinect's launch, scientists, engineers, and hobbyists had been able to hack into the Kinect to determine what hardware and internal software it used, leading users to find out how to connect and operate the Kinect with Microsoft Windows and OS X over USB, on which unsecured data from the various camera elements could be read.
This further led to prototype demos of other possible applications, such as a gesture-based user interface for the operating system similar to that shown in the film Minority Report, as well as pornographic applications.[61][62] This mirrored similar work to hack the Wii Remote a few years earlier to use its low-cost hardware for more advanced applications beyond gameplay.[63]

Adafruit Industries, having envisioned some of the possible applications of the Kinect outside of gaming, issued a public challenge related to the Kinect, offering prize money for the successful development of an open source software development kit (SDK) and hardware drivers for the Kinect, an effort which came to be known as Open Kinect.[64] Adafruit named the winner, Héctor Martín, by November 10, 2010;[65][66] he had produced a Linux driver that allowed the use of both the RGB camera and the depth-sensing functions of the device.[67][68] It was later discovered that Johnny Lee, a core member of Microsoft's Kinect development team, had secretly approached Adafruit with the idea of a driver development contest and had personally financed it.[69] Lee said of the efforts to open the Kinect that "This is showing us the future...This is happening today, and this is happening tomorrow." He had engaged Adafruit with the contest as he had been frustrated with trying to convince Microsoft's executives to explore the non-gaming avenue for the Kinect.[70]

Microsoft initially took issue with users hacking into the Kinect, stating they would incorporate additional safeguards into future iterations of the unit to prevent such hacks.[61] However, by the end of November 2010, Microsoft had reversed its original position and embraced the external efforts to develop the SDK.[71] Kipman, in an interview with NPR, said:

"The first thing to talk about is, Kinect was not actually hacked. Hacking would mean that someone got to our algorithms that sit inside of the Xbox and was able to actually use them, which hasn't happened. Or, it means that you put a device between the sensor and the Xbox for means of cheating, which also has not happened. That's what we call hacking, and that's what we have put a ton of work and effort to make sure doesn't actually occur. What has happened is someone wrote an open-source driver for PCs that essentially opens the USB connection, which we didn't protect, by design, and reads the inputs from the sensor. The sensor, again, as I talked earlier, has eyes and ears, and that's a whole bunch of noise that someone needs to take and turn into signal."

In November 2010, PrimeSense, along with robotics firm Willow Garage and game developer Side-Kick, launched OpenNI, a not-for-profit group to develop portable drivers for the Kinect and other natural interface (NI) devices. Its first set of drivers, named NITE, was released in December 2010.[73][74] PrimeSense had also worked with Asus to develop a motion-sensing device to compete with the Kinect on personal computers.
The resulting product, the Wavi Xtion, was released in China in October 2011.[75][76]

Microsoft announced in February 2011 that it was planning to release its own SDK for the Kinect within a few months; the SDK was officially released on June 16, 2011, though it was limited to non-commercial uses.[77][78] The SDK enabled users to access the skeletal motion recognition system for up to two persons and the Kinect microphone array, features that had not been part of the prior Open Kinect SDK.[79] Commercial interest in Kinect remained strong, with David Dennis, a product manager at Microsoft, stating, "There are hundreds of organizations we are working with to help them determine what's possible with the tech".[80] Microsoft launched its Kinect for Windows program on October 31, 2011, releasing a new SDK to a small number of companies, including Toyota, Houghton Mifflin, and Razorfish, to explore what was possible.[80]

At the 2012 Consumer Electronics Show in January, Microsoft announced that it would release a dedicated Kinect for Windows unit along with the commercial SDK on February 1, 2012. The device included some hardware improvements, including support for "near mode" to recognize objects about 50 centimetres (20 in) in front of the cameras. The Kinect for Windows device was listed at US$250, US$100 more than the original Kinect, since Microsoft considered that the Xbox 360 Kinect was subsidized through game purchases, Xbox Live subscriptions, and other costs.[70] At the launch, Microsoft stated that more than 300 companies from over 25 countries were working on Kinect-ready apps with the new unit.[81]

With the original announcement of the revised Kinect for Xbox One in 2013, Microsoft also confirmed it would have a second generation of Kinect for Windows based on the updated Kinect technology by 2014.[82] The new Kinect 2 for Windows was launched on July 15, 2014, at a US$200 price.[83] Microsoft opted to discontinue the original Kinect for Windows by the end of 2014.[84] However, in April 2015, Microsoft announced it was also discontinuing the Kinect 2 for Windows, instead directing commercial users to the Kinect for Xbox One, which Microsoft said would "perform identically". Microsoft stated that demand for the Kinect 2 for Windows was high and difficult to keep up with while also fulfilling Kinect for Xbox One orders, and that it had found commercial developers successfully using the Kinect for Xbox One in their applications without issue.[85]

With Microsoft's waning focus on Kinect, PrimeSense was bought by Apple, Inc. in 2013, which incorporated parts of the technology into its Face ID system for iOS devices.[86][87]

Though Kinect had been cancelled, the ideas behind it helped spur Microsoft to look more into accessibility for Xbox and its games. According to Phil Spencer, the head of Xbox at Microsoft, the company received positive comments from parents of disabled and impaired children who were happy that Kinect allowed their children to play video games. These efforts led to the development of the Xbox Adaptive Controller, released in 2018, as one of Microsoft's efforts in this area.[88]

Microsoft had abandoned the idea of Kinect for video games, but still explored the potential of Kinect beyond that.
Microsoft's Director of Communications Greg Sullivan stated in 2018 that "I think one of the things that is beginning to be understood is that Kinect was never really just the gaming peripheral...It was always more."[89] Part of the Kinect technology was integrated into Microsoft's HoloLens, first released in 2016.[90]

In May 2018, Microsoft announced that it was working on a new version of the Kinect hardware for non-game applications that would integrate with its Azure cloud computing services. Microsoft envisioned that using cloud computing to offload some of the computational work from Kinect, along with more powerful features enabled by Azure such as artificial intelligence, would improve the accuracy of the depth sensing, reduce the power demand, and lead to more compact units.[91] The Azure Kinect device was released on June 27, 2019, at a price of US$400, while the SDK for the unit had been released in February 2019.[92]

Sky UK announced a new line of Sky Glass television units, launched in 2022, that incorporate the Kinect technology in partnership with Microsoft. Using the Kinect features, the viewer can control the television through motion controls and audio commands, with support for social features such as social viewing.[93]

Microsoft announced that the Azure Kinect hardware kit would be discontinued in October 2023, referring users to third-party suppliers for spare parts.[94]

The motion-sensing technology at the core of the Kinect rests on its depth sensing. The original Kinect for Xbox 360 used structured light for this: the unit projected a near-infrared pattern across the space in front of the Kinect, while an infrared sensor captured the reflected light pattern. The light pattern is deformed by the relative depth of the objects in front of it, and mathematics can be used to estimate that depth based on several factors related to the hardware layout of the Kinect. While other structured-light depth-sensing technologies used multiple light patterns, Kinect used as few as one, so as to achieve a high rate of 30 frames per second of depth sensing.

Kinect for Xbox One switched over to using time-of-flight measurements. The infrared projector on the Kinect sends out modulated infrared light, which is then captured by the sensor. Infrared light reflecting off closer objects has a shorter time of flight than light reflecting off more distant ones, so the infrared sensor captures, pixel by pixel, how much the modulation pattern has been shifted by the time of flight. Time-of-flight measurements of depth can be more accurate and calculated in a shorter amount of time, allowing more frames per second to be processed.[95]

Once Kinect has a pixel-by-pixel depth image, it uses a type of edge detection to delineate closer objects from the background of the shot, incorporating input from the regular visible-light camera. The unit then attempts to track any moving objects, on the assumption that only people will be moving around in the image, and isolates the human shapes from the image. The unit's software, aided by artificial intelligence, performs segmentation of the shapes to try to identify specific body parts, such as the head, arms, and hands, and tracks those segments individually. Those segments are used to construct a 20-point skeleton of the human body, which can then be used by games or other software to determine what actions the person has performed.[96]
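The time-of-flight principle described above reduces to depth = (speed of light × round-trip time) / 2. The sketch below illustrates the phase-measurement variant; the modulation frequency is an assumed value for illustration, not Kinect's actual operating parameter:

    import math

    C = 299_792_458.0    # speed of light, m/s
    MOD_FREQ = 16e6      # assumed modulation frequency, Hz (illustrative)

    def depth_from_phase(phase_rad: float) -> float:
        """Depth in metres implied by the phase shift of the return signal."""
        round_trip_s = phase_rad / (2 * math.pi) / MOD_FREQ
        return C * round_trip_s / 2

    # A quarter-cycle (pi/2) phase shift at 16 MHz corresponds to ~2.34 m.
    print(f"{depth_from_phase(math.pi / 2):.2f} m")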
The hardware included range chipset technology by Israeli developer PrimeSense, which developed a system consisting of an infrared projector and camera and a special microchip that generates a grid from which the location of a nearby object in three dimensions can be ascertained.[97][98][99] This 3D scanner system, called Light Coding,[100] employs a variant of image-based 3D reconstruction.[101][102] The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot and is designed to be positioned lengthwise above or below the video display. The device features an "RGB camera, depth sensor and microphone array running proprietary software",[103] which provide full-body 3D motion capture, facial recognition and voice recognition capabilities. At launch, voice recognition was only made available in Japan, the United Kingdom, Canada and the United States. Mainland Europe received the feature later, in spring 2011.[104] Currently voice recognition is supported in Australia, Canada, France, Germany, Ireland, Italy, Japan, Mexico, New Zealand, the United Kingdom and the United States. The Kinect sensor's microphone array enables the Xbox 360 to conduct acoustic source localization and ambient noise suppression, allowing for things such as headset-free party chat over Xbox Live.[105] The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions.[105][19] The sensing range of the depth sensor is adjustable, and the Kinect software is capable of automatically calibrating the sensor based on gameplay and the player's physical environment, accommodating the presence of furniture or other obstacles.[23] Described by Microsoft personnel as the primary innovation of Kinect,[20][106][107] the software technology enables advanced gesture recognition, facial recognition and voice recognition.[21] According to information supplied to retailers, Kinect is capable of simultaneously tracking up to six people, including two active players for motion analysis with a feature extraction of 20 joints per player.[108] However, PrimeSense has stated that the number of people the device can "see" (but not process as players) is only limited by how many will fit in the field of view of the camera.[109] Reverse engineering[110] has determined that the Kinect's various sensors output video at a frame rate of ≈9 Hz to 30 Hz depending on resolution. The default RGB video stream uses 8-bit VGA resolution (640 × 480 pixels) with a Bayer color filter, but the hardware is capable of resolutions up to 1280 × 1024 (at a lower frame rate) and other colour formats such as UYVY. The monochrome depth-sensing video stream is in VGA resolution (640 × 480 pixels) with 11-bit depth, which provides 2,048 levels of sensitivity. The Kinect can also stream the view from its IR camera directly (i.e., before it has been converted into a depth map) as 640 × 480 video, or 1280 × 1024 at a lower frame rate. The Kinect sensor has a practical ranging limit of 1.2–3.5 m (3.9–11.5 ft) distance when used with the Xbox software. The area required to play Kinect is roughly 6 m², although the sensor can maintain tracking through an extended range of approximately 0.7–6 m (2.3–19.7 ft). The sensor has an angular field of view of 57° horizontally and 43° vertically, while the motorized pivot is capable of tilting the sensor up to 27° either up or down.
The horizontal field of the Kinect sensor at the minimum viewing distance of ≈0.8 m (2.6 ft) is therefore ≈87 cm (34 in), and the vertical field is ≈63 cm (25 in), resulting in a resolution of just over 1.3 mm (0.051 in) per pixel. The microphone array features four microphone capsules[111] and operates with each channel processing 16-bit audio at a sampling rate of 16 kHz.[108] Because the Kinect sensor's motorized tilt mechanism requires more power than the Xbox 360's USB ports can supply,[112] the device makes use of a proprietary connector combining USB communication with additional power. Redesigned Xbox 360 S models include a special AUX port for accommodating the connector,[113] while older models require a special power supply cable (included with the sensor)[111] that splits the connection into separate USB and power connections; power is supplied from the mains by way of an AC adapter.[112] Kinect for Windows is a modified version of the Xbox 360 unit, first released on February 1, 2012, alongside the SDK for commercial use.[70][114] The hardware included better components to eliminate noise along the USB and other cabling paths, and improvements in the depth-sensing camera system for detection of objects at close range, as close as 50 centimetres (20 in), in the new "Near Mode".[70] The SDK included Windows 7-compatible PC drivers for the Kinect device. It provided Kinect capabilities to developers to build applications with C++, C#, or Visual Basic by using Microsoft Visual Studio 2010. In March 2012, Craig Eisler, the general manager of Kinect for Windows, said that almost 350 companies were working with Microsoft on custom Kinect applications for Microsoft Windows.[116] That same month, Microsoft announced that the next version of the Kinect for Windows SDK would be available in May 2012. Kinect for Windows 1.5 was released on May 21, 2012. It added new features, support for many new languages, and debuted in 19 more countries.[117][118] The Kinect for Windows SDK for the first-generation sensor was updated a few more times, with version 1.6 released October 8, 2012,[122] version 1.7 released March 18, 2013,[123] and version 1.8 released September 17, 2013.[124] An upgraded iteration of Kinect was released on November 22, 2013, for the Xbox One. It uses a wide-angle time-of-flight camera, and processes 2 gigabits of data per second to read its environment. The new Kinect has greater accuracy, with three times the fidelity of its predecessor, and can track without visible light by using an active IR sensor. It has a 60% wider field of vision with a minimum working distance of 0.91 metres (3.0 ft) from the sensor, compared to 1.83 metres (6.0 ft) for the original Kinect,[125] and can track up to 6 skeletons at once. It can also detect a player's heart rate, facial expression, the position and orientation of 25 individual joints (including thumbs), the weight put on each limb, and the speed of player movements, and can track gestures performed with a standard controller. The color camera captures 1080p video that can be displayed in the same resolution as the viewing screen, allowing for a broad range of scenarios. In addition to improving video communications and video analytics applications, this provides a stable input on which to build interactive applications.
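The field-of-view figures quoted above for the original sensor follow from elementary trigonometry: a field of view of angle θ spans a width of 2d·tan(θ/2) at distance d. A minimal sketch reproducing the numbers in the text (illustrative only, not part of any Kinect SDK):

    import math

    def field_width(distance_m, fov_deg):
        # Width of the visible field at a given distance for an angular field of view.
        return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

    d = 0.8                                   # minimum viewing distance, metres
    horizontal = field_width(d, 57.0)         # ~0.87 m, the ~87 cm quoted above
    vertical = field_width(d, 43.0)           # ~0.63 m, the ~63 cm quoted above
    mm_per_pixel = horizontal / 640 * 1000    # ~1.36 mm per pixel at VGA width
    print(horizontal, vertical, mm_per_pixel)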
Kinect's microphone is used to provide voice commands for actions such as navigation, starting games, and waking the console from sleep mode.[126][127] The recommended player height is at least 40 inches, which roughly corresponds to children of 4½ years old and up.[128][129] All Xbox One consoles were initially shipped with Kinect included.[54] In June 2014, bundles without Kinect were made available,[130] along with an updated Xbox One SDK allowing game developers to explicitly disable Kinect skeletal tracking, freeing up system resources that were previously reserved for Kinect even if it was disabled or unplugged.[130][131] As interest in Kinect waned in 2014, later revisions of the Xbox One hardware, including the Xbox One S and Xbox One X, dropped the dedicated Kinect port, requiring users to purchase a USB 3.0/AC adapter to use the Kinect for Xbox One.[132][133] A standalone Kinect for Xbox One, bundled with a digital copy of Dance Central Spotlight, was released on October 7, 2014.[134] Considered a market failure compared to the Kinect for Xbox 360, the Kinect for Xbox One product was discontinued by October 25, 2017. Production of the adapter cord also ended by January 2018.[9] Released on July 15, 2014, Kinect 2 for Windows is based on the Kinect for Xbox One and was considered a replacement for the original Kinect for Windows. It was also repackaged as "Kinect for Windows v2". It is nearly identical apart from the removal of Xbox branding, and included a USB 3.0/AC adapter. It was released alongside version 2.0 of the Windows SDK for the platform. The MSRP was US$199.[83][8][135][85] Microsoft considered the Kinect 2 for Windows equivalent in performance to the Xbox One version. In April 2015, having difficulty meeting manufacturing demand for this edition while also fulfilling Kinect for Xbox One orders, Microsoft discontinued it, directing commercial users to the Xbox One version with a USB adapter instead.[85][136][8][135][137] On May 7, 2018, Microsoft announced a new iteration of Kinect technology designed primarily for enterprise software and artificial intelligence usage. It is designed around the Microsoft Azure cloud platform, and is meant to "leverage the richness of Azure AI to dramatically improve insights and operations".[138][139] It has a smaller form factor than the Xbox iterations of Kinect, and features a 12-megapixel camera, a time-of-flight depth sensor also used on the HoloLens 2, and seven microphones. A development kit was announced in February 2019.[140][141] Requiring at least 190 MB of available storage space,[142] the Kinect system software allows users to operate the Xbox 360 Dashboard console user interface through voice commands and hand gestures. Techniques such as voice recognition and facial recognition are employed to automatically identify users. Among the applications for Kinect is Video Kinect, which enables voice chat or video chat with other Xbox 360 users or users of Windows Live Messenger. The application can use Kinect's tracking functionality and the Kinect sensor's motorized pivot to keep users in frame even as they move around.
Other applications with Kinect support include ESPN, Zune Marketplace,[142] Netflix, Hulu Plus[143] and Last.fm.[144] Microsoft later confirmed that all forthcoming applications would be required to have Kinect functionality for certification.[145] The Xbox One originally shipped in bundles with the Kinect; the original Xbox One user interface software had similar support for Kinect features as the Xbox 360 software, such as voice commands, user identification via skeletal or vocal recognition, and gesture-driven commands, though these features could be fully disabled due to privacy concerns.[146] However, this left the more traditional navigation using a controller haphazard. In May 2014, when Microsoft announced it would be releasing Xbox One systems without a Kinect, the company also announced plans to alter the Xbox One system software to remove Kinect features.[147] Kinect support in the software was fully removed by November 2015.[148] Xbox 360 games that require Kinect are packaged in special purple cases (as opposed to the green cases used by all other Xbox 360 games), and carry a prominent "Requires Kinect Sensor" logo on their front cover. Games that include features utilizing Kinect, but do not require it for standard gameplay, have "Better with Kinect Sensor" branding on their front covers.[149] Kinect launched on November 4, 2010, with 17 titles.[150] Third-party publishers of available and announced Kinect games include, among others, Ubisoft, Electronic Arts, LucasArts, THQ, Activision, Konami, Sega, Capcom, Namco Bandai and MTV Games. Along with retail games, there are also select Xbox Live Arcade titles which require the peripheral. KinectShare.com was a website where players could upload video game pictures, videos, and achievements from their Xbox 360.[151] It was released alongside the Kinect in November 2010. A blog was added to the website in October 2011, showcasing official Kinect news; it was discontinued after July 2012.[152] The site was used by multiple Kinect games, including Dance Central 2, Kinect Adventures!, Kinect Fun Labs, Kinect Rush: A Disney–Pixar Adventure, Kinect Sports, and Kinect Sports: Season Two.[153] The website was shut down in June 2017, a few months prior to the discontinuation of the Kinect, redirecting to Xbox.com.[151] The KinectShare feature on the Xbox 360 was shut down on July 28, 2017.[citation needed] At E3 2011, Microsoft announced Kinect Fun Labs: a collection of various gadgets and minigames that are accessible from the Xbox 360 Dashboard.
These gadgets include Build A Buddy, Air Band, Kinect Googly Eyes, Kinect Me, Bobblehead, Kinect Sparkler, Junk Fu[154] and Avatar Kinect.[155][156][157] Numerous developers are researching possible applications of Kinect that go beyond the system's intended purpose of playing games, further enabled by the release of the Kinect SDK by Microsoft.[158] For example, Philipp Robbel of MIT combined Kinect with iRobot Create to map a room in 3D and have the robot respond to human gestures,[159] while an MIT Media Lab team is working on a JavaScript extension for Google Chrome called depthJS that allows users to control the browser with hand gestures.[160] Other programmers, including the Robot Locomotion Group at MIT, are using the drivers to develop a motion-controlled user interface similar to the one envisioned in Minority Report.[161] The developers of MRPT have integrated open-source drivers into their libraries and provided examples of live 3D rendering and basic 3D visual SLAM.[162] Another team has shown an application that allows Kinect users to play a virtual piano by tapping their fingers on an empty desk.[163] Oliver Kreylos, a researcher at the University of California, Davis, adopted the technology to improve live 3-dimensional videoconferencing, in which NASA has shown interest.[164] Alexandre Alahi from EPFL presented a video surveillance system that combines multiple Kinect devices to track groups of people even in complete darkness.[165] The companies So touch and Evoluce have developed presentation software for Kinect that can be controlled by hand gestures; among its features is a multi-touch zoom mode.[166] In December 2010, the free public beta of the HTPC software KinEmote was launched; it allows navigation of Boxee and XBMC menus using a Kinect sensor.[167] Soroush Falahati wrote an application that can be used to create stereoscopic 3D images with a Kinect sensor.[168] In human motion tracking, Kinect can suffer from occlusion, in which some human body joints are hidden and cannot be tracked accurately by Kinect's skeletal model.[169] Fusing its data with other sensors can therefore provide more robust tracking of the skeletal model. For instance, in one study, an unscented Kalman filter (UKF) was used to fuse Kinect 3D position data for the shoulder, elbow, and wrist joints with data obtained from two inertial measurement units (IMUs) placed on the upper and lower arm of a person.[170] The results showed an improvement of up to 50% in the accuracy of the position tracking of the joints. In addition to addressing the occlusion problem, because the sampling frequency of the IMUs was 100 Hz (compared to ~30 Hz for Kinect), the improvement in skeletal position was more evident during fast and dynamic movements. Kinect also shows compelling potential for use in medicine.
Researchers at the University of Minnesota have used Kinect to measure a range of disorder symptoms in children, creating new ways of objective evaluation to detect such conditions as autism, attention-deficit disorder and obsessive-compulsive disorder.[171] Several groups have reported using Kinect for intraoperative review of medical imaging, allowing the surgeon to access the information without contamination.[172][173] This technique is already in use at Sunnybrook Health Sciences Centre in Toronto, where doctors use it to guide imaging during cancer surgery.[174] At least one company, GestSure Technologies, is pursuing the commercialization of such a system.[175] NASA's Jet Propulsion Laboratory (JPL) signed up for the Kinect for Windows Developer program in November 2013 to use the new Kinect to manipulate a robotic arm in combination with an Oculus Rift virtual reality headset, creating "the most immersive interface" the unit had built to date.[176] Upon its release, the Kinect garnered generally positive opinions from reviewers and critics. IGN gave the device 7.5 out of 10, saying that "Kinect can be a tremendous amount of fun for casual players, and the creative, controller-free concept is undeniably appealing", though adding that for "$149.99, a motion-tracking camera add-on for Xbox 360 is a tough sell, especially considering that the entry level variation of Xbox 360 itself is only $199.99".[179] Game Informer rated Kinect 8 out of 10, praising the technology but noting that the experience takes a while to get used to and that the spatial requirement may pose a barrier.[178] Computer and Video Games called the device a technological gem and applauded the gesture and voice controls, while criticizing the launch lineup and the Kinect Hub.[177] CNET's review pointed out how Kinect keeps players active with its full-body motion sensing, but criticized the learning curve, the additional power supply needed for older Xbox 360 consoles, and the space requirements.[180] Engadget, too, listed the large space requirements as a negative, along with Kinect's launch lineup and the slowness of the hand-gesture UI. The review praised the system's powerful technology and the potential of its yoga and dance games.[181] Kotaku considered the device revolutionary upon first use but noted that games were sometimes unable to recognize gestures or had slow responses, concluding that Kinect is "not must-own yet, more like must-eventually own."[188] TechRadar praised the voice control and saw a great deal of potential in the device, though its lag and space requirements were identified as issues.[183] Gizmodo also noted Kinect's potential and expressed curiosity about how more mainstream titles would utilize the technology.[189] Ars Technica's review expressed concern that the core feature of Kinect, its lack of a controller, would hamper development of games beyond those that either have stationary players or control the player's movement automatically.[190] The mainstream press also reviewed Kinect. USA Today compared it to the futuristic control scheme seen in Minority Report, stating that "playing games feels great" and giving the device 3.5 out of 4 stars.[182] David Pogue of The New York Times predicted players will feel a "crazy, magical, omigosh rush the first time you try the Kinect."
Despite calling the motion tracking less precise than the Wii's implementation, Pogue concluded that "Kinect's astonishing technology creates a completely new activity that's social, age-spanning and even athletic."[191] The Globe and Mail described Kinect as setting a "new standard for motion control." The slight input lag between making a physical movement and Kinect registering it was not considered a major issue with most games, and the review called Kinect "a good and innovative product," rating it 3.5 out of 4 stars.[192] Although featuring improved performance over the original Kinect, its successor was met with mixed responses. In its Xbox One review, Engadget praised the Xbox One's Kinect functionality, such as face-recognition login and improved motion tracking, but said that while the device was "magical", "every false positive or unrecognized [voice] command had us reaching for the controller."[193] The Kinect's inability to understand some accents in English was criticized.[194] Writing for Time, Matt Peckham described the device as "chunky" in appearance, but found the facial-recognition login feature "creepy but equally sci-fi-future cool", and the new voice recognition system a "powerful, addictive way to navigate the console, and save for a few exceptions that seem to be smoothing out with use". However, its accuracy was found to be affected by background noise, and Peckham further noted that launching games using voice recognition required that the full title of the game be given rather than an abbreviated name that the console "ought to semantically understand", such as Forza Motorsport 5 rather than "Forza 5".[195] Prior to the Xbox One's launch, privacy concerns were raised over the new Kinect; critics expressed concern that the device could be used for surveillance, stemming from the originally announced requirement that the Xbox One's Kinect be plugged in at all times, plus the initial always-on DRM system that required the console to be connected to the internet to ensure continued functionality. Privacy advocates contended that the increased amount of data which could be collected with the new Kinect (such as a person's eye movements, heart rate, and mood) could be used for targeted advertising. Reports also surfaced regarding recent Microsoft patents involving Kinect, such as a DRM system based on detecting the number of viewers in a room, and tracking viewing habits by awarding achievements for watching television programs and advertising. While Microsoft stated that its privacy policy "prohibit[s] the collection, storage, or use of Kinect data for the purpose of advertising", critics did not rule out the possibility that these policies could be changed prior to the release of the console. Concerns were also raised that the device could record conversations, as its microphone remains active at all times. In response to the criticism, a Microsoft spokesperson stated that users are "in control of when Kinect sensing is On, Off or Paused", will be provided with key privacy information and settings during the console's initial setup, and that user-generated content such as photos and videos "will not leave your Xbox One without your explicit permission."[49][50][51][52] Microsoft ultimately reversed its decision to require Kinect usage on the Xbox One, but the console still shipped with the device upon its launch in November 2013.[54] While announcing Kinect's discontinuation in an interview with Fast Co.
Design on October 25, 2017, Microsoft stated that 35 million units had been sold since its release.[10] Some 24 million units of Kinect had been shipped by February 2013.[196] Having sold 8 million units in its first 60 days on the market, Kinect claimed the Guinness World Record as the "fastest selling consumer electronics device".[197][198][199][200] According to Wedbush analyst Michael Pachter, Kinect bundles accounted for about half of all Xbox 360 console sales in December 2010 and for more than two-thirds in February 2011.[201][202] More than 750,000 Kinect units were sold during the week of Black Friday 2011.[203][204] Kinect competed with several motion controllers on other home consoles, such as the Wii Remote, Wii Remote Plus and Wii Balance Board for the Wii and Wii U, PlayStation Move and PlayStation Eye for the PlayStation 3, and PlayStation Camera for the PlayStation 4. While the Xbox 360 Kinect's controller-less nature enabled it to offer a motion-controlled experience different from the wand-based controls of the Wii and PlayStation Move, it occasionally hindered developers from creating motion-controlled games that could target all three seventh-generation consoles and still provide the same experience regardless of console. Examples of seventh-generation motion-controlled games that were released on the Wii and PlayStation 3, but had a version for the Xbox 360 cancelled or ruled out from the start due to issues with translating wand controls to the camera-based movement of the Kinect, include Dead Space: Extraction,[205] The Lord of the Rings: Aragorn's Quest[206] and Phineas and Ferb: Across the 2nd Dimension.[207]
https://en.wikipedia.org/wiki/Kinect
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
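For reference, the structures in question satisfy the standard group axioms: a group is a set $G$ with a binary operation $\cdot : G \times G \to G$ such that
$$ (a \cdot b) \cdot c = a \cdot (b \cdot c), \qquad \exists\, e \in G : e \cdot a = a \cdot e = a, \qquad \forall a \,\exists\, a^{-1} \in G : a \cdot a^{-1} = a^{-1} \cdot a = e $$
for all $a, b, c \in G$ (associativity, identity, and inverses, respectively). Rings, fields, and vector spaces each contain an underlying group of this kind under their addition operation.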
https://en.wikipedia.org/wiki/List_of_group_theory_topics
Uncertain inference was first described by C. J. van Rijsbergen[1] as a way to formally define a query and document relationship in information retrieval. This formalization is a logical implication with an attached measure of uncertainty. Rijsbergen proposes that the measure of uncertainty of a document $d$ to a query $q$ be the probability of its logical implication, i.e. $P(d \to q)$. A user's query can be interpreted as a set of assertions about the desired document. It is the system's task to infer, given a particular document, whether the query assertions are true. If they are, the document is retrieved. In many cases the contents of documents are not sufficient to assert the queries. A knowledge base of facts and rules is needed, but some of them may be uncertain because there may be a probability associated with using them for inference. Therefore, we can also refer to this as plausible inference. The plausibility of an inference $d \to q$ is a function of the plausibility of each query assertion. Rather than retrieving a document that exactly matches the query, we should rank the documents based on their plausibility in regard to that query. Since $d$ and $q$ are both generated by users, they are error prone; thus $d \to q$ is uncertain. This will affect the plausibility of a given query. Multimedia documents, like images or videos, have different inference properties for each datatype. They are also different from text document properties. The framework of plausible inference allows us to measure and combine the probabilities coming from these different properties. Uncertain inference generalizes the notions of autoepistemic logic, where truth values are either known or unknown, and when known, they are true or false. If we have a query of the form $A \wedge B \wedge C$, where $A$, $B$ and $C$ are query assertions, then for a document $D$ we want the probability $P(D \to (A \wedge B \wedge C))$. If we transform this into the conditional probability $P((A \wedge B \wedge C) \mid D)$ and the query assertions are independent, we can calculate the overall probability of the implication as the product of the individual assertion probabilities. Croft and Krovetz[2] applied uncertain inference to an information retrieval system for office documents they called OFFICER. In office documents the independence assumption is valid, since a query will focus on their individual attributes. Besides analysing the content of documents, one can also query about the author, size, topic or collection, for example. They devised methods to compare document and query attributes, infer their plausibility and combine it into an overall rating for each document. Beyond that, uncertainty of document and query contents also had to be addressed. Probabilistic logic networks is a system for performing uncertain inference; crisp true/false truth values are replaced not only by a probability, but also by a confidence level, indicating the certitude of the probability. Markov logic networks allow uncertain inference to be performed; uncertainties are computed using the maximum entropy principle, in analogy to the way that Markov chains describe the uncertainty of finite-state machines.
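As a concrete illustration of the independence computation above, the overall plausibility of a conjunctive query can be taken as the product of the per-assertion probabilities. A minimal sketch (the probability values are invented for the example):

    def query_plausibility(assertion_probs):
        # P((A ∧ B ∧ C) | D) = P(A|D) * P(B|D) * P(C|D) under independence.
        p = 1.0
        for prob in assertion_probs:
            p *= prob
        return p

    # Hypothetical per-assertion probabilities for one document:
    print(query_plausibility([0.9, 0.7, 0.8]))   # 0.504, used to rank this document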
https://en.wikipedia.org/wiki/Uncertain_inference
Constrained writing is a literary technique in which the writer is bound by some condition that forbids certain things or imposes a pattern.[1] Constraints are very common in poetry, which often requires the writer to use a particular verse form. Constraints on writing are common and can serve a variety of purposes. For example, a text may place restrictions on its vocabulary, e.g. Basic English, copula-free text, defining vocabulary for dictionaries, and other limited vocabularies for teaching English as a second language or to children. In poetry, formal constraints abound in both mainstream and experimental work. Familiar elements of poetry like rhyme and meter are often applied as constraints. Well-established verse forms like the sonnet, sestina, villanelle, limerick, and haiku are variously constrained by meter, rhyme, repetition, length, and other characteristics. Outside of established traditions, particularly in the avant-garde, writers have produced a variety of work under more severe constraints; this is often what the term "constrained writing" is specifically applied to. The Oulipo group is a gathering of writers who use such techniques. The Outrapo group uses theatrical constraints.[3] There are also a number of constrained writing forms that are restricted by length.
https://en.wikipedia.org/wiki/Constrained_writing
In mathematical logic, algebraic logic is the reasoning obtained by manipulating equations with free variables. What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and connected problems like representation and duality. Well-known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic (Czelakowski 2003). Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator (Czelakowski 2003). A homogeneous binary relation is found in the power set of $X \times X$ for some set $X$, while a heterogeneous relation is found in the power set of $X \times Y$, where $X \neq Y$. Whether a given relation holds for two individuals is one bit of information, so relations are studied with Boolean arithmetic. Elements of the power set are partially ordered by inclusion, and the lattice of these sets becomes an algebra through relative multiplication or composition of relations. "The basic operations are set-theoretic union, intersection and complementation, the relative multiplication, and conversion."[1] The conversion refers to the converse relation, which always exists, contrary to function theory. A given relation may be represented by a logical matrix; then the converse relation is represented by the transpose matrix. A relation obtained as the composition of two others is then represented by the logical matrix obtained by matrix multiplication using Boolean arithmetic. An example of the calculus of relations arises in erotetics, the theory of questions. In the universe of utterances there are statements $S$ and questions $Q$. There are two relations $\pi$ and $\alpha$ from $Q$ to $S$: $q \alpha a$ holds when $a$ is a direct answer to question $q$. The other relation, $q \pi p$, holds when $p$ is a presupposition of question $q$. The converse relation $\pi^T$ runs from $S$ to $Q$, so that the composition $\pi^T \alpha$ is a homogeneous relation on $S$.[2] The art of putting the right question to elicit a sufficient answer is recognized in Socratic method dialogue. The description of the key binary relation properties has been formulated with the calculus of relations. The univalence property of functions describes a relation $R$ that satisfies the formula $R^T R \subseteq I$, where $I$ is the identity relation on the range of $R$. The injective property corresponds to univalence of $R^T$, or the formula $R R^T \subseteq I$, where this time $I$ is the identity on the domain of $R$. But a univalent relation is only a partial function, while a univalent total relation is a function. The formula for totality is $I \subseteq R R^T$. Charles Loewner and Gunther Schmidt use the term mapping for a total, univalent relation.[3][4] The facility of complementary relations inspired Augustus De Morgan and Ernst Schröder to introduce equivalences using $\bar{R}$ for the complement of relation $R$. These equivalences provide alternative formulas for univalent relations ($R\bar{I} \subseteq \bar{R}$) and total relations ($\bar{R} \subseteq R\bar{I}$).
Therefore, mappings satisfy the formula $\bar{R} = R\bar{I}$. Schmidt uses this principle as "slipping below negation from the left".[5] For a mapping $f$, $f\bar{A} = \overline{fA}$. The relation algebra structure, based in set theory, was transcended by Tarski with axioms describing it. Then he asked if every algebra satisfying the axioms could be represented by a set relation. The negative answer[6] opened the frontier of abstract algebraic logic.[7][8][9] Algebraic logic treats algebraic structures, often bounded lattices, as models (interpretations) of certain logics, making logic a branch of order theory. Some of these structures are either Boolean algebras or proper extensions thereof. Modal and other nonclassical logics are typically modeled by what are called "Boolean algebras with operators." Algebraic logic is, perhaps, the oldest approach to formal logic, arguably beginning with a number of memoranda Leibniz wrote in the 1680s, some of which were published in the 19th century and translated into English by Clarence Lewis in 1918.[10]: 291–305 But nearly all of Leibniz's known work on algebraic logic was published only in 1903, after Louis Couturat discovered it in Leibniz's Nachlass. Parkinson (1966) and Loemker (1969) translated selections from Couturat's volume into English. Modern mathematical logic began in 1847, with two pamphlets whose respective authors were George Boole[11] and Augustus De Morgan.[12] In 1870 Charles Sanders Peirce published the first of several works on the logic of relatives. Alexander Macfarlane published his Principles of the Algebra of Logic[13] in 1879, and in 1883, Christine Ladd, a student of Peirce at Johns Hopkins University, published "On the Algebra of Logic".[14] Logic turned more algebraic when binary relations were combined with composition of relations. For sets $A$ and $B$, a relation over $A$ and $B$ is represented as a member of the power set of $A \times B$, with properties described by Boolean algebra. The "calculus of relations"[9] is arguably the culmination of Leibniz's approach to logic. At the Hochschule Karlsruhe the calculus of relations was described by Ernst Schröder.[15] In particular he formulated Schröder rules, though De Morgan had anticipated them with his Theorem K. In 1903 Bertrand Russell developed the calculus of relations and logicism as his version of pure mathematics based on the operations of the calculus as primitive notions.[16] The "Boole–Schröder algebra of logic" was developed at the University of California, Berkeley in a textbook by Clarence Lewis in 1918.[10] He treated the logic of relations as derived from the propositional functions of two or more variables. Hugh MacColl, Gottlob Frege, Giuseppe Peano, and A. N. Whitehead all shared Leibniz's dream of combining symbolic logic, mathematics, and philosophy. Some writings by Leopold Löwenheim and Thoralf Skolem on algebraic logic appeared after the 1910–13 publication of Principia Mathematica, and Tarski revived interest in relations with his 1941 essay "On the Calculus of Relations".[9] According to Helena Rasiowa, "The years 1920-40 saw, in particular in the Polish school of logic, researches on non-classical propositional calculi conducted by what is termed the logical matrix method.
Since logical matrices are certain abstract algebras, this led to the use of an algebraic method in logic."[17] Brady (2000) discusses the rich historical connections between algebraic logic and model theory. The founders of model theory, Ernst Schröder and Leopold Loewenheim, were logicians in the algebraic tradition. Alfred Tarski, the founder of set-theoretic model theory as a major branch of contemporary mathematical logic, also worked in this tradition. In the practice of the calculus of relations, Jacques Riguet used algebraic logic to advance useful concepts: he extended the concept of an equivalence relation (on a set) to the heterogeneous case with the notion of a difunctional relation. Riguet also extended ordering to the heterogeneous context by his note that a staircase logical matrix has a complement that is also a staircase, and that the theorem of N. M. Ferrers follows from interpretation of the transpose of a staircase. Riguet generated rectangular relations by taking the outer product of logical vectors; these contribute to the non-enlargeable rectangles of formal concept analysis. Leibniz had no influence on the rise of algebraic logic because his logical writings were little studied before the Parkinson and Loemker translations. Our present understanding of Leibniz as a logician stems mainly from the work of Wolfgang Lenzen, summarized in Lenzen (2004). To see how present-day work in logic and metaphysics can draw inspiration from, and shed light on, Leibniz's thought, see Zalta (2000).
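The logical-matrix calculus described above is straightforward to make concrete: representing relations as Boolean matrices, composition becomes Boolean matrix multiplication and the converse becomes the transpose. A small illustrative sketch (not drawn from any source cited in the article):

    def compose(R, S):
        # (R;S)[i][j] holds iff some k has R[i][k] and S[k][j] (Boolean matrix product).
        return [[any(R[i][k] and S[k][j] for k in range(len(S)))
                 for j in range(len(S[0]))]
                for i in range(len(R))]

    def converse(R):
        # The converse relation is represented by the transpose matrix.
        return [list(row) for row in zip(*R)]

    # Over the set {0, 1}: R relates 0 to 1, and S relates 1 to 0,
    # so the composition R;S relates 0 to 0.
    R = [[False, True], [False, False]]
    S = [[False, False], [True, False]]
    print(compose(R, S))   # [[True, False], [False, False]]
    print(converse(R))     # [[False, False], [True, False]]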
https://en.wikipedia.org/wiki/Calculus_of_relations
Associationism is the idea that mental processes operate by the association of one mental state with its successor states.[1] It holds that all mental processes are made up of discrete psychological elements and their combinations, which are believed to be made up of sensations or simple feelings.[2] In philosophy, this idea is viewed as the outcome of empiricism and sensationism.[3] The concept encompasses a psychological theory as well as a comprehensive philosophical foundation and scientific methodology.[2] The idea is first recorded in Plato and Aristotle, especially with regard to the succession of memories. In particular, the model is traced back to the Aristotelian notion that human memory encompasses all mental phenomena. The model was discussed in detail in the philosopher's work Memory and Reminiscence.[4] This view was widely embraced until the emergence of British associationism, which began with Thomas Hobbes.[4] Members of the Associationist School, including John Locke, David Hume, David Hartley, Joseph Priestley, James Mill, John Stuart Mill, Alexander Bain, and Ivan Pavlov, asserted that the principle applied to all or most mental processes.[5] The phrase "association of ideas" was first used by John Locke in 1689. In chapter 33 of An Essay Concerning Human Understanding, which is entitled "Of the Association of Ideas", he describes the ways that ideas can be connected to each other.[6] He writes, "Some of our ideas have a natural correspondence and connection with one another."[7] Although he believed that some associations were natural and justified, he believed that others were illogical, causing errors in judgment. He also explains that one can associate some ideas together based on one's education and culture, saying, "there is another connection of ideas wholly owing to chance or custom".[6][7] The term associationism later became more prominent in psychology, and the psychologists who subscribed to the idea became known as "the associationists".[6] Locke's view that the mind and body are two aspects of the same unified phenomenon can be traced back to Aristotle's ideas on the subject.[8] In his 1740 book A Treatise of Human Nature, David Hume outlines three principles by which ideas may be connected to each other: resemblance, contiguity in time or place, and cause or effect.[9] He argues that the mind uses these principles, rather than reason, to traverse from idea to idea.[6] He writes, "When the mind, therefore, passes from the idea or impression of one object to the idea or belief of another, it is not determined by reason, but by certain principles, which associate together the ideas of these objects, and unite them in the imagination."[9] These connections are formed in the mind by observation and experience. Hume does not believe that any of these associations are "necessary" in the sense that ideas or objects are truly connected; instead, he sees them as mental tools used for creating a useful mental representation of the world.[6] Later members of the school developed very specific principles elaborating how associations worked, and even a physiological mechanism bearing no resemblance to modern neurophysiology.[10] For a fuller explanation of the intellectual history of associationism and the "Associationist School", see Association of Ideas.
Associationism is often concerned with middle- to higher-level mental processes such as learning.[8] For instance, thesis, antithesis, and synthesis are linked in one's mind through repetition so that they become inextricably associated with one another.[8] Among the earliest experiments testing the applications of associationism was Hermann Ebbinghaus' work. He was considered the first experimenter to apply the associationist principles systematically, and used himself as a subject to study and quantify the relationship between rehearsal and recollection of material.[8] Some of the ideas of the Associationist School also anticipated the principles of conditioning and its use in behavioral psychology.[5] Both classical conditioning and operant conditioning use positive and negative associations as means of conditioning.[10]
https://en.wikipedia.org/wiki/Associationism
Nominal TAM is the indication of tense–aspect–mood by inflecting a noun, rather than a verb. In clausal nominal TAM, the noun indicates TAM information about the clause (as opposed to the noun phrase). Whether or not a particular language can best be understood as having clausal nominal TAM can be controversial, and there are various borderline cases. A language that can indicate tense by attaching a verbal clitic to a noun (such as the -'ll clitic in English) is not generally regarded as using nominal TAM. Various languages have been shown to have clausal nominal TAM.[1] In the Niger-Congo language Supyire, the form of the first person and second person pronouns reflects whether the clause has declarative or non-declarative mood. In the Gǀwi language of Botswana, subject pronouns reflect the imperative or non-imperative mood of the clause (while the verb itself does not). In the Chamicuro language of Peru, the definite article accompanying the subject or object of a clause indicates either past or non-past tense. In the Pitta Pitta language of Australia, the mandatory case marking system differs depending on the tense of the clause. Other languages exhibiting clausal nominal TAM include Lardil (Australia), Gurnu (Australia), Yag Dii (Cameroon), Sahidic Coptic (Egypt), Gusiilay (Niger-Congo), Iai (Oceania), Tigak (Oceania), and Guaymi (Panama and Costa Rica). In the Guarani language of Paraguay, nouns can optionally take several different past and future markers to express ideas[2] such as "our old house (the one we no longer live in)", "the abandoned car", "what was once a bridge", "bride-to-be" or even "my ex-future-wife", or rather, "the woman who at one point was going to be my wife." Although verbal clitics such as -'ll in English are attached to nouns and indicate TAM information, they are not really examples of nominal TAM, because they are clitics rather than inflections and therefore not part of the noun at all.[3] This is easily seen in sentences where the clitic is attached to another part of speech, such as "The one you want'll be in the shed". Another way to tell the difference is to consider a hypothetical dialogue in which a speaker wishes to stress the futurity of an event: the speaker cannot emphasise the future time by placing voice stress on she'll, and so instead uses the expanded phrase she will. This is characteristic of clitics as opposed to inflections (i.e. clitics cannot be emphasised by placing voice stress on the word to which they are attached). The significance of this can be seen by comparison with a second hypothetical dialogue, using the English negative suffix -n't (which is best understood as an inflection rather than a clitic): in this case the speaker could choose to say isn't rather than is not. Even though the stress then falls on the syllable IS, the meaning of the sentence is understood as emphasising the NOT. This indicates that isn't is one inflected word rather than a word with a clitic attached.
https://en.wikipedia.org/wiki/Nominal_TAM
In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold in all of the function's domain, and not only for nearby points. Consider a differentiable function $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$, defined on a (nonempty) convex open set $X$ of the finite-dimensional Euclidean space $\mathbb{R}^n$. This function is said to be pseudoconvex if the following property holds:[1] for all $x, y \in X$, if $f(y) < f(x)$, then $\nabla f(x) \cdot (y - x) < 0$. Equivalently: if $\nabla f(x) \cdot (y - x) \geq 0$, then $f(y) \geq f(x)$. Here $\nabla f$ is the gradient of $f$, defined by $\nabla f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right)$. Note that the definition may also be stated in terms of the directional derivative of $f$ in the direction given by the vector $v = y - x$. This is because, as $f$ is differentiable, this directional derivative is given by $\nabla f(x) \cdot v$. Every convex function is pseudoconvex, but the converse is not true. For example, the function $f(x) = x + x^3$ is pseudoconvex but not convex. Similarly, any pseudoconvex function is quasiconvex; but the converse is not true, since the function $f(x) = x^3$ is quasiconvex but not pseudoconvex. This can be summarized schematically as: convex $\Rightarrow$ pseudoconvex $\Rightarrow$ quasiconvex. To see that $f(x) = x^3$ is not pseudoconvex, consider its derivative at $x = 0$: $f'(0) = 0$. Then, if $f(x) = x^3$ were pseudoconvex, we should have: $f'(0)(y - 0) \geq 0$ implies $f(y) \geq f(0)$ for every $y$. In particular it should be true for $y = -1$. But it is not, as $f(-1) = (-1)^3 = -1 < f(0) = 0$. For any differentiable function, we have Fermat's theorem necessary condition of optimality, which states that if $f$ has a local minimum at $x^*$ in an open domain, then $x^*$ must be a stationary point of $f$ (that is, $\nabla f(x^*) = 0$). Pseudoconvexity is of great interest in the area of optimization, because the converse is also true for any pseudoconvex function. That is:[2] if $x^*$ is a stationary point of a pseudoconvex function $f$, then $f$ has a global minimum at $x^*$. Note also that the result guarantees a global minimum (not only local). This last result is also true for a convex function, but it is not true for a quasiconvex function. Consider for example the quasiconvex function $f(x) = x^3$ discussed above. This function is not pseudoconvex, but it is quasiconvex. Also, the point $x = 0$ is a critical point of $f$, as $f'(0) = 0$. However, $f$ does not have a global minimum at $x = 0$ (not even a local minimum). Finally, note that a pseudoconvex function may not have any critical point. Take for example the pseudoconvex function $f(x) = x^3 + x$, whose derivative is always positive: $f'(x) = 3x^2 + 1 > 0$ for all $x \in \mathbb{R}$.
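The pseudoconvexity of $f(x) = x + x^3$ claimed above is an instance of a simple general fact that can be verified directly from the definition: if $f$ is differentiable on $\mathbb{R}$ with $f'(x) > 0$ everywhere, then $f$ is strictly increasing, so $f(y) < f(x)$ implies $y < x$, and hence $f'(x)(y - x) < 0$, which is exactly the pseudoconvexity condition. The function $f(x) = x + x^3$ is nevertheless not convex, since $f''(x) = 6x < 0$ for $x < 0$.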
An example of a function that is pseudoconvex, but not convex, is $f(x) = \frac{x^2}{x^2 + k}$ for $k > 0$ (for instance, with $k = 0.2$). This example may be generalized to two variables. The previous example may also be modified to obtain a function that is neither convex nor pseudoconvex, but is quasiconvex (for parameter values such as $k = 0.5$ and $p = 0.6$); such a function fails to be convex because of its concavity, and fails to be pseudoconvex because it is not differentiable at $x = 0$. The notion of pseudoconvexity can be generalized to nondifferentiable functions as follows.[3] Given any function $f : X \to \mathbb{R}$, we can define the upper Dini derivative of $f$ by $$f^+(x, u) = \limsup_{h \to 0^+} \frac{f(x + hu) - f(x)}{h},$$ where $u$ is any unit vector. The function is said to be pseudoconvex if it is increasing in any direction where the upper Dini derivative is positive. More precisely, this is characterized in terms of the subdifferential $\partial f$, where $[x, y]$ denotes the line segment adjoining $x$ and $y$. A pseudoconcave function is a function whose negative is pseudoconvex. A pseudolinear function is a function that is both pseudoconvex and pseudoconcave.[4] For example, linear–fractional programs have pseudolinear objective functions and linear–inequality constraints. These properties allow fractional-linear problems to be solved by a variant of the simplex algorithm (of George B. Dantzig).[5][6][7] Given a vector-valued function $\eta$, there is a more general notion of $\eta$-pseudoconvexity[8][9] and $\eta$-pseudolinearity, wherein classical pseudoconvexity and pseudolinearity pertain to the case when $\eta(x, y) = y - x$.
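The definition can also be checked numerically for the example $f(x) = x^2/(x^2 + k)$ above, by testing its contrapositive form ($\nabla f(x)(y - x) \geq 0$ implies $f(y) \geq f(x)$) over a grid of points. A small sketch, with $k = 0.2$ as above:

    def f(x, k=0.2):
        return x * x / (x * x + k)

    def fprime(x, k=0.2):
        # f'(x) = 2kx / (x^2 + k)^2
        return 2 * k * x / (x * x + k) ** 2

    pts = [i / 10.0 for i in range(-50, 51)]   # grid on [-5, 5]
    violations = [(x, y) for x in pts for y in pts
                  if fprime(x) * (y - x) >= 0 and f(y) < f(x) - 1e-12]
    print(len(violations))   # 0: no counterexample to pseudoconvexity on the grid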
https://en.wikipedia.org/wiki/Pseudoconvex_function
A binary clock is a clock that displays the time of day in a binary format. Originally, such clocks showed each decimal digit of sexagesimal time as a binary value, but presently binary clocks also exist which display hours, minutes, and seconds as binary numbers. Most binary clocks are digital, although analog varieties exist. True binary clocks also exist, which indicate the time by successively halving the day, instead of using hours, minutes, or seconds. Similar clocks, based on Gray-coded binary, also exist. Most common binary clocks use six columns of LEDs to represent zeros and ones. Each column represents a single decimal digit, a format known as binary-coded decimal (BCD). The bottom row in each column represents 1 (or 2⁰), with each row above representing higher powers of two, up to 2³ (or 8). To read each individual digit in the time, the user adds the values that each illuminated LED represents, then reads these from left to right. The first two columns represent the hour, the next two represent the minute and the last two represent the second. Since zero digits are not illuminated, the positions of each digit must be memorized if the clock is to be usable in the dark. Binary clocks that display time in binary-coded sexagesimal also exist. Instead of representing each digit of traditional sexagesimal time with one binary number, each component of traditional sexagesimal time is represented with one binary number, that is, using up to 6 bits instead of only 4. For 24-hour binary-coded sexagesimal clocks, there are 11 or 17 LED lights to show the time: 5 LEDs show the hours, 6 LEDs show the minutes, and 6 LEDs show the seconds (the seconds LEDs are not used in clocks with 11 lights). A format also exists where hours, minutes and seconds are shown on three lines, instead of columns, as binary numbers.[1] Less commonly, the day could be divided into binary fractions, such as ½ day, ¼ day, etc. The clock would show the time in 16 bits, where the smallest unit would be exactly 1⁄65536 day, or 675⁄512 (about 1.318) seconds.[2] An analog format of this type also exists.[3] However, it is much easier to write and express this in hexadecimal, which would be hexadecimal time.
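The BCD scheme described above is simple to compute. A minimal sketch (illustrative, not tied to any particular clock product) that converts a time into the six columns of lit LEDs, with the 1-row (bit 0) first in each column:

    def bcd_columns(hh, mm, ss):
        # Six decimal digits: tens/units of hours, minutes, seconds.
        digits = [hh // 10, hh % 10, mm // 10, mm % 10, ss // 10, ss % 10]
        # Each column is four bits, least significant (the bottom LED row) first.
        return [[(d >> bit) & 1 for bit in range(4)] for d in digits]

    # 10:37:49 -> digit columns 1, 0, 3, 7, 4, 9
    for column in bcd_columns(10, 37, 49):
        print(column)   # e.g. the digit 7 prints [1, 1, 1, 0]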
https://en.wikipedia.org/wiki/Binary_time
In the Standard Generalized Markup Language (SGML), an entity is a primitive data type which associates a string with either a unique alias (such as a user-specified name) or an SGML reserved word (such as #DEFAULT). Entities are foundational to the organizational structure and definition of SGML documents. The SGML specification defines numerous entity types, which are distinguished by keyword qualifiers and context. An entity string value may variously consist of plain text, SGML tags, and/or references to previously defined entities. Certain entity types may also invoke external documents. Entities are called by reference. Entities are classified as general or parameter, and are further classified as parsed or unparsed. An internal entity has a value that is either a literal string, or a parsed string comprising markup and entities defined in the same document (such as a Document Type Declaration or subdocument). In contrast, an external entity has a declaration that invokes an external document, thereby necessitating the intervention of an entity manager to resolve the external document reference. An entity declaration may have a literal value, or may have some combination of an optional SYSTEM identifier, which allows SGML parsers to process an entity's string referent as a resource identifier, and an optional PUBLIC identifier, which identifies the entity independent of any particular representation. In XML, a subset of SGML, an entity declaration may not have a PUBLIC identifier without a SYSTEM identifier. When an external entity references a complete SGML document, it is known in the calling document as an SGML document entity. An SGML document is a text document with SGML markup defined in an SGML prologue (i.e., the DTD and subdocuments). A complete SGML document comprises not only the document instance itself, but also the prologue and, optionally, the SGML declaration (which defines the document's markup syntax and declares the character encoding).[1] An entity is defined via an entity declaration in a document's document type definition (DTD), for example (an illustrative pair of declarations; the entity names here are arbitrary):

    <!ENTITY greeting SYSTEM "hello.txt">
    <!ENTITY signature "Regards, Ed">

This DTD markup declares a parsed general external entity named greeting, whose replacement text is the contents of the file hello.txt, and a parsed general internal entity named signature, whose replacement text is a literal string. Names for entities must follow the rules for SGML names, and there are limitations on where entities can be referenced. Parameter entities are referenced by placing the entity name between % and ;. Parsed general entities are referenced by placing the entity name between "&" and ";". Unparsed entities are referenced by placing the entity name in the value of an attribute declared as type ENTITY. The general entities from the example above might be referenced in a document as follows:

    <p>&greeting; &signature;</p>

When parsed, this document would be reported to the downstream application the same as if it had been written as follows, assuming the hello.txt file contains the text Salutations:

    <p>Salutations Regards, Ed</p>

A reference to an undeclared entity is an error unless a default entity has been defined. Additional markup constructs and processor options may affect whether and how entities are processed. For example, a processor may optionally ignore external entities. Standard entity sets for SGML and some of its derivatives have been developed as mnemonic devices, to ease document authoring when there is a need to use characters that are not easily typed or that are not widely supported by legacy character encodings. Each such entity consists of just one character from the Universal Character Set. Although any character can be referenced using a numeric character reference, a character entity reference allows characters to be referenced by name instead of code point.
For example, HTML 4 has 252 built-in character entities that do not need to be explicitly declared, while XML has five. XHTML has the same five as XML, but if its DTDs are explicitly used, then it has 253 (&apos; being the extra entity beyond those in HTML 4).
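The practical difference between named character entity references and numeric character references can be seen with HTML's entity set; Python's standard html module resolves both forms (a small demonstration of entity resolution using HTML's entities, not a full SGML parser):

    import html

    print(html.unescape("&lt;"))      # '<' — named character entity reference
    print(html.unescape("&#60;"))     # '<' — numeric (decimal) character reference
    print(html.unescape("&#x3C;"))    # '<' — numeric (hexadecimal) character reference
    print(html.unescape("&eacute;"))  # 'é' — one of HTML's built-in named entities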
https://en.wikipedia.org/wiki/SGML_entity
Random indexing is a dimensionality reduction method and computational framework for distributional semantics, based on the insight that very-high-dimensional vector space model implementations are impractical, that models need not grow in dimensionality when new items (e.g. new terminology) are encountered, and that a high-dimensional model can be projected into a space of lower dimensionality without compromising L2 distance metrics if the resulting dimensions are chosen appropriately. This is the original point of the random projection approach to dimension reduction first formulated as the Johnson–Lindenstrauss lemma, and locality-sensitive hashing has some of the same starting points. Random indexing, as used in representation of language, originates from the work of Pentti Kanerva[1][2][3][4][5] on sparse distributed memory, and can be described as an incremental formulation of a random projection.[6] It can also be verified that random indexing is a random projection technique for the construction of Euclidean spaces, i.e. L2-normed vector spaces.[7] In Euclidean spaces, random projections are elucidated using the Johnson–Lindenstrauss lemma.[8] The TopSig technique[9] extends the random indexing model to produce bit vectors for comparison with the Hamming distance similarity function. It is used for improving the performance of information retrieval and document clustering. In a similar line of research, Random Manhattan Integer Indexing (RMII)[10] is proposed for improving the performance of methods that employ the Manhattan distance between text units. Many random indexing methods primarily generate similarity from co-occurrence of items in a corpus. Reflexive Random Indexing (RRI)[11] generates similarity from co-occurrence and from shared occurrence with other items.
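A minimal sketch of the incremental scheme (with illustrative parameters; practical systems use on the order of thousands of dimensions): each term is assigned a fixed sparse ternary index vector, and a term's context vector is accumulated as the sum of the index vectors of its co-occurring terms, so the model never grows in dimensionality as new terms appear.

    import random

    DIM, NONZERO = 512, 8    # illustrative sizes

    def index_vector():
        # Sparse ternary random index vector: a few +1/-1 entries, rest zero.
        v = [0] * DIM
        for pos in random.sample(range(DIM), NONZERO):
            v[pos] = random.choice((-1, 1))
        return v

    index = {}     # term -> fixed random index vector
    context = {}   # term -> accumulated context vector

    def train(tokens, window=2):
        for i, term in enumerate(tokens):
            index.setdefault(term, index_vector())
            ctx = context.setdefault(term, [0] * DIM)
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    neighbour = index.setdefault(tokens[j], index_vector())
                    for d in range(DIM):
                        ctx[d] += neighbour[d]

    train("the cat sat on the mat".split())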
https://en.wikipedia.org/wiki/Random_indexing
Shellshock, also known as Bashdoor,[1] is a family of security bugs[2] in the Unix Bash shell, the first of which was disclosed on 24 September 2014. Shellshock could enable an attacker to cause Bash to execute arbitrary commands and gain unauthorized access[3] to many Internet-facing services, such as web servers, that use Bash to process requests. On 12 September 2014, Stéphane Chazelas informed Bash's maintainer Chet Ramey[1] of his discovery of the original bug, which he called "Bashdoor". Working with security experts, Chazelas developed a patch[1] (fix) for the issue, which by then had been assigned the vulnerability identifier CVE-2014-6271.[4] The existence of the bug was announced to the public on 24 September 2014, when Bash updates with the fix were ready for distribution.[5] The bug Chazelas discovered caused Bash to unintentionally execute commands when the commands are concatenated to the end of function definitions stored in the values of environment variables.[1][6] Within days of its publication, a variety of related vulnerabilities were discovered (CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186 and CVE-2014-7187). Ramey addressed these with a series of further patches.[7][8] Attackers exploited Shellshock within hours of the initial disclosure by creating botnets of compromised computers to perform distributed denial-of-service attacks and vulnerability scanning.[9][10] Security companies recorded millions of attacks and probes related to the bug in the days following the disclosure.[11][12] Because of the potential to compromise millions of unpatched systems, Shellshock was compared to the Heartbleed bug in its severity.[3][13] The Shellshock bug affects Bash, a program that various Unix-based systems use to execute command lines and command scripts. It is often installed as the system's default command-line interface. Analysis of the source code history of Bash shows the bug was introduced on 5 August 1989, and released in Bash version 1.03 on 1 September 1989.[14][15][16] Shellshock is an arbitrary code execution vulnerability that offers a way for users of a system to execute commands that should be unavailable to them. This happens through Bash's "function export" feature, whereby one Bash process can share command scripts with other Bash processes that it executes.[17] This feature is implemented by encoding the scripts in a table that is shared between the processes, known as the environment variable list. Each new Bash process scans this table for encoded scripts, assembles each one into a command that defines that script in the new process, and executes that command.[18] The new process assumes that the scripts found in the list come from another Bash process, but it cannot verify this, nor can it verify that the command that it has built is a properly formed script definition. Therefore, an attacker can execute arbitrary commands on the system or exploit other bugs that may exist in Bash's command interpreter, if the attacker has a way to manipulate the environment variable list and then cause Bash to run. At the time the bug was discovered, Bash was installed on macOS and many Linux operating systems as the main command interpreter, so that any program that used the system function to run any other program would use Bash to do so. The presence of the bug was announced to the public on 24 September 2014, when Bash updates with the fix were ready for distribution,[5] though it took some time for computers to be updated to close the potential security issue.
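The function-export mechanism at the root of the bug can be observed with ordinary shell commands (a harmless sketch; the function name is illustrative):

    greet() { echo "hello from an exported function"; }
    export -f greet     # Bash encodes the function in the environment variable list
    bash -c greet       # a child Bash finds the encoding, rebuilds the function, runs it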
Within an hour of the announcement of the Bash vulnerability, there were reports of machines being compromised by the bug. By 25 September 2014, botnets based on computers compromised with exploits based on the bug were being used by attackers for distributed denial-of-service (DDoS) attacks and vulnerability scanning.[9][10][19] Kaspersky Lab reported that machines compromised in an attack, dubbed "Thanks-Rob", were conducting DDoS attacks against three targets, which they did not identify.[9] On 26 September 2014, a Shellshock-related botnet dubbed "wopbot" was reported, which was being used for a DDoS attack against Akamai Technologies and to scan the United States Department of Defense.[10] On 26 September, the security firm Incapsula noted 17,400 attacks on more than 1,800 web domains, originating from 400 unique IP addresses, in the previous 24 hours; 55% of the attacks were coming from China and the United States.[11] By 30 September, the website performance firm CloudFlare said it was tracking approximately 1.5 million attacks and probes per day related to the bug.[12] On 6 October, it was widely reported that Yahoo! servers had been compromised in an attack related to the Shellshock issue.[20][21] Yet the next day, it was denied that it had been Shellshock that specifically had allowed these attacks.[22] The maintainer of Bash was warned about the first discovery of the bug on 12 September 2014; a fix followed soon after.[1] A few companies and distributors were informed before the matter was publicly disclosed on 24 September 2014 with CVE identifier CVE-2014-6271.[4][5] However, after the release of the patch there were subsequent reports of different, yet related vulnerabilities.[28] On 26 September 2014, two open-source contributors, David A. Wheeler and Norihiro Tanaka, noted that there were additional issues, even after patching systems using the most recently available patches. In an email addressed to the oss-sec and bash-bug mailing lists, Wheeler wrote: "This patch just continues the 'whack-a-mole' [sic] job of fixing parsing errors that began with the first patch. Bash's parser is certain [to] have many many many other vulnerabilities".[29] On 27 September 2014, Michał Zalewski from Google Inc. announced his discovery of other Bash vulnerabilities,[7] one based upon the fact that Bash is typically compiled without address space layout randomization.[30] On 1 October, Zalewski released details of the final bugs and confirmed that a patch by Florian Weimer from Red Hat posted on 25 September does indeed prevent them. He did that using a fuzzing technique with the aid of a software utility known as american fuzzy lop.[31] This original form of the vulnerability (CVE-2014-6271) involves a specially crafted environment variable containing an exported function definition, followed by arbitrary commands. Bash incorrectly executes the trailing commands when it imports the function.[32] The vulnerability can be tested with a single command (shown below). In systems affected by the vulnerability, the test will display the word "vulnerable" as a result of Bash executing the command "echo vulnerable", which was embedded into the specially crafted environment variable named "x".[8][33] Discovered by Michał Zalewski,[7][30][34] the vulnerability CVE-2014-6277, which relates to the parsing of function definitions in environment variables by Bash, can cause a segfault.[35] Also discovered by Michał Zalewski,[35][36] this bug (CVE-2014-6278) relates to the parsing of function definitions in environment variables by Bash.
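The widely circulated test for the original bug, as published in the disclosure advisories, is:

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    # Vulnerable Bash prints "vulnerable" before "this is a test";
    # patched Bash prints only "this is a test" (possibly with an import warning).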
On the same day the original vulnerability was published, Tavis Ormandy discovered this related bug (CVE-2014-7169),[24] which is demonstrated in the test shown below. On a vulnerable system, this would execute the command "date" unintentionally.[24] On a system that has a patch for CVE-2014-6271 but not CVE-2014-7169, the system displays syntax errors, notifying the user that CVE-2014-6271 has been prevented, but still writes a file named 'echo' into the working directory, containing the result of the 'date' call. A system patched for both CVE-2014-6271 and CVE-2014-7169 will simply echo the word "date", and the file "echo" will not be created. Florian Weimer and Todd Sabin found this bug (CVE-2014-7186),[8][31] which relates to an out-of-bounds memory access error in the Bash parser code.[37] An example of the vulnerability leverages the use of multiple "<<EOF" declarations (nested "here documents"); a vulnerable system will echo the text "CVE-2014-7186 vulnerable, redir_stack" (see the tests below). Also found by Florian Weimer,[8] CVE-2014-7187 is an off-by-one error in the Bash parser code, allowing out-of-bounds memory access.[38] An example of the vulnerability leverages the use of multiple "done" declarations; a vulnerable system will echo the text "CVE-2014-7187 vulnerable, word_lineno". This test requires a shell that supports brace expansion.[39] By 24 September 2014, Bash maintainer Chet Ramey had provided patch version bash43-025 of Bash 4.3 addressing CVE-2014-6271,[40] which was already packaged by distribution maintainers. On 24 September, bash43-026 followed, addressing CVE-2014-7169.[41] Then CVE-2014-7186 was discovered. Florian Weimer from Red Hat posted some patch code for this "unofficially" on 25 September,[42] which Ramey incorporated into Bash as bash43-027.[43][44] These patches provided source code only, helpful only for those who know how to compile ("rebuild") a new Bash binary executable file from the patch file and remaining source code files. The patches added a variable name prefix when functions are exported; this prevented arbitrary variables from triggering the vulnerability and enabled other programs to remove Bash functions from the environment. The next day, Red Hat officially released corresponding updates for Red Hat Enterprise Linux,[45][46] and a day later for Fedora 21.[47] Canonical Ltd. presented updates for its Ubuntu Long Term Support versions on Saturday, 27 September;[48] on Sunday, there were updates for SUSE Linux Enterprise.[49] The following Monday and Tuesday at the end of the month, Mac OS X updates appeared.[50][51] On 1 October 2014, Michał Zalewski from Google Inc. finally stated that Weimer's code and bash43-027 had fixed not only the first three bugs but even the remaining three that were published after bash43-027, including his own two discoveries.[31] This means that after the earlier distribution updates, no other updates have been required to cover all six issues.[46] All of them have also been covered for the IBM Hardware Management Console.[27]
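The tests that circulated publicly for these follow-on bugs are as follows (shell names and exact output vary between systems; these are sketches of the published proof-of-concept commands, not exploit code):

    # CVE-2014-7169: on a vulnerable system this writes the output of `date`
    # into a file named "echo" in the working directory.
    env X='() { (a)=>\' bash -c "echo date"; cat echo

    # CVE-2014-7186: nested here-documents overflow the redirection stack.
    bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' \
        || echo "CVE-2014-7186 vulnerable, redir_stack"

    # CVE-2014-7187: off-by-one in loop-nesting bookkeeping (needs brace expansion).
    (for x in {1..200}; do echo "for x$x in ; do :"; done; \
     for x in {1..200}; do echo done; done) | bash \
        || echo "CVE-2014-7187 vulnerable, word_lineno"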
https://en.wikipedia.org/wiki/Shellshock_(software_bug)
In mathematics, and particularly in functional analysis, Fichera's existence principle is an existence and uniqueness theorem for the solution of functional equations, proved by Gaetano Fichera in 1954.[1] More precisely, given a general vector space V and two linear maps from it onto two Banach spaces, the principle states necessary and sufficient conditions for a linear transformation between the two dual Banach spaces to be invertible for every vector in V.[2]
https://en.wikipedia.org/wiki/Fichera%27s_existence_principle
A uniform resource locator (URL), colloquially known as an address on the Web,[1] is a reference to a resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI),[2][3] although many people use the two terms interchangeably.[4][a] URLs occur most commonly to reference web pages (HTTP/HTTPS) but are also used for file transfer (FTP), email (mailto), database access (JDBC), and many other applications. Most web browsers display the URL of a web page above the page in an address bar. A typical URL could have the form http://www.example.com/index.html, which indicates a protocol (http), a hostname (www.example.com), and a file name (index.html). Uniform Resource Locators were defined in RFC 1738 in 1994 by Tim Berners-Lee, the inventor of the World Wide Web, and the URI working group of the Internet Engineering Task Force (IETF),[7] as an outcome of collaboration started at the IETF Living Documents birds of a feather session in 1992.[7][8] The format combines the pre-existing system of domain names (created in 1985) with file path syntax, where slashes are used to separate directory and file names. Conventions already existed where server names could be prefixed to complete file paths, preceded by a double slash (//).[9] Berners-Lee later expressed regret at the use of dots to separate the parts of the domain name within URIs, wishing he had used slashes throughout,[9] and also said that, given the colon following the first component of a URI, the two slashes before the domain name were unnecessary.[10] Early WorldWideWeb collaborators including Berners-Lee originally proposed the use of UDIs: Universal Document Identifiers. An early (1993) draft of the HTML Specification[11] referred to "Universal" Resource Locators. This was dropped some time between June 1994 (RFC 1630) and October 1994 (draft-ietf-uri-url-08.txt).[12] In his book Weaving the Web, Berners-Lee emphasizes his preference for the original inclusion of "universal" in the expansion rather than the word "uniform", to which it was later changed, and he gives a brief account of the contention that led to the change. Every HTTP URL conforms to the syntax of a generic URI. The URI generic syntax consists of five components (scheme, authority, path, query, and fragment) organized hierarchically in order of decreasing significance from left to right.[13]: §3 A component is undefined if it has an associated delimiter and the delimiter does not appear in the URI; the scheme and path components are always defined.[13]: §5.2.1 A component is empty if it has no characters; the scheme component is always non-empty.[13]: §3 The authority component itself consists of subcomponents: optional user information, a host, and an optional port. A web browser will usually dereference a URL by performing an HTTP request to the specified host, by default on port number 80. URLs using the https scheme require that requests and responses be made over a secure connection to the website. Internet users are distributed throughout the world using a wide variety of languages and alphabets, and expect to be able to create URLs in their own local alphabets. An Internationalized Resource Identifier (IRI) is a form of URL that includes Unicode characters. All modern browsers support IRIs. The parts of the URL requiring special treatment for different alphabets are the domain name and path.[18][19] The domain name in the IRI is known as an Internationalized Domain Name (IDN).
Web and Internet software automatically convert the domain name into punycode usable by the Domain Name System; for example, the Chinese URL http://例子.卷筒纸 becomes http://xn--fsqu00a.xn--3lr804guic/. The xn-- indicates that the character was not originally ASCII.[20] The URL path name can also be specified by the user in the local writing system. If not already encoded, it is converted to UTF-8, and any characters not part of the basic URL character set are escaped as hexadecimal using percent-encoding; for example, the Japanese URL http://example.com/引き割り.html becomes http://example.com/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html. The target computer decodes the address and displays the page.[18] Protocol-relative links (PRL), also known as protocol-relative URLs (PRURL), are URLs that have no protocol specified. For example, //example.com will use the protocol of the current page, typically HTTP or HTTPS.[21][22]
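How the generic-syntax components fall out of a typical URL can be sketched with Python's standard urllib.parse (the example URL is illustrative):

    from urllib.parse import urlsplit

    u = urlsplit("http://user@www.example.com:80/index.html?q=1#top")
    print(u.scheme)    # 'http'
    print(u.username)  # 'user'             (userinfo subcomponent of the authority)
    print(u.hostname)  # 'www.example.com'  (host subcomponent)
    print(u.port)      # 80                 (port subcomponent)
    print(u.path)      # '/index.html'
    print(u.query)     # 'q=1'
    print(u.fragment)  # 'top'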
https://en.wikipedia.org/wiki/URL
A temporal paradox, time paradox, or time travel paradox, is a paradox, an apparent contradiction, or logical contradiction associated with the idea of time travel or other foreknowledge of the future. While the notion of time travel to the future complies with the current understanding of physics via relativistic time dilation, temporal paradoxes arise from circumstances involving hypothetical time travel to the past – and are often used to demonstrate its impossibility. Temporal paradoxes fall into three broad groups: bootstrap paradoxes, consistency paradoxes, and Newcomb's paradox.[1] Bootstrap paradoxes violate causality by allowing future events to influence the past and cause themselves, or "bootstrapping", which derives from the idiom "pull oneself up by one's bootstraps."[2][3] Consistency paradoxes, on the other hand, are those where future events influence the past to cause an apparent contradiction, exemplified by the grandfather paradox, where a person travels to the past to prevent the conception of one of their ancestors, thus eliminating all the ancestor's descendants.[4] Newcomb's paradox arises from the apparent contradictions that follow from the assumptions of both free will and foreknowledge of future events. All of these are sometimes referred to individually as "causal loops." The term "time loop" is sometimes used to refer to a causal loop,[2] but although they appear similar, causal loops are unchanging and self-originating, whereas time loops are constantly resetting.[5] A bootstrap paradox, also known as an information loop, an information paradox,[6] an ontological paradox,[7] or a "predestination paradox", is a paradox of time travel that occurs when any event, such as an action, information, an object, or a person, ultimately causes itself, as a consequence of either retrocausality or time travel.[8][9][10][11] Backward time travel would allow information, people, or objects whose histories seem to "come from nowhere".[8] Such causally looped events then exist in spacetime, but their origin cannot be determined.[8][9] The notion of objects or information that are "self-existing" in this way is often viewed as paradoxical.[9][6][12] A notable example occurs in the 1958 science fiction short story "—All You Zombies—", by Robert A. Heinlein, wherein the main character, an intersex individual, becomes both their own mother and father; the 2014 film Predestination is based on the story. Allen Everett gives the movie Somewhere in Time as an example involving an object with no origin: an old woman gives a watch to a playwright who later travels back in time and meets the same woman when she was young, and shows her the watch that she will later give to him.[6] An example of information which "came from nowhere" is in the movie Star Trek IV: The Voyage Home, in which a 23rd-century engineer travels back in time, and gives the formula for transparent aluminum to the 20th-century engineer who supposedly invented it. Smeenk uses the term "predestination paradox" to refer specifically to situations in which a time traveler goes back in time to try to prevent some event in the past.[7] The "predestination paradox" is a concept in time travel and temporal mechanics, often explored in science fiction. It occurs when a future event is the cause of a past event, which in turn becomes the cause of the future event, forming a self-sustaining loop in time. This paradox challenges conventional understandings of cause and effect, as the events involved are both the origin and the result of each other.
A notable example is found in the TV series Doctor Who, where a character saves her father in the past, fulfilling a memory he had shared with her as a child about a strange woman having saved his life. The predestination paradox raises philosophical questions about free will, determinism, and the nature of time itself. It is commonly used as a narrative device in fiction to highlight the interconnectedness of events and the inevitability of certain outcomes. The consistency paradox or grandfather paradox occurs when the past is changed in any way that directly negates the conditions required for the time travel to occur in the first place, thus creating a contradiction. A common example given is traveling to the past and preventing the conception of one's ancestors (such as causing the death of the ancestor's parent beforehand), thus preventing the conception of oneself. If the traveler were not born, then it would not be possible to undertake such an act in the first place; therefore, the ancestor proceeds to beget the traveler's next-generation ancestor and secure the line to the traveler. There is no predicted outcome to this scenario.[8] Consistency paradoxes occur whenever changing the past is possible.[9] A possible resolution is that a time traveller can do anything that did happen, but cannot do anything that did not happen. Doing something that did not happen results in a contradiction.[8] This is referred to as the Novikov self-consistency principle. The grandfather paradox encompasses any change to the past,[13] and it is presented in many variations, including killing one's past self.[14][15] Both the "retro-suicide paradox" and the "grandfather paradox" appeared in letters written in to Amazing Stories in the 1920s.[16] Another variant of the grandfather paradox is the "Hitler paradox" or "Hitler's murder paradox", in which the protagonist travels back in time to murder Adolf Hitler before he can rise to power in Germany, thus preventing World War II and the Holocaust. Rather than necessarily physically preventing time travel, the action removes any reason for the travel, along with any knowledge that the reason ever existed.[17] Physicist John Garrison et al. give a variation of the paradox of an electronic circuit that sends a signal through a time machine to shut itself off, and receives the signal before it sends it.[18][19] Newcomb's paradox is a thought experiment showing an apparent contradiction between the expected utility principle and the strategic dominance principle.[20] The thought experiment is often extended to explore causality and free will by allowing for "perfect predictors": if perfect predictors of the future exist, for example if time travel exists as a mechanism for making perfect predictions[how?], then perfect predictions appear to contradict free will because decisions apparently made with free will are already known to the perfect predictor[clarification needed].[21][22] Predestination does not necessarily involve a supernatural power, and could be the result of other "infallible foreknowledge" mechanisms.[23] Problems arising from infallibility and influencing the future are explored in Newcomb's paradox.[24] Even without knowing whether time travel to the past is physically possible, it is possible to show using modal logic that changing the past results in a logical contradiction. If it is necessarily true that the past happened in a certain way, then it is false and impossible for the past to have occurred in any other way.
A time traveler would not be able to change the past from the way it is, but would only act in a way that is already consistent with what necessarily happened.[25][26] Consideration of the grandfather paradox has led some to the idea that time travel is by its very nature paradoxical and therefore logically impossible. For example, the philosopher Bradley Dowden made this sort of argument in the textbook Logical Reasoning, arguing that the possibility of creating a contradiction rules out time travel to the past entirely. However, some philosophers and scientists believe that time travel into the past need not be logically impossible provided that there is no possibility of changing the past,[13] as suggested, for example, by the Novikov self-consistency principle. Dowden revised his view after being convinced of this in an exchange with the philosopher Norman Swartz.[27] A recently proposed resolution argues that if time is not an inherent property of the universe but is instead emergent from the laws of entropy, as some modern theories suggest,[28][29] then it presents a natural solution to the grandfather paradox.[30] In this framework, "time travel" is reinterpreted not as movement along a linear continuum but as a reconfiguration of the present state of the universe to match a prior entropic configuration. Because the original chronological sequence—including events like the time traveler's birth—remains preserved in the universe's irreversible entropic progression, actions within the reconfigured state cannot alter the causal history that produced the traveler. This avoids paradoxes by treating time as a thermodynamic artifact rather than a mutable dimension. Consideration of the possibility of backward time travel in a hypothetical universe described by a Gödel metric led famed logician Kurt Gödel to assert that time might itself be a sort of illusion.[31][32] He suggests something along the lines of the block time view, in which time is just another dimension like space, with all events at all times being fixed within this four-dimensional "block".[citation needed] Sergey Krasnikov writes that these bootstrap paradoxes – information or an object looping through time – are the same; the primary apparent paradox is a physical system evolving into a state in a way that is not governed by its laws.[33]: 4 He does not find these paradoxical and attributes problems regarding the validity of time travel to other factors in the interpretation of general relativity.[33]: 14–16 A 1992 paper by physicists Andrei Lossev and Igor Novikov labeled such items without origin as Jinn, with the singular term Jinnee.[34]: 2311–2312 This terminology was inspired by the Jinn of the Quran, which are described as leaving no trace when they disappear.[35]: 200–203 Lossev and Novikov allowed the term "Jinn" to cover both objects and information with the reflexive origin; they called the former "Jinn of the first kind", and the latter "Jinn of the second kind".[6][34]: 2315–2317[35]: 208 They point out that an object making circular passage through time must be identical whenever it is brought back to the past, otherwise it would create an inconsistency; the second law of thermodynamics seems to require that the object tends to a lower energy state throughout its history, and such objects that are identical in repeating points in their history seem to contradict this, but Lossev and Novikov argued that since the second law only requires entropy to increase in closed systems, a Jinnee could interact with its environment in such a way as to regain
"lost" entropy.[6][35]: 200–203They emphasize that there is no "strict difference" between Jinn of the first and second kind.[34]: 2320Krasnikov equivocates between "Jinn", "self-sufficient loops", and "self-existing objects", calling them "lions" or "looping or intruding objects", and asserts that they are no less physical than conventional objects, "which, after all, also could appear only from either infinity or a singularity."[33]: 8–9 The self-consistency principle developed byIgor Dmitriyevich Novikov[36]: p. 42 note 10expresses one view as to how backwardtime travelwould be possible without the generation of paradoxes. According to this hypothesis, even thoughgeneral relativitypermits someexact solutionsthat allow fortime travel[37]that containclosed timelike curvesthat lead back to the same point in spacetime,[38]physics in or nearclosed timelike curves(time machines) can only be consistent with the universal laws of physics, and thus only self-consistent events can occur. Anything a time traveler does in the past must have been part of history all along, and the time traveler can never do anything to prevent the trip back in time from happening, since this would represent an inconsistency. The authors concluded that time travel need not lead to unresolvable paradoxes, regardless of what type of object was sent to the past.[39] PhysicistJoseph Polchinskiconsidered a potentially paradoxical situation involving abilliard ballthat is fired into awormholeat just the right angle such that it will be sent back in time and collides with its earlier self, knocking it off course, which would stop it from entering the wormhole in the first place.Kip Thornereferred to this problem as "Polchinski's paradox".[39]Thorne and two of his students at Caltech, Fernando Echeverria and Gunnar Klinkhammer, went on to find a solution that avoided any inconsistencies, and found that there was more than one self-consistent solution, with slightly different angles for the glancing blow in each case.[40]Later analysis by Thorne andRobert Forwardshowed that for certain initial trajectories of the billiard ball, there could be an infinite number of self-consistent solutions.[39]It is plausible that there exist self-consistent extensions for every possible initial trajectory, although this has not been proven.[41]: 184The lack of constraints on initial conditions only applies to spacetime outside of thechronology-violating region of spacetime; the constraints on the chronology-violating region might prove to be paradoxical, but this is not yet known.[41]: 187–188 Novikov's views are not widely accepted. 
Visser views causal loops and Novikov's self-consistency principle as an ad hoc solution, and supposes that there are far more damaging implications of time travel.[42] Krasnikov similarly finds no inherent fault in causal loops but finds other problems with time travel in general relativity.[33]: 14–16 Another conjecture, the cosmic censorship hypothesis, suggests that every closed timelike curve passes through an event horizon, which prevents such causal loops from being observed.[43] The interacting-multiple-universes approach is a variation of the many-worlds interpretation of quantum mechanics that involves time travelers arriving in a different universe than the one from which they came; it has been argued that, since travelers arrive in a different universe's history and not their history, this is not "genuine" time travel.[44] Stephen Hawking has argued for the chronology protection conjecture, that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their world rather than traveling to a different one.[45] David Deutsch has proposed that quantum computation with a negative delay—backward time travel—produces only self-consistent solutions, and the chronology-violating region imposes constraints that are not apparent through classical reasoning.[46] However, Deutsch's self-consistency condition has been demonstrated as capable of being fulfilled to arbitrary precision by any system subject to the laws of classical statistical mechanics, even if it is not built up by quantum systems.[47] Allen Everett has also argued that even if Deutsch's approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time, with different particles emerging in different worlds.[48]
https://en.wikipedia.org/wiki/Ontological_paradox
In computational complexity theory, the element distinctness problem or element uniqueness problem is the problem of determining whether all the elements of a list are distinct. It is a well studied problem in many different models of computation. The problem may be solved by sorting the list and then checking if there are any consecutive equal elements; it may also be solved in linear expected time by a randomized algorithm that inserts each item into a hash table and compares only those elements that are placed in the same hash table cell.[1] Several lower bounds in computational complexity are proved by reducing the element distinctness problem to the problem in question, i.e., by demonstrating that the solution of the element uniqueness problem may be quickly found after solving the problem in question. The number of comparisons needed to solve the problem of size n, in a comparison-based model of computation such as a decision tree or algebraic decision tree, is Θ(n log n). Here, Θ invokes big theta notation, meaning that the problem can be solved in a number of comparisons proportional to n log n (a linearithmic function) and that all solutions require this many comparisons.[2] In these models of computation, the input numbers may not be used to index the computer's memory (as in the hash table solution) but may only be accessed by computing and comparing simple algebraic functions of their values. For these models, an algorithm based on comparison sort solves the problem within a constant factor of the best possible number of comparisons. The same lower bound applies as well to the expected number of comparisons in the randomized algebraic decision tree model.[3][4] If the elements in the problem are real numbers, the decision-tree lower bound extends to the real random-access machine model with an instruction set that includes addition, subtraction and multiplication of real numbers, as well as comparison and either division or remaindering ("floor").[5] It follows that the problem's complexity in this model is also Θ(n log n). This RAM model covers more algorithms than the algebraic decision-tree model, as it encompasses algorithms that use indexing into tables. However, in this model all program steps are counted, not just decisions. A single-tape deterministic Turing machine can solve the problem, for n elements of m ≥ log n bits each, in time O(n²m(m + 2 − log n)), while on a nondeterministic machine the time complexity is O(nm(n + log m)).[6] Quantum algorithms can solve this problem faster, in Θ(n^(2/3)) queries. The optimal algorithm is by Andris Ambainis.[7] Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large.[8] Ambainis[9] and Kutin[10] independently (and via different proofs) extended his work to obtain the lower bound for all functions. Elements that occur more than n/k times in a multiset of size n may be found by a comparison-based algorithm, the Misra–Gries heavy hitters algorithm, in time O(n log k). The element distinctness problem is a special case of this problem where k = n. This time is optimal under the decision tree model of computation.[11]
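Both solution strategies mentioned at the start of the article can be sketched in a few lines of Python (illustrative, not an optimized implementation):

    # O(n log n) comparisons: sort, then check consecutive pairs.
    def distinct_by_sorting(a):
        s = sorted(a)
        return all(s[i] != s[i + 1] for i in range(len(s) - 1))

    # Expected linear time: a hash table compares only items landing in the same cell.
    def distinct_by_hashing(a):
        seen = set()
        for x in a:
            if x in seen:
                return False
            seen.add(x)
        return True

    assert distinct_by_sorting([3, 1, 2]) and not distinct_by_hashing([1, 2, 1])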
https://en.wikipedia.org/wiki/Element_uniqueness_problem
Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten.[1] The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was coined "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example speech recognition,[2] image processing[3] or pattern recognition. As a robustly converging alternative to k-means clustering it is also used for cluster analysis.[4] Suppose we want to model a probability distribution P(x) of data vectors x using a finite number of feature vectors w_i, where i = 1, ..., N. In the algorithm, ε can be understood as the learning rate, and λ as the neighborhood range. ε and λ are reduced with increasing t so that the algorithm converges after many adaptation steps. The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. By adapting not only the closest feature vector but all of them, with a step size decreasing with increasing distance order, a much more robust convergence of the algorithm can be achieved compared to (online) k-means clustering. The neural gas model does not delete a node and also does not create new nodes. Compared to the self-organizing map, the neural gas model does not assume that some vectors are neighbors. If two vectors happen to be close together, they would tend to move together, and if two vectors happen to be apart, they would tend to not move together. In contrast, in an SOM, if two vectors are neighbors in the underlying graph, then they will always tend to move together, no matter whether the two vectors happen to be neighbors in the Euclidean space. The name "neural gas" reflects the fact that one can imagine it to be what an SOM would be like if there were no underlying graph, and all points were free to move without the bonds that bind them together. A number of variants of the neural gas algorithm exist in the literature so as to mitigate some of its shortcomings. More notable is perhaps Bernd Fritzke's growing neural gas,[5] but one should also mention further elaborations such as the Growing When Required network[6] and the incremental growing neural gas.[7] A performance-oriented approach that avoids the risk of overfitting is the Plastic Neural Gas model.[8] Fritzke describes the growing neural gas (GNG) as an incremental network model that learns topological relations by using a "Hebb-like learning rule";[5] unlike the neural gas, it has no parameters that change over time, and it is capable of continuous learning, i.e. learning on data streams. GNG has been widely used in several domains,[9] demonstrating its capabilities for clustering data incrementally. The GNG is initialized with two randomly positioned nodes which are initially connected with a zero-age edge and whose errors are set to 0. Since in the GNG input data is presented sequentially one by one, a fixed sequence of steps is followed at each iteration. Another neural gas variant inspired by the GNG algorithm is the incremental growing neural gas (IGNG).
The authors propose that the main advantage of this algorithm is "learning new data (plasticity) without degrading the previously trained network and forgetting the old input data (stability)."[7] Having a network with a growing set of nodes, like the one implemented by the GNG algorithm, was seen as a great advantage; however, a limitation on learning was introduced by the parameter λ, whereby the network would only be able to grow when the iteration count was a multiple of this parameter.[6] The proposal to mitigate this problem was a new algorithm, the Growing When Required network (GWR), which would have the network grow more quickly, by adding nodes as quickly as possible whenever the network identified that the existing nodes would not describe the input well enough. The ability to only grow a network may quickly introduce overfitting; on the other hand, removing nodes on the basis of age only, as in the GNG model, does not ensure that the removed nodes are actually useless, because removal depends on a model parameter that should be carefully tuned to the "memory length" of the stream of input data. The "Plastic Neural Gas" model[8] solves this problem by making decisions to add or remove nodes using an unsupervised version of cross-validation, which controls an equivalent notion of "generalization ability" for the unsupervised setting. While growing-only methods only cater for the incremental learning scenario, the ability to grow and shrink is suited to the more general streaming data problem. To find the ranking i_0, i_1, ..., i_{N−1} of the feature vectors, the neural gas algorithm involves sorting, which is a procedure that does not lend itself easily to parallelization or implementation in analog hardware. However, implementations in both parallel software[10] and analog hardware[11] were actually designed.
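The core adaptation step, with its distance ranking and rank-dependent step size, can be sketched as follows (a minimal Python sketch; the decay schedules for ε and λ are illustrative assumptions):

    import numpy as np

    def neural_gas_step(W, x, eps, lam):
        # W: (N, d) feature vectors; x: (d,) data sample.
        # Rank all units by distance to x, then pull every unit toward x
        # with strength decaying exponentially in its rank.
        order = np.argsort(np.linalg.norm(W - x, axis=1))
        ranks = np.empty(len(W))
        ranks[order] = np.arange(len(W))          # closest unit gets rank 0
        W += eps * np.exp(-ranks / lam)[:, None] * (x - W)
        return W

    rng = np.random.default_rng(1)
    W = rng.random((10, 2))
    T = 200
    for t in range(T):                            # eps and lam shrink over time
        eps = 0.5 * (0.01 / 0.5) ** (t / T)
        lam = 5.0 * (0.1 / 5.0) ** (t / T)
        W = neural_gas_step(W, rng.random(2), eps, lam)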
https://en.wikipedia.org/wiki/Neural_gas
In mathematics, a compact semigroup is a semigroup in which the sets of solutions to equations can be described by finite sets of equations. The term "compact" here does not refer to any topology on the semigroup. Let S be a semigroup and X a finite set of letters. A system of equations is a subset E of the Cartesian product X∗ × X∗ of the free monoid (finite strings) over X with itself. The system E is satisfiable in S if there is a map f from X to S, which extends to a semigroup morphism f from X+ to S, such that for all (u, v) in E we have f(u) = f(v) in S. Such an f is a solution, or satisfying assignment, for the system E.[1] Two systems of equations are equivalent if they have the same set of satisfying assignments. A system of equations is independent if it is not equivalent to a proper subset of itself.[1] A semigroup is compact if every independent system of equations is finite.[2] The class of compact semigroups does not form an equational variety. However, a variety of monoids has the property that all its members are compact if and only if all finitely generated members satisfy the maximal condition on congruences (any family of congruences, ordered by inclusion, has a maximal element).[8]
https://en.wikipedia.org/wiki/Compact_semigroup
The term digital card[1] can refer to a physical item, such as a memory card on a camera,[2][3] or, increasingly since 2017, to digital content hosted as a virtual card or cloud card, as a digital virtual representation of a physical card. They share a common purpose: identity management, credit card, debit card or driver's license. A non-physical digital card, unlike a magnetic stripe card, can emulate (imitate) any kind of card.[4][1] A smartphone or smartwatch can store content from the card issuer; discount offers and news updates can be transmitted wirelessly, via the Internet. These virtual cards are used in very high volumes by the mass transit sector, replacing paper-based tickets and the earlier magnetic stripe cards.[5] Magnetic recording on steel tape and wire was invented by Valdemar Poulsen in Denmark around 1900 for recording audio.[6] In the 1950s, magnetic recording of digital computer data on plastic tape coated with iron oxide was invented. In 1960, IBM built upon the magnetic tape idea and developed a reliable way of securing magnetic stripes to plastic cards,[7] as part of a contract with the US government for a security system. A number of International Organization for Standardization standards, ISO/IEC 7810, ISO/IEC 7811, ISO/IEC 7812, ISO/IEC 7813, ISO 8583, and ISO/IEC 4909, now define the physical properties of such cards, including size, flexibility, location of the magstripe, magnetic characteristics, and data formats. Those standards also specify characteristics for financial cards, including the allocation of card number ranges to different card issuing institutions. As technological progress emerged in the form of highly capable and always-carried smartphones, handhelds and smartwatches, the term "digital card" was introduced.[1] On May 26, 2011, Google released its own version of a cloud-hosted Google Wallet which contains digital cards – cards that can be created online without having to have a plastic card in the first place, although all of its merchants currently issue both plastic and digital cards.[8] There are several virtual card issuing companies located in different geographical regions, such as Weel in Australia and Privacy in the USA. A magnetic stripe card is a type of card capable of storing data on magnetic material attached to a plastic card. A computer device can update the card's content. The magnetic stripe is read by swiping it past a magnetic reading head. Magnetic stripe cards are commonly used in credit cards, identity cards, and transportation tickets. They may also contain a radio frequency identification (RFID) tag, a transponder device and/or a microchip, mostly used for access control or electronic payment. Magnetic storage was known from World War II and computer data storage in the 1950s.[7] In 1969 an IBM engineer had the idea of attaching a piece of magnetic tape, the predominant storage medium at the time, to a plastic card base. He tried it, but the result was unsatisfactory. Strips of tape warped easily, and the tape's function was negatively affected by the adhesives he used to attach it to the card. After a frustrating day in the laboratory trying to find an adhesive that would hold the tape securely without affecting its function, he came home with several pieces of magnetic tape and several plastic cards. As he entered his home, his wife was ironing clothing.
When he explained the source of his frustration – inability to get the tape to "stick" to the plastic so that it would not come off, but without compromising its function – she suggested that he use the iron to melt the stripe on. He tried it and it worked.[9][10] The heat of the iron was just high enough to bond the tape to the card. Incremental improvements from 1969 through 1973 enabled developing and selling implementations of what became known as the Universal Product Code (UPC).[11][12][13] This engineering effort resulted in IBM producing the first magnetic striped plastic credit and ID cards used by banks, insurance companies, hospitals and many others.[11][14] Initial customers included banks, insurance companies and hospitals, who provided IBM with raw plastic cards preprinted with their logos, along with a list of the contact information and data which was to be encoded and embossed on the cards.[14] Manufacturing involved attaching the magnetic stripe to the preprinted plastic cards using the hot stamping process developed by IBM.[15][16] IBM's development work began in 1969, but the technology still needed refinement. The steps required to convert the magnetic striped media into an industry-acceptable device were initially managed by Jerome Svigals of the Advanced Systems Division of IBM, Los Gatos, California, from 1966 to 1975. In most magnetic stripe cards, the magnetic stripe is contained in a plastic-like film. The magnetic stripe is located 0.223 inches (5.7 mm) from the edge of the card, and is 0.375 inches (9.5 mm) wide. The magnetic stripe contains three tracks, each 0.110 inches (2.8 mm) wide. Tracks one and three are typically recorded at 210 bits per inch (8.27 bits per mm), while track two typically has a recording density of 75 bits per inch (2.95 bits per mm). Each track can either contain 7-bit alphanumeric characters, or 5-bit numeric characters. Track 1 standards were created by the airline industry (IATA). Track 2 standards were created by the banking industry (ABA). Track 3 standards were created by the thrift-savings industry. Magstripes following these specifications can typically be read by most point-of-sale hardware, which are simply general-purpose computers that have been programmed to perform the required tasks. Examples of cards adhering to these standards include ATM cards, bank cards (credit and debit cards including Visa and MasterCard), gift cards, loyalty cards, driver's licenses, telephone cards, membership cards, electronic benefit transfer cards (e.g. food stamps), and nearly any application in which monetary value or secure information is not stored on the card itself. Many video game and amusement centers now use debit card systems based on magnetic stripe cards. Magnetic stripe cloning can be detected by the implementation of magnetic card reader heads and firmware that can read a signature of magnetic noise permanently embedded in all magnetic stripes during the card production process. This signature can be used in conjunction with common two-factor authentication schemes utilized in ATM, debit/retail point-of-sale and prepaid card applications.[17] Some types of cards intentionally ignore the ISO standards regarding which kind of data is recorded in each track, and use their own data sequences instead; these include hotel key cards, most subway and bus cards, and some national prepaid calling cards (such as for the country of Cyprus) in which the balance is stored and maintained directly on the stripe and not retrieved from a remote database.
There are up to three tracks on magnetic cards, known as tracks 1, 2, and 3. Track 3 is virtually unused by the major worldwide networks[citation needed], and often is not even physically present on the card by virtue of a narrower magnetic stripe. Point-of-sale card readers almost always read track 1, or track 2, and sometimes both, in case one track is unreadable. The minimum cardholder account information needed to complete a transaction is present on both tracks. Track 1 has a higher bit density (210 bits per inch vs. 75), is the only track that may contain alphabetic text, and hence is the only track that contains the cardholder's name. Track 1 is written with a code known as DEC SIXBIT plus odd parity. The information on track 1 on financial cards is contained in several formats: A, which is reserved for proprietary use of the card issuer; B, which was developed by the banking industry (ABA) and runs from a start sentinel through the primary account number, cardholder name, expiration date, service code and discretionary data to an end sentinel and a check character (its layout is sketched in the example below); C–M, which are reserved for use by ANSI Subcommittee X3B10; and N–Z, which are available for use by individual card issuers. Track 2 is written with a 5-bit scheme (4 data bits + 1 parity), which allows for sixteen possible characters: the numbers 0–9, plus the six characters : ; < = > ?. (It may seem odd that these particular punctuation symbols were selected, but by using them the set of sixteen characters matches the ASCII range 0x30 through 0x3f.) The service code consists of three digits; the common values used in financial cards assign a defined meaning to each of the first, second, and third digits. The data stored on magnetic stripes on American and Canadian driver's licenses is specified by the American Association of Motor Vehicle Administrators. Not all states and provinces use a magnetic stripe on their driver's licenses. For a list of those that do, see the AAMVA list.[18][19] Defined sets of data elements are stored on each of tracks 1, 2, and 3.[20] Note: Each state has a different selection of information to encode; not all states are the same. Note: Some states, such as Texas,[22] have laws restricting the access and use of electronically readable information encoded on driver's licenses or identification cards under certain circumstances. Smart cards are a newer generation of card that contain an integrated circuit. Some smart cards have metal contacts to electrically connect the card to the reader; there are also contactless cards that use a magnetic field or radio frequency (RFID) for proximity reading. Hybrid smart cards include a magnetic stripe in addition to the chip—this combination is most commonly found in payment cards, to make them usable at payment terminals that do not include a smart card reader. Cards that contain all three features (magnetic stripe, smart card chip, and RFID chip) are also becoming common as more activities require the use of such cards.[citation needed] During DEF CON 24, Weston Hecker presented "Hacking Hotel Keys, and Point Of Sales Systems". In the talk, Hecker described the way magnetic stripe cards function and utilised spoofing software,[23] and an Arduino, to obtain administrative access from hotel keys, via service staff walking past him. Hecker claims he used administrative keys from POS systems on other systems, effectively providing access to any system with a magnetic stripe reader, providing access to run privileged commands.[citation needed] Identification with a digital card is usually done in several ways.
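Referring back to the format B layout described above, a track 1 record can be decomposed as follows (a Python sketch; the sample string is synthetic and the field widths are simplified):

    import re

    # Format B, track 1: '%' start sentinel, 'B' format code, primary account
    # number, '^' separators, name, expiry (YYMM), service code, discretionary
    # data, '?' end sentinel (the trailing check character is omitted here).
    TRACK1_B = re.compile(
        r"^%B(?P<pan>\d{1,19})\^(?P<name>[^^]{2,26})\^"
        r"(?P<yymm>\d{4})(?P<service>\d{3})(?P<discretionary>[^?]*)\?$"
    )

    sample = "%B4444333322221111^DOE/JANE^29011011234567890?"   # synthetic data
    m = TRACK1_B.match(sample)
    print(m.group("pan"), m.group("name"), m.group("yymm"), m.group("service"))
    # 4444333322221111 DOE/JANE 2901 101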
https://en.wikipedia.org/wiki/Magnetic_Stripe
LTE Advanced, also named or recognized as LTE+, LTE-A or 4G+, is a 4G mobile cellular communication standard developed by 3GPP as a major enhancement of the Long Term Evolution (LTE) standard. Three technologies from the LTE-Advanced tool-kit – carrier aggregation, 4x4 MIMO and 256-QAM modulation in the downlink – if used together and with sufficient aggregated bandwidth, can deliver maximum peak downlink speeds approaching, or even exceeding, 1 Gbit/s. This is significantly more than the peak 300 Mbit/s rate offered by the preceding LTE standard.[1] Later developments have resulted in LTE Advanced Pro (or 4.9G), which increases bandwidth even further.[2] The first ever LTE Advanced network was deployed in 2013 by SK Telecom in South Korea.[3] In August 2019, the Global mobile Suppliers Association (GSA) reported that there were 304 commercially launched LTE-Advanced networks in 134 countries. Overall, 335 operators are investing in LTE-Advanced (in the form of tests, trials, deployments or commercial service provision) in 141 countries.[4] LTE Advanced is also named (indicated as) LTE+, LTE-A,[5] or (on Samsung Galaxy and Xiaomi smartphones) as 4G+. Such networks have also often been described as 'Gigabit LTE networks', mirroring a term that is also used in the fixed broadband industry.[6] The mobile communication industry and standards organizations have therefore started work on 4G access technologies, such as LTE Advanced.[when?] At a workshop in April 2008 in China, 3GPP agreed on the plans for work on Long Term Evolution (LTE).[7] A first set of specifications was approved in June 2008.[8] Besides the peak data rate of 1 Gb/s as defined by the ITU-R, it also targets faster switching between power states and improved performance at the cell edge. Detailed proposals are being studied within the working groups.[when?] The LTE+ format was first proposed by NTT DoCoMo of Japan and has been adopted as the international standard.[9] It was formally submitted as a 4G candidate to the ITU in late 2009 as meeting the requirements of the IMT-Advanced standard, and was standardized by the 3rd Generation Partnership Project (3GPP) in March 2011 as 3GPP Release 10.[10] The work by 3GPP to define a 4G candidate radio interface technology started in Release 9 with the study phase for LTE-Advanced. Being described as 3.9G (beyond 3G but pre-4G), the first release of LTE did not meet the requirements for 4G (also called IMT-Advanced, as defined by the International Telecommunication Union), such as peak data rates up to 1 Gb/s. The ITU has invited the submission of candidate Radio Interface Technologies (RITs) following their requirements as stated in a circular letter; the corresponding 3GPP requirements are set out in Technical Report (TR) 36.913, "Requirements for Further Advancements for E-UTRA (LTE-Advanced)",[11] which is based on ITU's requirements for 4G and on operators' own requirements for advanced LTE. A number of major technical considerations were identified. Likewise, 'WiMAX 2', 802.16m, has been approved by ITU as part of the IMT-Advanced family. WiMAX 2 is designed to be backward compatible with WiMAX 1 devices. Most vendors now support conversion of 'pre-4G', pre-advanced versions, and some support software upgrades of base station equipment from 3G. The target of 3GPP LTE Advanced is to reach and surpass the ITU requirements. LTE Advanced should be compatible with first-release LTE equipment, and should share frequency bands with first-release LTE. In the feasibility study for LTE Advanced, 3GPP determined that LTE Advanced would meet the ITU-R requirements for 4G.
The results of the study are published in 3GPP Technical Report (TR) 36.912.[12] One of the important LTE Advanced benefits is the ability to take advantage of advanced topology networks: optimized heterogeneous networks with a mix of macrocells and low-power nodes such as picocells, femtocells and new relay nodes. The next significant performance leap in wireless networks will come from making the most of topology, bringing the network closer to the user by adding many of these low-power nodes – LTE Advanced further improves capacity and coverage, and ensures user fairness. LTE Advanced also introduces multicarrier operation to be able to use ultra-wide bandwidth, up to 100 MHz of spectrum, supporting very high data rates. In the research phase many proposals were studied as candidates for LTE Advanced (LTE-A) technologies; the proposals could roughly be categorized into several groups.[13] Within the range of system development, LTE-Advanced and WiMAX 2 can use up to 8x8 MIMO and 128-QAM in the downlink direction. Example performance: with 100 MHz aggregated bandwidth, LTE-Advanced provides almost 3.3 Gbit/s peak download rates per sector of the base station under ideal conditions. Advanced network architectures combined with distributed and collaborative smart antenna technologies provide a road map of commercial enhancements spanning several years. The 3GPP standards Release 12 added support for 256-QAM. A summary of a study carried out in 3GPP can be found in TR 36.912.[14] Original standardization work for LTE-Advanced was done as part of 3GPP Release 10, which was frozen in April 2011. Trials were based on pre-release equipment. Major vendors support software upgrades to later versions and ongoing improvements. In order to improve the quality of service for users in hotspots and at cell edges, heterogeneous networks (HetNets) are formed from a mixture of macro-, pico- and femto-base stations serving correspondingly sized areas. Frozen in December 2012, 3GPP Release 11[15] concentrates on better support of HetNets. Coordinated Multi-Point operation (CoMP) is a key feature of Release 11 in order to support such network structures. Whereas users located at a cell edge in homogeneous networks suffer from decreasing signal strength compounded by neighbor-cell interference, CoMP is designed to enable a neighboring cell to also transmit the same signal as the serving cell, enhancing quality of service on the perimeter of the serving cell. In-device Co-existence (IDC) is another topic addressed in Release 11. IDC features are designed to ameliorate disturbances within the user equipment caused between LTE/LTE-A and the various other radio subsystems such as WiFi, Bluetooth, and the GPS receiver. Further enhancements for MIMO, such as the 4x4 configuration for the uplink, were standardized. The higher number of cells in a HetNet results in user equipment changing the serving cell more frequently when in motion. The ongoing work on LTE-Advanced[16] in Release 12, amongst other areas, concentrates on addressing issues that arise when users move through a HetNet, such as frequent hand-overs between cells. It also included use of 256-QAM. This list covers technology demonstrations and field trials up to the year 2014, paving the way for a wider commercial deployment of the VoLTE technology worldwide. From 2014 onwards various further operators trialled and demonstrated the technology for future deployment on their respective networks. These are not covered here. Instead, a coverage of commercial deployments can be found in the section below.
LTE Advanced Pro (LTE-A Pro, also known as 4.5G, 4.5G Pro, 4.9G, Pre-5G or 5G Project)[45][46][47][48] is a name for 3GPP Releases 13 and 14.[49][50] It is an evolution of the LTE Advanced (LTE-A) cellular standard supporting data rates in excess of 3 Gbit/s using 32-carrier aggregation.[2] It also introduces the concept of License Assisted Access, which allows sharing of licensed and unlicensed spectrum. Additionally, it incorporates several new technologies associated with 5G, such as 256-QAM, Massive MIMO, LTE-Unlicensed and LTE IoT,[51][52] which facilitated early migration of existing networks towards the enhancements promised by the full 5G standard.[53]

Telstra in Australia deployed the first LTE Advanced Pro network in January 2017.[54]
https://en.wikipedia.org/wiki/LTE_Advanced
In multiprocessor computer systems, software lockout is the issue of performance degradation due to the idle wait times spent by the CPUs in kernel-level critical sections. Software lockout is the major cause of scalability degradation in a multiprocessor system, posing a limit on the maximum useful number of processors. To mitigate the phenomenon, the kernel must be designed to have its critical sections as short as possible, by decomposing each data structure into smaller substructures.

In most multiprocessor systems, each processor schedules and controls itself; therefore there is no "supervisor" processor,[1] and kernel data structures are globally shared. Sections of code that access those shared data structures are critical sections. This design choice is made to improve scaling, reliability and modularity.[1] Examples of such kernel data structures are the ready list and communication channels.

A "conflict" happens when more than one processor is trying to access the same resource (a memory portion) at the same time. To prevent critical races and inconsistency, only one processor (CPU) at a given time is allowed to access a particular data structure (a memory portion), while other CPUs trying to access it at the same time are locked out, waiting in an idle state.[1][2]

Three cases can be distinguished, in which this idle wait is either necessary, convenient, or not convenient. The idle wait is necessary when the access is to a ready list for a low-level scheduling operation. The idle wait is not necessary but convenient in the case of a critical section for synchronization/IPC operations, which require less time than a context switch (executing another process to avoid the idle wait). The idle wait is instead not convenient in the case of a kernel critical section for device management, which is present in monolithic kernels only. A microkernel instead falls under just the first two of the above cases.

In a multiprocessor system, most of the conflicts are kernel-level conflicts, due to access to kernel-level critical sections, and thus the idle wait periods generated by them have a major impact on performance degradation. This idle wait time increases the average number of idle processors and thus decreases scalability and relative efficiency.

Taking as parameters the average time interval spent by a processor in kernel-level critical sections (L, for time in locked state) and the average time interval spent by a processor in tasks outside critical sections (E),[1] the ratio L/E is crucial in evaluating software lockout. Typical values for L/E range from 0.01 to 0.1.[3] In a system with an L/E ratio of 0.05, for instance, if there are 15 CPUs, it is expected that on average 1 CPU will always be idle;[3] with 21 CPUs, 2.8 will be idle;[4] with 40 CPUs, 19 will be idle; with 41 CPUs, 20 will be idle.[3] Therefore, adding more than 40 CPUs to that system would be useless. In general, for each L/E value there is a threshold for the maximum number of useful CPUs.

To reduce the performance degradation of software lockout to reasonable levels (L/E between 0.05 and 0.1), the kernel and/or the operating system must be designed accordingly. Conceptually, the most valid solution is to decompose each kernel data structure into smaller independent substructures, each having a shorter elaboration time. This allows more than one CPU to access the original data structure, as the sketch below illustrates.
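The decomposition idea can be sketched in a few lines of Python (an illustration of the structure only: the class names are made up for the example, and CPython's global interpreter lock means this particular code will not exhibit a real multiprocessor speedup):

```python
import threading

# Coarse-grained: one lock guards the whole table; two CPUs touching
# different keys still serialize on the same critical section.
class CoarseTable:
    def __init__(self):
        self.lock = threading.Lock()
        self.data = {}
    def put(self, key, value):
        with self.lock:              # whole structure is one critical section
            self.data[key] = value

# Fine-grained ("lock striping"): the table is decomposed into
# independently locked buckets, so each critical section covers a
# much smaller substructure and conflicts become rarer.
class StripedTable:
    def __init__(self, stripes=16):
        self.locks = [threading.Lock() for _ in range(stripes)]
        self.buckets = [{} for _ in range(stripes)]
    def put(self, key, value):
        i = hash(key) % len(self.locks)
        with self.locks[i]:          # only one bucket is locked
            self.buckets[i][key] = value
```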
Many uniprocessor systems with hierarchical protection domains have been estimated to spend up to 50% of the time performing "supervisor mode" operations. If such systems were adapted for multiprocessing by setting a lock on any access to the "supervisor state", L/E would easily be greater than 1,[3] resulting in a system with the same throughput as the uniprocessor, regardless of the number of CPUs. The simulation sketched below illustrates the effect.
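To make the L/E discussion concrete, here is a small, self-contained event-driven simulation (an illustrative sketch, not taken from the literature): n CPUs alternate between useful work of mean duration E and kernel critical sections of mean duration L, all guarded by a single global lock. The idle-CPU figures quoted in the text come from an analytic queueing model, so the simulated averages will only follow the same qualitative pattern.

```python
import heapq
import random
from itertools import count

def avg_idle_cpus(n_cpus, l_over_e, e=1.0, t_end=10_000.0, seed=1):
    """Average number of CPUs left idling on a single global kernel lock."""
    l = l_over_e * e
    rng = random.Random(seed)
    seq = count()                     # tie-breaker so the heap never compares kinds
    events = []                       # min-heap of (time, seq, cpu, kind)
    for cpu in range(n_cpus):
        heapq.heappush(events, (rng.expovariate(1.0 / e), next(seq), cpu, "want"))
    lock_held = False
    waiting = []                      # FIFO of CPUs blocked (idle) on the lock
    idle_area, now = 0.0, 0.0         # time-integral of the idle-CPU count
    while now < t_end:
        t, _, cpu, kind = heapq.heappop(events)
        idle_area += len(waiting) * (t - now)
        now = t
        if kind == "want":            # CPU finished useful work, enters the kernel
            if lock_held:
                waiting.append(cpu)   # software lockout: the CPU sits idle
            else:
                lock_held = True
                heapq.heappush(events, (now + rng.expovariate(1.0 / l), next(seq), cpu, "release"))
        else:                         # CPU leaves its critical section
            if waiting:               # hand the lock straight to the next waiter
                heapq.heappush(events, (now + rng.expovariate(1.0 / l), next(seq), waiting.pop(0), "release"))
            else:
                lock_held = False
            # the releasing CPU resumes useful work outside the kernel
            heapq.heappush(events, (now + rng.expovariate(1.0 / e), next(seq), cpu, "want"))
    return idle_area / now

for n in (15, 21, 40, 41):
    print(n, round(avg_idle_cpus(n, 0.05), 1))
```

With L/E = 0.05 the simulated idle count rises sharply past a few dozen CPUs, illustrating why adding processors beyond the threshold yields essentially no extra throughput.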
https://en.wikipedia.org/wiki/Software_lockout
"Paper terrorism" is aneologismreferring to the use offalse liens,frivolous lawsuits, bogusletters of credit, and other legal orpseudolegaldocuments lacking sound factual basis as a method of harassment against an opponent on a scale described as evocative of conventionalarmed terrorism.[1]These methods are popular among some Americananti-governmentgroups[2]and those associated with theredemption movement.[3] Mark Pitcavageof theAnti-Defamation Leaguestates that these methods were pioneered by thePosse Comitatus.[4]Some victims of paper terrorism have been forced to declarebankruptcy.[5]An article by theSouthern Poverty Law Centerstates that another tactic is filing reports with theInternal Revenue Servicefalsely accusing their political enemies of having unreported income.[6] Such frivolous lawsuits also clog the court system making it more difficult to process other cases and including using challenges to the titles of property owned by government officials and others.[7]Another method of paper terrorism is filingbankruptcy petitionsagainst others in an effort to ruin theircredit ratings.[8] In the late 1990s,[9]the "Republic of Texas", a militia group claiming thatTexaswas legally independent, carried out what it called "a campaign of paper terrorism" using bogus land claims and bad checks to try to congest Texas courts.[10] This article aboutpoliticsis astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Paper_terrorism
In statistics, sometimes the covariance matrix of a multivariate random variable is not known but has to be estimated. Estimation of covariance matrices then deals with the question of how to approximate the actual covariance matrix on the basis of a sample from the multivariate distribution. Simple cases, where observations are complete, can be dealt with by using the sample covariance matrix. The sample covariance matrix (SCM) is an unbiased and efficient estimator of the covariance matrix if the space of covariance matrices is viewed as an extrinsic convex cone in $\mathbb{R}^{p\times p}$; however, measured using the intrinsic geometry of positive-definite matrices, the SCM is a biased and inefficient estimator.[1] In addition, if the random variable has a normal distribution, the sample covariance matrix has a Wishart distribution, and a slightly differently scaled version of it is the maximum likelihood estimate. Cases involving missing data, heteroscedasticity, or autocorrelated residuals require deeper considerations. Another issue is robustness to outliers, to which sample covariance matrices are highly sensitive.[2][3][4]

Statistical analyses of multivariate data often involve exploratory studies of the way in which the variables change in relation to one another, and this may be followed up by explicit statistical models involving the covariance matrix of the variables. Thus the estimation of covariance matrices directly from observational data plays two roles: Estimates of covariance matrices are required at the initial stages of principal component analysis and factor analysis, and are also involved in versions of regression analysis that treat the dependent variables in a data set, jointly with the independent variable, as the outcome of a random sample.

Given a sample consisting of $n$ independent observations $x_1,\ldots,x_n$ of a $p$-dimensional random vector $X \in \mathbb{R}^{p\times 1}$ (a $p\times 1$ column vector), an unbiased estimator of the $p\times p$ covariance matrix

$$\Sigma = \operatorname{E}\left[(X-\operatorname{E}[X])(X-\operatorname{E}[X])^{\mathrm T}\right]$$

is the sample covariance matrix

$$Q = \frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^{\mathrm T},$$

where $x_i$ is the $i$-th observation of the $p$-dimensional random vector, and the vector

$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$$

is the sample mean. This is true regardless of the distribution of the random variable $X$, provided of course that the theoretical means and covariances exist. The reason for the factor $n-1$ rather than $n$ is essentially the same as the reason for the same factor appearing in unbiased estimates of sample variances and sample covariances, which relates to the fact that the mean is not known and is replaced by the sample mean (see Bessel's correction).

In cases where the distribution of the random variable $X$ is known to be within a certain family of distributions, other estimates may be derived on the basis of that assumption. A well-known instance is when the random variable $X$ is normally distributed: in this case the maximum likelihood estimator of the covariance matrix is slightly different from the unbiased estimate, and is given by

$$\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^{\mathrm T}.$$

A derivation of this result is given below. Clearly, the difference between the unbiased estimator and the maximum likelihood estimator diminishes for large $n$; the two are compared numerically in the sketch below.
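The following minimal NumPy sketch (the specific covariance matrix is an arbitrary illustration) computes both estimators and confirms that they differ only in the $n-1$ versus $n$ scaling, matching np.cov's default (unbiased) convention:

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample n observations of a p-dimensional Gaussian with a known covariance.
p, n = 3, 500
true_cov = np.array([[2.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 0.5]])
x = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

xbar = x.mean(axis=0)                        # sample mean
centered = x - xbar
unbiased = centered.T @ centered / (n - 1)   # divides by n-1 (Bessel's correction)
mle = centered.T @ centered / n              # maximum likelihood estimate

# np.cov uses the unbiased n-1 convention by default (rowvar=False: rows = observations).
assert np.allclose(unbiased, np.cov(x, rowvar=False))
print(np.abs(unbiased - mle).max())          # the gap shrinks as n grows
```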
In the general case, the unbiased estimate of the covariance matrix provides an acceptable estimate when the data vectors in the observed data set are all complete: that is, they contain no missing elements. One approach to estimating the covariance matrix is to treat the estimation of each variance or pairwise covariance separately, and to use all the observations for which both variables have valid values. Assuming the missing data are missing at random, this results in an estimate for the covariance matrix which is unbiased. However, for many applications this estimate may not be acceptable, because the estimated covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values greater than one, and/or a non-invertible covariance matrix.

When estimating the cross-covariance of a pair of signals that are wide-sense stationary, missing samples do not need to be random (e.g., sub-sampling by an arbitrary factor is valid).[citation needed]

A random vector $X \in \mathbb{R}^p$ (a $p\times 1$ "column vector") has a multivariate normal distribution with a nonsingular covariance matrix $\Sigma$ precisely if $\Sigma \in \mathbb{R}^{p\times p}$ is a positive-definite matrix and the probability density function of $X$ is

$$f(x) = (2\pi)^{-p/2}\det(\Sigma)^{-1/2}\exp\left(-\tfrac{1}{2}(x-\mu)^{\mathrm T}\Sigma^{-1}(x-\mu)\right),$$

where $\mu \in \mathbb{R}^{p\times 1}$ is the expected value of $X$. The covariance matrix $\Sigma$ is the multidimensional analog of what in one dimension would be the variance, and normalizes the density $f(x)$ so that it integrates to 1.

Suppose now that $X_1,\ldots,X_n$ are independent and identically distributed samples from the distribution above. Based on the observed values $x_1,\ldots,x_n$ of this sample, we wish to estimate $\Sigma$. The likelihood function is

$$\mathcal{L}(\mu,\Sigma) = (2\pi)^{-np/2}\det(\Sigma)^{-n/2}\exp\left(-\tfrac{1}{2}\sum_{i=1}^n (x_i-\mu)^{\mathrm T}\Sigma^{-1}(x_i-\mu)\right).$$

It is fairly readily shown that the maximum-likelihood estimate of the mean vector $\mu$ is the "sample mean" vector

$$\bar{x} = \frac{x_1+\cdots+x_n}{n}.$$

See the section on estimation in the article on the normal distribution for details; the process here is similar. Since the estimate $\bar{x}$ does not depend on $\Sigma$, we can just substitute it for $\mu$ in the likelihood function, getting

$$\mathcal{L}(\bar{x},\Sigma) \propto \det(\Sigma)^{-n/2}\exp\left(-\tfrac{1}{2}\sum_{i=1}^n (x_i-\bar{x})^{\mathrm T}\Sigma^{-1}(x_i-\bar{x})\right),$$

and then seek the value of $\Sigma$ that maximizes the likelihood of the data (in practice it is easier to work with $\log \mathcal{L}$).

Now we come to the first surprising step: regard the scalar $(x_i-\bar{x})^{\mathrm T}\Sigma^{-1}(x_i-\bar{x})$ as the trace of a 1×1 matrix. This makes it possible to use the identity $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ whenever $A$ and $B$ are matrices so shaped that both products exist. We get

$$\mathcal{L}(\bar{x},\Sigma) \propto \det(\Sigma)^{-n/2}\exp\left(-\tfrac{1}{2}\operatorname{tr}\left(\Sigma^{-1}S\right)\right),$$

where

$$S = \sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^{\mathrm T}$$

is sometimes called the scatter matrix, and is positive definite if there exists a subset of the data consisting of $p$ affinely independent observations (which we will assume).

It follows from the spectral theorem of linear algebra that a positive-definite symmetric matrix $S$ has a unique positive-definite symmetric square root $S^{1/2}$. We can again use the "cyclic property" of the trace to write

$$\mathcal{L}(\bar{x},\Sigma) \propto \det(\Sigma)^{-n/2}\exp\left(-\tfrac{1}{2}\operatorname{tr}\left(S^{1/2}\Sigma^{-1}S^{1/2}\right)\right).$$

Let $B = S^{1/2}\Sigma^{-1}S^{1/2}$. Then the expression above becomes

$$\det(S)^{-n/2}\det(B)^{n/2}\exp\left(-\tfrac{1}{2}\operatorname{tr}(B)\right).$$

The positive-definite matrix $B$ can be diagonalized, and since the trace of a square matrix equals the sum of its eigenvalues ("trace and eigenvalues"), the problem of finding the value of $B$ that maximizes

$$\det(B)^{n/2}\exp\left(-\tfrac{1}{2}\operatorname{tr}(B)\right)$$

reduces to the problem of finding the eigenvalues $\lambda_1,\ldots,\lambda_p$ that maximize

$$\prod_{i=1}^p \lambda_i^{n/2} e^{-\lambda_i/2}.$$

This is just a calculus problem and we get $\lambda_i = n$ for all $i$. Thus, if $Q$ is the matrix of eigenvectors, then

$$B = Q\,(nI_p)\,Q^{-1} = nI_p,$$

i.e., $n$ times the $p\times p$ identity matrix. Finally we get

$$\hat{\Sigma} = S^{1/2}B^{-1}S^{1/2} = \frac{1}{n}S,$$

i.e., the $p\times p$ "sample covariance matrix"

$$\frac{S}{n} = \frac{1}{n}\sum_{i=1}^n (X_i-\bar{X})(X_i-\bar{X})^{\mathrm T}$$

is the maximum-likelihood estimator of the "population covariance matrix" $\Sigma$. At this point we are using a capital $X$ rather than a lower-case $x$ because we are thinking of it "as an estimator rather than as an estimate", i.e., as something random whose probability distribution we could profit by knowing. The random matrix $S$ can be shown to have a Wishart distribution with $n-1$ degrees of freedom;[5] that is,

$$S \sim W_p(\Sigma,\, n-1).$$
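The conclusion of this derivation is easy to check numerically. The following sketch (illustrative only) confirms that $S/n$ yields a higher Gaussian log-likelihood than both the unbiased estimate $S/(n-1)$ and a perturbed alternative:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
p, n = 3, 200
x = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

xbar = x.mean(axis=0)
S = (x - xbar).T @ (x - xbar)        # scatter matrix
sigma_mle = S / n                    # the maximum likelihood estimate derived above

def loglik(sigma):
    # total Gaussian log-likelihood of the sample with mean fixed at xbar
    return multivariate_normal(mean=xbar, cov=sigma).logpdf(x).sum()

# the MLE beats both the unbiased estimate and a perturbed alternative
print(loglik(sigma_mle) >= loglik(S / (n - 1)))                   # True
print(loglik(sigma_mle) >= loglik(sigma_mle + 0.1 * np.eye(p)))   # True
```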
An alternative derivation of the maximum likelihood estimator can be performed via matrix calculus formulae (see also differential of a determinant and differential of the inverse matrix). It also verifies the aforementioned fact about the maximum likelihood estimate of the mean. Re-writing the likelihood in log form using the trace trick,

$$\ln \mathcal{L}(\mu,\Sigma) = \operatorname{const} - \frac{n}{2}\ln\det(\Sigma) - \frac{1}{2}\operatorname{tr}\left[\Sigma^{-1}\sum_{i=1}^n (x_i-\mu)(x_i-\mu)^{\mathrm T}\right],$$

the differential of this log-likelihood is

$$d\ln\mathcal{L}(\mu,\Sigma) = -\frac{n}{2}\operatorname{tr}\left[\Sigma^{-1}\,d\Sigma\right] + \frac{1}{2}\operatorname{tr}\left[\Sigma^{-1}\,d\Sigma\,\Sigma^{-1}\sum_{i=1}^n (x_i-\mu)(x_i-\mu)^{\mathrm T}\right] + \sum_{i=1}^n (x_i-\mu)^{\mathrm T}\Sigma^{-1}\,d\mu.$$

It naturally breaks down into the part related to the estimation of the mean and the part related to the estimation of the variance. The first-order condition for a maximum, $d\ln\mathcal{L}(\mu,\Sigma) = 0$, is satisfied when the terms multiplying $d\mu$ and $d\Sigma$ are identically zero. Assuming (the maximum likelihood estimate of) $\Sigma$ is non-singular, the first-order condition for the estimate of the mean vector is

$$\sum_{i=1}^n (x_i-\mu) = 0,$$

which leads to the maximum likelihood estimator

$$\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i.$$

This lets us simplify $\sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^{\mathrm T} = S$, as defined above. Then the terms involving $d\Sigma$ in $d\ln\mathcal{L}$ can be combined as

$$d\ln\mathcal{L}(\bar{x},\Sigma) = -\frac{1}{2}\operatorname{tr}\left[\Sigma^{-1}\,d\Sigma\left(nI_p - \Sigma^{-1}S\right)\right].$$

The first-order condition $d\ln\mathcal{L}(\mu,\Sigma) = 0$ will hold when the term in the round bracket is (matrix-valued) zero. Pre-multiplying the latter by $\Sigma$ and dividing by $n$ gives

$$\hat{\Sigma} = \frac{1}{n}S,$$

which of course coincides with the canonical derivation given earlier. Dwyer[6] points out that decomposition into two terms such as appears above is "unnecessary" and derives the estimator in two lines of working. Note that it may not be trivial to show that such a derived estimator is the unique global maximizer of the likelihood function.

Given a sample of $n$ independent observations $x_1,\ldots,x_n$ of a $p$-dimensional zero-mean Gaussian random variable $X$ with covariance $R$, the maximum likelihood estimator of $R$ is given by

$$\hat{\mathbf{R}} = \frac{1}{n}\sum_{i=1}^n x_i x_i^{\mathrm T}.$$

The parameter $R$ belongs to the set of positive-definite matrices, which is a Riemannian manifold, not a vector space; hence the usual vector-space notions of expectation, i.e. "$\mathrm{E}[\hat{\mathbf{R}}]$", and estimator bias must be generalized to manifolds to make sense of the problem of covariance matrix estimation. This can be done by defining the expectation of a manifold-valued estimator $\hat{\mathbf{R}}$ with respect to the manifold-valued point $\mathbf{R}$ as

$$\mathrm{E}_{\mathbf{R}}[\hat{\mathbf{R}}] = \exp_{\mathbf{R}}\left(\mathrm{E}\left[\exp_{\mathbf{R}}^{-1}\hat{\mathbf{R}}\right]\right),$$

where

$$\exp_{\mathbf{R}}(\hat{\mathbf{R}}) = \mathbf{R}^{1/2}\exp\left(\mathbf{R}^{-1/2}\hat{\mathbf{R}}\,\mathbf{R}^{-1/2}\right)\mathbf{R}^{1/2} \qquad\text{and}\qquad \exp_{\mathbf{R}}^{-1}(\hat{\mathbf{R}}) = \mathbf{R}^{1/2}\log\left(\mathbf{R}^{-1/2}\hat{\mathbf{R}}\,\mathbf{R}^{-1/2}\right)\mathbf{R}^{1/2}$$

are the exponential map and inverse exponential map, respectively; "exp" and "log" denote the ordinary matrix exponential and matrix logarithm, and E[·] is the ordinary expectation operator defined on a vector space, in this case the tangent space of the manifold.[1]

The intrinsic bias vector field of the SCM estimator $\hat{\mathbf{R}}$ is defined to be

$$\mathbf{B}(\hat{\mathbf{R}}) = \mathrm{E}\left[\exp_{\mathbf{R}}^{-1}\hat{\mathbf{R}}\right],$$

and the intrinsic estimator bias is then given by $\exp_{\mathbf{R}}\mathbf{B}(\hat{\mathbf{R}})$. For complex Gaussian random variables, this bias vector field can be shown[1] to be a scalar multiple of $\mathbf{R}$, with the scalar expressed in terms of ψ(·), the digamma function; the SCM is asymptotically unbiased as $n \to \infty$. Similarly, the intrinsic inefficiency of the sample covariance matrix depends upon the Riemannian curvature of the space of positive-definite matrices.

If the sample size $n$ is small and the number of considered variables $p$ is large, the above empirical estimators of covariance and correlation are very unstable. Specifically, it is possible to furnish estimators that improve considerably upon the maximum likelihood estimate in terms of mean squared error.
Moreover, for $n < p$ (the number of observations less than the number of random variables) the empirical estimate of the covariance matrix becomes singular, i.e. it cannot be inverted to compute the precision matrix.

As an alternative, many methods have been suggested to improve the estimation of the covariance matrix. All of these approaches rely on the concept of shrinkage. This is implicit in Bayesian methods and in penalized maximum likelihood methods, and explicit in the Stein-type shrinkage approach.

A simple version of a shrinkage estimator of the covariance matrix is represented by the Ledoit-Wolf shrinkage estimator.[7][8][9][10] One considers a convex combination of the empirical estimator ($A$) with some suitably chosen target ($B$), e.g., the diagonal matrix. Subsequently, the mixing parameter ($\delta$) is selected to maximize the expected accuracy of the shrunken estimator. This can be done by cross-validation, or by using an analytic estimate of the shrinkage intensity. The resulting regularized estimator ($\delta A + (1-\delta)B$) can be shown to outperform the maximum likelihood estimator for small samples. For large samples, the shrinkage intensity reduces to zero, so that in this case the shrinkage estimator becomes identical to the empirical estimator. Apart from increased efficiency, the shrinkage estimate has the additional advantage that it is always positive definite and well conditioned. Various shrinkage targets have been proposed:

The shrinkage estimator can be generalized to a multi-target shrinkage estimator that utilizes several targets simultaneously.[11] The Ledoit-Wolf shrinkage has been applied in many fields.[12] It is particularly useful for computing partial correlations from high-dimensional data ($n < p$).[13]

Software for computing a covariance shrinkage estimator is available in R (packages corpcor[14] and ShrinkCovMat[15]), in Python (the scikit-learn library[1]), and in MATLAB.[16]
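As a concrete illustration of both the $n < p$ singularity problem and the shrinkage remedy, here is a minimal sketch using the scikit-learn implementation mentioned above (note that scikit-learn parameterizes the convex combination by the weight placed on the scaled-identity target):

```python
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

rng = np.random.default_rng(0)
p, n = 40, 25                        # n < p: the empirical covariance is singular
x = rng.standard_normal((n, p))

emp = empirical_covariance(x)
print(np.linalg.matrix_rank(emp))    # at most n-1 < p, so not invertible

lw = LedoitWolf().fit(x)
shrunk = lw.covariance_              # convex combination of empirical estimate and target
print(lw.shrinkage_)                 # analytically estimated mixing weight
print(np.all(np.linalg.eigvalsh(shrunk) > 0))   # positive definite, hence invertible
```

The shrunken matrix is full-rank and positive definite even though the empirical estimate is rank-deficient, so the precision matrix can be computed from it.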
https://en.wikipedia.org/wiki/Estimation_of_covariance_matrices