40-track mode is a steganographic technique that allows for hidden data on a 3.5 inch floppy diskette. A 3.5 inch 1.44MB diskette contains 80 tracks, 18 sectors per track, and 512 bytes per sector. A 3.5 inch 720KB diskette contains 80 tracks, 9 sectors per track, and 512 bytes per sector. The technique consists of formatting an 80-track 1.44MB diskette as a special 40-track 720KB diskette. In doing so one may create, in effect, two 40-track partitions: the former visible and usable as on a normal diskette, the latter hidden. One may then fill the unallocated (hidden) 40 tracks with up to 720KB of secret or encrypted data that is not superficially visible to a user. Writing a 1.44MB floppy in 40-track mode causes the allocated tracks to be written to the even-numbered physical tracks, so a drive attempting to read the diskette as a regular 1.44MB disk is confused by the strange data on the tracks in between the even-numbered ones. The hidden data resides on the odd-numbered tracks. Device drivers generally copy only allocated data, so a conventional copy of such a disk would not contain the hidden data. Equivalents of this technique can easily be applied to almost any medium. This technique is distinct from the "40th track" copy-protection schemes used during the 80s and 90s. The KGB and senior FBI agent Robert Hanssen used this technique to communicate with one another between 1985 and 2001.[1]
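The track-interleaving idea can be illustrated with a short, hypothetical Python model. The geometry constants follow the 720KB figures above; the in-memory disk image, function names, and payloads are illustrative assumptions, not a real floppy driver.

```python
# Illustrative model of 40-track-mode steganography on an 80-track diskette.
# Geometry follows the article; names and layout are hypothetical.
TRACKS, SECTORS, SECTOR_SIZE = 80, 9, 512
TRACK_BYTES = SECTORS * SECTOR_SIZE

disk = [bytearray(TRACK_BYTES) for _ in range(TRACKS)]

def write_visible(track40: int, data: bytes) -> None:
    """Write the 'normal' 40-track volume onto the even physical tracks."""
    disk[2 * track40][: len(data)] = data

def write_hidden(track40: int, data: bytes) -> None:
    """Stash data on the odd physical tracks, outside the visible volume."""
    disk[2 * track40 + 1][: len(data)] = data

write_visible(0, b"ordinary directory and file data")
write_hidden(0, b"secret or encrypted payload")
# A file-level copy reads only the allocated (even) tracks, so the
# odd-track payload does not propagate to the copy.
```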
https://en.wikipedia.org/wiki/40-track_mode
Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, then exactly or inexactly minimizes over the corresponding coordinate hyperplane while fixing all other coordinates or coordinate blocks. A line search along the coordinate direction can be performed at the current iterate to determine the appropriate step size. Coordinate descent is applicable in both differentiable and derivative-free contexts.

Coordinate descent is based on the idea that the minimization of a multivariable function F(x) can be achieved by minimizing it along one direction at a time, i.e., solving univariate (or at least much simpler) optimization problems in a loop.[1] In the simplest case of cyclic coordinate descent, one cyclically iterates through the directions, one at a time, minimizing the objective function with respect to each coordinate direction in turn. That is, starting with initial variable values x^0, round k+1 defines x^{k+1} from x^k by iteratively solving the single-variable optimization problems

x_i^{k+1} = argmin_y F(x_1^{k+1}, …, x_{i−1}^{k+1}, y, x_{i+1}^k, …, x_n^k)

for each variable x_i of x, for i from 1 to n. Thus, one begins with an initial guess x^0 for a local minimum of F and obtains a sequence x^0, x^1, x^2, … iteratively. By doing line search in each iteration, one automatically has F(x^0) ≥ F(x^1) ≥ F(x^2) ≥ ⋯. It can be shown that this sequence has similar convergence properties as steepest descent. No improvement after one cycle of line search along coordinate directions implies a stationary point is reached.

In the case of a continuously differentiable function F, a coordinate descent algorithm can be sketched as an outer loop over rounds and an inner loop over the coordinates, updating one coordinate at a time (a Python sketch of this scheme appears below).[1] The step size can be chosen in various ways, e.g., by solving for the exact minimizer of f(x_i) = F(x) (i.e., F with all variables but x_i fixed), or by traditional line search criteria.[1]

Coordinate descent has two problems. One of them is the case of a non-smooth objective function: a coordinate descent iteration may get stuck at a non-stationary point if the level curves of the function are not smooth. Suppose that the algorithm is at a point (−2, −2) from which it can only consider steps along the two axis-aligned directions. If every step along these two directions increases the objective function's value (assuming a minimization problem), the algorithm will not take any step, even though both steps together would bring the algorithm closer to the optimum. While this example shows that coordinate descent does not necessarily converge to the optimum, it is possible to show formal convergence under reasonable conditions.[3]

The other problem is difficulty in parallelism. Since the nature of coordinate descent is to cycle through the directions and minimize the objective function with respect to each coordinate direction, coordinate descent is not an obvious candidate for massive parallelism.
Recent research works have shown that massive parallelism is applicable to coordinate descent by relaxing the change of the objective function with respect to each coordinate direction.[4][5][6]

Coordinate descent algorithms are popular with practitioners owing to their simplicity, but the same property has led optimization researchers to largely ignore them in favor of more interesting (complicated) methods.[1] An early application of coordinate descent optimization was in the area of computed tomography,[7] where it has been found to have rapid convergence[8] and was subsequently used for clinical multi-slice helical scan CT reconstruction.[9] A cyclic coordinate descent algorithm (CCD) has been applied in protein structure prediction.[10] Moreover, there has been increased interest in the use of coordinate descent with the advent of large-scale problems in machine learning, where coordinate descent has been shown competitive to other methods when applied to such problems as training linear support vector machines[11] (see LIBLINEAR) and non-negative matrix factorization.[12] They are attractive for problems where computing gradients is infeasible, perhaps because the data required to do so are distributed across computer networks.[13]
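The cyclic scheme described above is easy to sketch concretely. The following is a minimal, hedged Python sketch; the crude grid-based line search and the function names are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def cyclic_coordinate_descent(F, x0, iters=200):
    """Cyclic coordinate descent with a crude grid-based line search.

    Each inner step scans trial offsets along one coordinate axis and
    keeps the best; a real implementation would use an exact or
    backtracking line search instead of a fixed grid.
    """
    x = np.asarray(x0, dtype=float)
    offsets = np.linspace(-1.0, 1.0, 41)   # candidate moves along the axis
    for _ in range(iters):
        for i in range(x.size):
            trials = np.tile(x, (offsets.size, 1))
            trials[:, i] += offsets
            values = [F(t) for t in trials]
            x = trials[int(np.argmin(values))]
    return x

# Example: a smooth convex quadratic with minimizer (3, -2).
F = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
print(cyclic_coordinate_descent(F, [0.0, 0.0]))   # approaches (3, -2)
```

Because the best trial offset is never worse than staying put, the iterates decrease F monotonically, mirroring the line-search property noted above.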
https://en.wikipedia.org/wiki/Coordinate_descent
In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation.[1] Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning). In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results.[2] For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.

As an effective method, an algorithm can be expressed within a finite amount of space and time[3] and in a well-defined formal language[4] for calculating a function.[5] Starting from an initial state and initial input (perhaps empty),[6] the instructions describe a computation that, when executed, proceeds through a finite[7] number of well-defined successive states, eventually producing "output"[8] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[9]

Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath.[10] Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name;[1] the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi".[2]

The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225.[11] By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation.[12][13] In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus.[14] By 1596, this form of the word was used in English, as algorithm, by Thomas Hood.[15]

One informal definition is "a set of rules that precisely defines a sequence of operations",[16] which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure[17] or cook-book recipe.[18] In general, a program is an algorithm only if it stops eventually[19]—even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols.[20]

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes Babylonian mathematics (around 2500 BC),[21] Egyptian mathematics (around 1550 BC),[21] Indian mathematics (around 800 BC and later),[22][23] the Ifa Oracle (around 500 BC),[24] Greek mathematics (around 240 BC),[25] Chinese mathematics (around 200 BC and later),[26] and Arabic mathematics (around 800 AD).[27]

The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC describes the earliest division algorithm.[21] During the Hammurabi dynasty, c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas.[28] Algorithms were also used in Babylonian astronomy: Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.[29]

Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus c. 1550 BC.[21] Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus,[30][25]: Ch 9.2 and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC).[25]: Ch 9.1 Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.[22]

The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.[27]

Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]", specifically the verge escapement mechanism[31] producing the tick and tock of a mechanical clock. "The accurate automatic machine"[32] led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century.[33] Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".

Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers.[34] By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape (c. 1870s) was in use, as were Hollerith cards (c. 1890). Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape.

Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea...
When the tinkering was over, Stibitz had constructed a binary adding device".[35][36]

In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability"[37] or "effective method".[38] Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms.

There are many possible representations, and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description.[39] A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine.[39] An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states.[39] In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.[39]

The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.

It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1), otherwise O(n) is required. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
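As a concrete illustration of these two cost comparisons, here is a short Python sketch (the function names are illustrative, not from the article): the running sum keeps only the total and its position in the loop, and binary search inspects O(log n) elements of a sorted list versus O(n) for a linear scan.

```python
def sum_list(numbers):
    """O(n) time, O(1) extra space: only the running total is stored."""
    total = 0
    for x in numbers:              # the loop index is the current position
        total += x
    return total

def binary_search(sorted_items, target):
    """O(log n) comparisons on a sorted list, versus O(n) for a scan."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # halve the search interval each step
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                      # not found

print(sum_list([1, 2, 3, 4]))              # 10
print(binary_search([2, 3, 5, 7, 11], 7))  # 3
```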
The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.

Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.[40] To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation relating to FFT algorithms (used heavily in the field of image processing) can decrease processing time up to 1,000 times for applications like medical imaging.[41] In general, speed improvements depend on special properties of the problem, which are very common in practical applications.[42] Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.

Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns,[43] with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe, e.g., an algorithm's run-time growth as the size of its input increases.[44]

Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types: conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand, "it is also possible, and not too hard, to write badly structured programs in a structured language".[45] Tausworthe augments the three Böhm-Jacopini canonical structures[46] (SEQUENCE, IF-THEN-ELSE, and WHILE-DO) with two more: DO-WHILE and CASE.[47] An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.[48]

By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v.
Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial,[49] and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).

Another way of classifying algorithms is by their design methodology or paradigm; common paradigms include brute-force search, divide and conquer, dynamic programming, and greedy methods. For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into a more specialized one, such as linear programming, dynamic programming, or heuristic methods.

One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in plain English as: examine each number in the list in turn, remembering the largest number seen so far; when the end of the list is reached, the largest number remembered is the answer. A more formal coding of the algorithm, written in prose but much closer to the high-level language of a computer program, would be given in pseudocode or pidgin code; a hedged Python rendering is sketched below.
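The article's own pseudocode did not survive extraction, so the following Python version of the high-level description is a stand-in: one idiomatic coding of the find-the-largest algorithm, not the article's exact pidgin code.

```python
def largest_number(numbers):
    """Return the largest item, examining every element exactly once."""
    if not numbers:
        raise ValueError("an empty list has no largest number")
    largest = numbers[0]             # assume the first item is largest so far
    for candidate in numbers[1:]:
        if candidate > largest:
            largest = candidate      # a larger item supersedes the previous best
    return largest

print(largest_number([7, 2, 9, 4]))  # 9
```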
https://en.wikipedia.org/wiki/Algorithm
In topological data analysis, persistent homology is a method for computing topological features of a space at different spatial resolutions. More persistent features are detected over a wide range of spatial scales and are deemed more likely to represent true features of the underlying space rather than artifacts of sampling, noise, or particular choice of parameters.[1]

To find the persistent homology of a space, the space must first be represented as a simplicial complex. A distance function on the underlying space corresponds to a filtration of the simplicial complex, that is, a nested sequence of increasing subsets. One common method of doing this is via taking the sublevel filtration of the distance to a point cloud, or equivalently, the offset filtration on the point cloud and taking its nerve in order to get the simplicial filtration known as the Čech filtration.[2] A similar construction uses a nested sequence of Vietoris–Rips complexes, known as the Vietoris–Rips filtration.[3]

Formally, consider a real-valued function on a simplicial complex f : K → ℝ that is non-decreasing on increasing sequences of faces, so f(σ) ≤ f(τ) whenever σ is a face of τ in K. Then for every a ∈ ℝ the sublevel set K_a = f⁻¹((−∞, a]) is a subcomplex of K, and the ordering of the values of f on the simplices in K (which is in practice always finite) induces an ordering on the sublevel complexes that defines a filtration

∅ = K_0 ⊆ K_1 ⊆ ⋯ ⊆ K_n = K.

When 0 ≤ i ≤ j ≤ n, the inclusion K_i ↪ K_j induces a homomorphism f_p^{i,j} : H_p(K_i) → H_p(K_j) on the simplicial homology groups for each dimension p. The p-th persistent homology groups are the images of these homomorphisms, and the p-th persistent Betti numbers β_p^{i,j} are the ranks of those groups.[4] Persistent Betti numbers for p = 0 coincide with the size function, a predecessor of persistent homology.[5]

Any filtered complex over a field F can be brought by a linear transformation preserving the filtration to so-called canonical form, a canonically defined direct sum of filtered complexes of two types: one-dimensional complexes with trivial differential d(e_{t_i}) = 0 and two-dimensional complexes with trivial homology d(e_{s_j + r_j}) = e_{r_j}.[6]

A persistence module over a partially ordered set P is a set of vector spaces U_t indexed by P, with a linear map u_t^s : U_s → U_t whenever s ≤ t, with u_t^t equal to the identity and u_t^s ∘ u_s^r = u_t^r for r ≤ s ≤ t. Equivalently, we may consider it as a functor from P considered as a category to the category of vector spaces (or R-modules).
There is a classification of persistence modules over a field F indexed by ℕ:

U ≃ ⊕_i x^{t_i}·F[x] ⊕ (⊕_j x^{r_j}·(F[x]/(x^{s_j}·F[x]))).

Multiplication by x corresponds to moving forward one step in the persistence module. Intuitively, the free parts on the right side correspond to the homology generators that appear at filtration level t_i and never disappear, while the torsion parts correspond to those that appear at filtration level r_j and last for s_j steps of the filtration (or equivalently, disappear at filtration level s_j + r_j).[7][6]

Each of these two theorems allows us to uniquely represent the persistent homology of a filtered simplicial complex with a persistence barcode or persistence diagram. A barcode represents each persistent generator with a horizontal line beginning at the first filtration level where it appears and ending at the filtration level where it disappears, while a persistence diagram plots a point for each generator with its x-coordinate the birth time and its y-coordinate the death time. Equivalently, the same data is represented by Barannikov's canonical form,[6] where each generator is represented by a segment connecting the birth and the death values plotted on separate lines for each p.

Persistent homology is stable in a precise sense, which provides robustness against noise. The bottleneck distance is a natural metric on the space of persistence diagrams, given by

W_∞(X, Y) := inf_{φ : X → Y} sup_{x ∈ X} ‖x − φ(x)‖_∞,

where φ ranges over bijections. A small perturbation in the input filtration leads to a small perturbation of its persistence diagram in the bottleneck distance. For concreteness, consider a filtration on a space X homeomorphic to a simplicial complex determined by the sublevel sets of a continuous tame function f : X → ℝ. The map D taking f to the persistence diagram of its k-th homology is 1-Lipschitz with respect to the sup-metric on functions and the bottleneck distance on persistence diagrams. That is, W_∞(D(f), D(g)) ≤ ‖f − g‖_∞.[8]

The principal algorithm is based on bringing the filtered complex to its canonical form by upper-triangular matrices and runs in worst-case cubic time in the number of simplices.[6] The fastest known algorithm for computing persistent homology runs in matrix multiplication time.[9] Since the number of simplices is highly relevant for computation time, finding filtered simplicial complexes with few simplices is an active research area. Several approaches have been proposed to reduce the number of simplices in a filtered simplicial complex in order to approximate persistent homology.[10][11][12][13] There are various software packages for computing persistence intervals of a finite filtration.[14]
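As a concrete, minimal illustration (a hedged sketch, not one of the optimized algorithms cited above), the 0-dimensional persistence of a Vietoris–Rips filtration on a finite point set can be computed with a union-find structure: every point is a component born at filtration value 0, and a component dies at the edge length that merges it into another.

```python
import itertools, math

def zeroth_persistence(points):
    """0-dimensional persistence pairs (birth, death) for a Rips filtration.

    Kruskal-style sketch: process edges in order of length; each union
    kills one component at the current edge length.
    """
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(len(points)), 2)
    )
    pairs = []
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            pairs.append((0.0, length))     # a component born at 0 dies here
    pairs.append((0.0, math.inf))           # one component persists forever
    return pairs

# Two well-separated pairs of points: deaths at 1.0, 1.0, 5.0, and infinity.
print(zeroth_persistence([(0, 0), (0, 1), (5, 0), (5, 1)]))
```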
https://en.wikipedia.org/wiki/Persistent_homology
The zero-forcing equalizer is a form of linear equalization algorithm used in communication systems which applies the inverse of the frequency response of the channel. This form of equalizer was first proposed by Robert Lucky. The zero-forcing equalizer applies the inverse of the channel frequency response to the received signal, to restore the signal after the channel.[1] It has many useful applications. For example, it is studied heavily for IEEE 802.11n (MIMO), where knowing the channel allows recovery of the two or more streams which will be received on top of each other on each antenna. The name zero-forcing corresponds to bringing down the intersymbol interference (ISI) to zero in a noise-free case. This will be useful when ISI is significant compared to noise.

For a channel with frequency response F(f), the zero-forcing equalizer C(f) is constructed by C(f) = 1/F(f). Thus the combination of channel and equalizer gives a flat frequency response and linear phase: F(f)C(f) = 1.

In reality, zero-forcing equalization does not work in most applications, for two reasons. First, even where the channel impulse response has finite length, the impulse response of the exact inverse equalizer is generally infinitely long. Second, at frequencies where the channel response is weak, the equalizer gain 1/F(f) becomes very large, so noise is greatly amplified. The second problem is often the more limiting condition. These problems are addressed in the linear MMSE equalizer[2] by making a small modification to the denominator of C(f): C(f) = 1/(F(f) + k), where k is related to the channel response and the signal SNR.

If the channel response (or channel transfer function) for a particular channel is H(s), then the input signal is multiplied by the reciprocal of it. This is intended to remove the effect of the channel from the received signal, in particular the intersymbol interference (ISI). The zero-forcing equalizer removes all ISI, and is ideal when the channel is noiseless. However, when the channel is noisy, the zero-forcing equalizer will amplify the noise greatly at frequencies f where the channel response H(j2πf) has a small magnitude (i.e. near zeroes of the channel) in the attempt to invert the channel completely. A more balanced linear equalizer in this case is the minimum mean-square error equalizer, which does not usually eliminate ISI completely but instead minimizes the total power of the noise and ISI components in the output.
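A hedged numpy sketch of the frequency-domain idea follows: invert the channel response directly (zero-forcing), then compare against a regularized variant. Note the sketch uses the common conj(F)/(|F|² + k) regularization rather than the article's 1/(F(f) + k) form; the channel taps, noise level, and k are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
symbols = rng.choice([-1.0, 1.0], size=n)            # BPSK-like test signal

# Toy dispersive channel (3-tap impulse response) plus additive noise.
h = np.array([1.0, 0.5, 0.2])
F = np.fft.fft(h, n)                                  # channel frequency response
received = np.fft.ifft(np.fft.fft(symbols) * F).real  # circular convolution
received += 0.05 * rng.standard_normal(n)

# Zero-forcing: C(f) = 1/F(f). Amplifies noise wherever |F(f)| is small.
zf = np.fft.ifft(np.fft.fft(received) / F).real

# Regularized (MMSE-flavored) variant; k stands in for the SNR-dependent term.
k = 0.05
mmse = np.fft.ifft(np.fft.fft(received) * np.conj(F) / (np.abs(F) ** 2 + k)).real

print("ZF symbol error rate:  ", np.mean(np.sign(zf) != symbols))
print("MMSE symbol error rate:", np.mean(np.sign(mmse) != symbols))
```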
https://en.wikipedia.org/wiki/Zero-forcing_equalizer
This is a list of genetic algorithm (GA) applications.
https://en.wikipedia.org/wiki/List_of_genetic_algorithm_applications
The dash is a punctuation mark consisting of a long horizontal line. It is similar in appearance to the hyphen but is longer and sometimes higher from the baseline. The most common versions are the en dash –, generally longer than the hyphen but shorter than the minus sign; the em dash —, longer than either the en dash or the minus sign; and the horizontal bar ―, whose length varies across typefaces but tends to be between those of the en and em dashes.[a]

Typical uses of dashes are to mark a break in a sentence, to set off an explanatory remark (similar to parentheses), or to show spans of time or ranges of values. The em dash is sometimes used as a leading character to identify the source of a quoted text.

In the early 17th century, in Okes-printed plays of William Shakespeare, dashes are attested that indicate a thinking pause, interruption, mid-speech realization, or change of subject.[1] The dashes are variously longer ⸺ (as in King Lear reprinted 1619) or composed of hyphens --- (as in Othello printed 1622); moreover, the dashes are often, but not always, prefixed by a comma, colon, or semicolon.[2][3][1][4]

In 1733, in Jonathan Swift's On Poetry, the terms break and dash are attested for the ⸺ and — marks:[5]

Blot out, correct, insert, refine,
Enlarge, diminish, interline;
Be mindful, when Invention fails;
To scratch your Head, and bite your Nails.
Your poem finish'd, next your Care
Is needful, to transcribe it fair.
In modern Wit all printed Trash, is
Set off with num'rous Breaks⸺and Dashes—

Usage varies both within English and within other languages, but the usual conventions for the most common dashes in printed English text are illustrated by examples such as these:

Glitter, felt, yarn, and buttons—his kitchen looked as if a clown had exploded.
A flock of sparrows—some of them juveniles—alighted and sang.
Glitter, felt, yarn, and buttons – his kitchen looked as if a clown had exploded.
A flock of sparrows – some of them juveniles – alighted and sang.
The French and Indian War (1754–1763) was fought in western Pennsylvania and along the present US–Canada border.
Seven social sins: politics without principles, wealth without work, pleasure without conscience, knowledge without character, commerce without morality, science without humanity, and worship without sacrifice.

The figure dash ‒ (U+2012 FIGURE DASH) has the same width as a numerical digit. (Many computer fonts have digits of equal width.[9]) It is used within numbers, such as the phone number 555‒0199, especially in columns, so as to maintain alignment. In contrast, the en dash – (U+2013 EN DASH) is generally used for a range of values.[10]

The minus sign − (U+2212 MINUS SIGN) glyph is generally set a little higher, so as to be level with the horizontal bar of the plus sign. In informal usage, the hyphen-minus - (U+002D HYPHEN-MINUS), provided as standard on most keyboards, is often used instead of the figure dash. In TeX, the standard fonts have no figure dash; however, the digits normally all have the same width as the en dash, so an en dash can be a substitution for the figure dash. In XeLaTeX, one can use \char"2012.[11] The Linux Libertine font also has the figure dash glyph.
The en dash, en rule, or nut dash[12] (–) is traditionally half the width of an em dash.[13][14] In modern fonts, the length of the en dash is not standardized, and the en dash is often more than half the width of the em dash.[15] The widths of en and em dashes have also been specified as being equal to those of the uppercase letters N and M, respectively,[16][17] and at other times to the widths of the lower-case letters.[15][18] The three main uses of the en dash are discussed in turn below.

The en dash is commonly used to indicate a closed range of values – a range with clearly defined and finite upper and lower boundaries – roughly signifying what might otherwise be communicated by the word "through" in American English, or "to" in International English.[19] This may include ranges such as those between dates, times, or numbers.[20][21][22][23] Various style guides restrict this range indication style to only parenthetical or tabular matter, requiring "to" or "through" in running text. Preference for hyphen vs. en dash in ranges varies. For example, the APA style (named after the American Psychological Association) uses an en dash in ranges, but the AMA style (named after the American Medical Association) uses a hyphen.

Some style guides (including the Guide for the Use of the International System of Units (SI) and the AMA Manual of Style) recommend that, when a number range might be misconstrued as subtraction, the word "to" should be used instead of an en dash. For example, "a voltage of 50 V to 100 V" is preferable to "a voltage of 50–100 V". Relatedly, in ranges that include negative numbers, "to" is used to avoid ambiguity or awkwardness (for example, "temperatures ranged from −18 °C to −34 °C"). It is also considered poor style (best avoided) to use the en dash in place of the words "to" or "and" in phrases that follow the forms from X to Y and between X and Y.[21][22]

The en dash is also used to contrast values or illustrate a relationship between two things.[20][23] A distinction is often made between "simple" attributive compounds (written with a hyphen) and other subtypes (written with an en dash); at least one authority considers name pairs, where the paired elements carry equal weight, as in the Taft–Hartley Act, to be "simple",[21] while others consider an en dash appropriate in instances such as these[24][25][26] to represent the parallel relationship, as in the McCain–Feingold bill or Bose–Einstein statistics.

When an act of the U.S. Congress is named using the surnames of the senator and representative who sponsored it, the hyphen-minus is used in the short title; thus, the short title of Public Law 111–203 is "The Dodd-Frank Wall Street Reform and Consumer Protection Act", with a hyphen-minus rather than an en dash between "Dodd" and "Frank".[27] However, there is a difference between something named for a parallel/coordinate relationship between two people – for example, Satyendra Nath Bose and Albert Einstein – and something named for a single person who had a compound surname, which may be written with a hyphen or a space but not an en dash – for example, the Lennard-Jones potential [hyphen] is named after one person (John Lennard-Jones), as are Bence Jones proteins and Hughlings Jackson syndrome. Copyeditors use dictionaries (general, medical, biographical, and geographical) to confirm the eponymity (and thus the styling) for specific terms, given that no one can know them all offhand.
Preference for an en dash instead of a hyphen in these coordinate/relationship/connection types of terms is a matter of style, not inherent orthographic "correctness"; both are equally "correct", and each is the preferred style in some style guides. For example, The American Heritage Dictionary of the English Language, the AMA Manual of Style, and Dorland's medical reference works use hyphens, not en dashes, in coordinate terms (such as "blood-brain barrier"), in eponyms (such as "Cheyne-Stokes respiration" and "Kaplan-Meier method"), and so on. In other styles, such as AP style or Chicago style, the en dash is used to describe two closely related entities in a formal manner.

In English, the en dash is usually used instead of a hyphen in compound (phrasal) attributives in which one or both elements is itself a compound, especially when the compound element is an open compound, meaning it is not itself hyphenated.[21][22][28][29] The disambiguating value of the en dash in these patterns was illustrated by Strunk and White in The Elements of Style with the following example: when Chattanooga News and Chattanooga Free Press merged, the joint company was inaptly named Chattanooga News-Free Press (using a hyphen), which could be interpreted as meaning that their newspapers were news-free.[30]

An exception to the use of en dashes is usually made when prefixing an already-hyphenated compound; an en dash is generally avoided as a distraction in this case.[30] An en dash can be retained to avoid ambiguity, but whether any ambiguity is plausible is a judgment call; AMA style retains the en dashes in such cases.[31]

As discussed above, the en dash is sometimes recommended instead of a hyphen in compound adjectives where neither part of the adjective modifies the other—that is, when each modifies the noun, as in love–hate relationship. The Chicago Manual of Style (CMOS), however, limits the use of the en dash to two main purposes, favoring hyphens in instances where some other guides suggest en dashes; the 16th edition explains that "Chicago's sense of the en dash does not extend to between", to rule out its use in "US–Canadian relations".[33]

In these two uses, en dashes normally do not have spaces around them. Some make an exception when they believe avoiding spaces may cause confusion or look odd. For example, compare "12 June – 3 July" with "12 June–3 July".[34] However, other authorities disagree and state there should be no space between an en dash and adjacent text. These authorities would not use a space in, for example, "11:00 a.m.–1:00 p.m."[35] or "July 9–August 17".[36][37]

En dashes can be used instead of pairs of commas that mark off a nested clause or phrase. They can also be used around parenthetical expressions – such as this one – rather than the em dashes preferred by some publishers.[38][8] The en dash can also signify a rhetorical pause. For example, an opinion piece from The Guardian is entitled: "Who is to blame for the sweltering weather? My kids say it's boomers – and me".[39] In these situations, en dashes must have a single space on each side.[8]

In most uses of en dashes, such as when used in indicating ranges, they are typeset closed up to the adjacent words or numbers. Examples include "the 1914–18 war" or "the Dover–Calais crossing".
It is only when en dashes are used in setting off parenthetical expressions – such as this one – that they take spaces around them.[40] For more on the choice of em versus en in this context, see the comparison of the en dash and em dash below.

When an en dash is unavailable in a particular character encoding environment—as in the ASCII character set—there are some conventional substitutions. Often two consecutive hyphens are the substitute. The en dash is encoded in Unicode as U+2013 (decimal 8211) and represented in HTML by the named character entity &ndash;.

The en dash is sometimes used as a substitute for the minus sign, when the minus sign character is not available, since the en dash is usually the same width as a plus sign and is often available when the minus sign is not; see below. For example, the original 8-bit Macintosh Character Set had an en dash, useful for the minus sign, years before Unicode with a dedicated minus sign was available. The hyphen-minus is usually too narrow to make a typographically acceptable minus sign. However, the en dash cannot be used for a minus sign in programming languages because the syntax usually requires a hyphen-minus. Either the en dash or the em dash may be used as a bullet at the start of each item in a bulleted list.

The em dash, em rule, or mutton dash[12] (—) is longer than an en dash. The character is called an em dash because it is one em wide, a length that varies depending on the font size. One em is the same length as the font's height (which is typically measured in points). So in 9-point type, an em dash is nine points wide, while in 24-point type the em dash is 24 points wide. By comparison, the en dash, with its 1 en width, is in most fonts either a half-em wide[41] or the width of an upper-case "N".[42] The em dash is encoded in Unicode as U+2014 (decimal 8212) and represented in HTML by the named character entity &mdash;.

The em dash is used in several ways. It is primarily used in places where a set of parentheses or a colon might otherwise be used,[43][full citation needed] and it can also show an abrupt change in thought (or an interruption in speech) or be used where a full stop (period) is too strong and a comma is too weak (similar to a semicolon). Em dashes are also used to set off summaries or definitions.[44]

It may indicate an interpolation stronger than that demarcated by parentheses, as in a passage from Nicholson Baker's The Mezzanine (the degree of difference is subjective). In a related use, it may visually indicate the shift between speakers when they overlap in speech; for example, the em dash is used this way in Joseph Heller's Catch-22:

Lord Cardinal! if thou think'st on heaven's bliss,
Hold up thy hand, make signal of that hope.
—He dies, and makes no sign!

This is a quotation dash. It may be distinct from an em dash in its coding (see horizontal bar). It may be used to indicate turns in a dialogue, in which case each dash starts a paragraph.[46] It replaces other quotation marks and was preferred by authors such as James Joyce:[47]

The Walrus and the Carpenter
Were walking close at hand;
They wept like anything to see
Such quantities of sand:
"If this were only cleared away,"
They said, "it would be grand!"

An em dash may be used to indicate omitted letters in a word redacted to an initial or single letter, or to fillet a word, by leaving the start and end letters whilst replacing the middle letters with a dash or dashes (for censorship or simply data anonymization). It may also censor the end letter. In this use, it is sometimes doubled.
Three em dashes might be used to indicate a completely missing word.[48] Either the en dash or the em dash may be used as a bullet at the start of each item in a bulleted list, but a plain hyphen is more commonly used. Three em dashes one after another can be used in a footnote, endnote, or another form of bibliographic entry to indicate repetition of the same author's name as that of the previous work,[48] which is similar to the use of id.

According to most American sources (such as The Chicago Manual of Style) and some British sources (such as The Oxford Guide to Style), an em dash should always be set closed, meaning it should not be surrounded by spaces. But the practice in some parts of the English-speaking world, including the style recommended by The New York Times Manual of Style and Usage for printed newspapers and the AP Stylebook, sets it open, separating it from its surrounding words by using spaces or hair spaces (U+200A) when it is being used parenthetically.[49][50] The AP Stylebook rejects the use of the open em dash to set off introductory items in lists. However, the "space, en dash, space" sequence is the predominant style in German and French typography. (See the comparison of the en dash and em dash below.)

In Canada, The Canadian Style: A Guide to Writing and Editing, The Oxford Canadian A to Z of Grammar, Spelling & Punctuation: Guide to Canadian English Usage (2nd ed.), Editing Canadian English, and the Canadian Oxford Dictionary all specify that an em dash should be set closed when used between words, a word and numeral, or two numerals. The Australian government's Style Manual for Authors, Editors and Printers (6th ed.) also specifies that em dashes inserted between words, a word and numeral, or two numerals should be set closed. A section on the 2-em rule (⸺) also explains that the 2-em can be used to mark an abrupt break in direct or reported speech, but a space is used before the 2-em if a complete word is missing, while no space is used if part of a word exists before the sudden break.

When an em dash is unavailable in a particular character encoding environment—as in the ASCII character set—it has usually been approximated as consecutive double (--) or triple (---) hyphen-minuses. The two-hyphen em dash proxy is perhaps more common, being a widespread convention in the typewriting era. (It is still described for hard copy manuscript preparation in The Chicago Manual of Style as of the 16th edition, although the manual conveys that typewritten manuscript and copyediting on paper are now dated practices.) The three-hyphen em dash proxy was popular with various publishers because the sequence of one, two, or three hyphens could then correspond to the hyphen, en dash, and em dash, respectively. Because early comic book letterers were not aware of the typographic convention of replacing a typewritten double hyphen with an em dash, the double hyphen became traditional in American comics. This practice has continued despite the development of computer lettering.[51][52]

The en dash is wider than the hyphen but not as wide as the em dash. An em width is defined as the point size of the currently used font, since the M character is not always the width of the point size.[53] In running text, various dash conventions are employed: an em dash—like so—or a spaced em dash — like so — or a spaced en dash – like so – can be seen in contemporary publications. Various style guides and national varieties of languages prescribe different guidance on dashes.
Dashes have been cited as being treated differently in the US and the UK, with the former preferring the use of an em dash with no additional spacing and the latter preferring a spaced en dash.[38] As examples of the US style, The Chicago Manual of Style and The Publication Manual of the American Psychological Association recommend unspaced em dashes. Style guides outside the US are more variable. For example, The Elements of Typographic Style by Canadian typographer Robert Bringhurst recommends the spaced en dash – like so – and argues that the length and visual magnitude of an em dash "belongs to the padded and corseted aesthetic of Victorian typography".[8] In the United Kingdom, the spaced en dash is the house style for certain major publishers, including the Penguin Group, the Cambridge University Press, and Routledge. However, this convention is not universal. The Oxford Guide to Style (2002, section 5.10.10) acknowledges that the spaced en dash is used by "other British publishers" but states that the Oxford University Press, like "most US publishers", uses the unspaced em dash. Fowler's Modern English Usage, saying that it is summarising the New Hart's Rules, describes the principal uses of the em dash as "a single dash used to introduce an explanation or expansion" and "a pair of dashes used to indicate asides and parentheses", without stipulating whether it should be spaced but giving only unspaced examples.[54]

The en dash – always with spaces in running text when, as discussed in this section, indicating a parenthesis or pause – and the spaced em dash both have a certain technical advantage over the unspaced em dash. Most typesetting and word processing expects word spacing to vary to support full justification. Alone among punctuation that marks pauses or logical relations in text, the unspaced em dash disables this for the words it falls between. This can cause uneven spacing in the text, but can be mitigated by the use of thin spaces, hair spaces, or even zero-width spaces on the sides of the em dash. This provides the appearance of an unspaced em dash but allows the words and dashes to break between lines. The spaced em dash risks introducing excessive separation of words: in full justification, the adjacent spaces may be stretched, and the separation of words further exaggerated.

En dashes may also be preferred to em dashes when text is set in narrow columns, such as in newspapers and similar publications, since the en dash is smaller. In such cases, its use is based purely on space considerations and is not necessarily related to other typographical concerns. On the other hand, a spaced en dash may be ambiguous when it is also used for ranges, for example, in dates or between geographical locations with internal spaces.

The horizontal bar (U+2015 HORIZONTAL BAR), also known as a quotation dash, is used to introduce quoted text. This is the standard method of printing dialogue in some languages. The em dash is equally suitable if the quotation dash is unavailable or is contrary to the house style being used. There is no support in the standard TeX fonts, but one can use \hbox{---}\kern-.5em--- or an em dash.

The swung dash (U+2053 SWUNG DASH) resembles a lengthened tilde and is used to separate alternatives or approximates. In dictionaries, it is frequently used to stand in for the term being defined.
A dictionary entry providing an example for the term henceforth might employ the swung dash in place of the word itself within the example sentence.

The original article follows with two comparison tables: one listing Unicode characters with the property Dash=yes,[55] the other listing dash-like characters with the property Dash=no; an "Em and 5×" column uses a capital M alongside five copies of each dash to show its vertical position.

In many languages, such as Polish, the em dash is used as an opening quotation mark. There is no matching closing quotation mark; typically a new paragraph will be started, introduced by a dash, for each turn in the dialogue.[citation needed]

Corpus studies indicate that em dashes are more commonly used in Russian than in English.[59] In Russian, the em dash is used for the present copula (meaning 'am/is/are'), which is unpronounced in spoken Russian.

In French and Italian, em or en dashes can be used as parentheses (brackets), but the use of a second dash as a closing parenthesis is optional. When a closing dash is not used, the sentence is ended with a period (full stop) as usual. Dashes are, however, much less common than parentheses.[citation needed]

In Spanish, em dashes can be used to mark off parenthetical phrases. Unlike in English, the em dashes are spaced like brackets, i.e., there is a space between the main sentence and the dash, but not between the parenthetical phrase and the dash.[60] For example: "Llevaba la fidelidad a su maestro —un buen profesor— hasta extremos insospechados." (In English: 'He took his loyalty to his teacher – a good teacher – to unsuspected extremes.')[61]
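Since the comparison tables themselves did not survive extraction, a small Python listing of the code points named in this article may serve as a stand-in (the selection is ours, not the full Unicode Dash=yes set):

```python
# Dash and dash-like characters discussed above, by Unicode code point.
dashes = {
    "hyphen-minus":   "\u002D",
    "figure dash":    "\u2012",
    "en dash":        "\u2013",
    "em dash":        "\u2014",
    "horizontal bar": "\u2015",
    "swung dash":     "\u2053",
    "minus sign":     "\u2212",
    "two-em dash":    "\u2E3A",
}
for name, ch in dashes.items():
    print(f"U+{ord(ch):04X}  {name:15s} {ch}")
```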
https://en.wikipedia.org/wiki/En_dash
A touch switch is a type of switch that only has to be touched by an object to operate. It is used in many lamps and wall switches that have a metal exterior, as well as on public computer terminals. A touchscreen includes an array of touch switches on a display. A touch switch is the simplest kind of tactile sensor. Three types of switches are called touch switches: capacitance switches, resistance switches, and piezo switches.

A self-capacitance switch needs only one electrode to function. The electrode can be placed behind a non-conductive panel such as wood, glass, or plastic. The switch works using body capacitance, the capacitance the human body presents to its surroundings. The switch keeps charging and discharging its metal exterior to detect changes in capacitance. When a person touches it, their body increases the capacitance and triggers the switch.

Unlike self-capacitance, mutual-capacitance touch is based on capacitance changes between two electrodes. This system employs two sets of electrodes—transmitting electrodes (Tx) and receiving electrodes (Rx). When a user's finger or another object approaches these electrodes, it disrupts the electric field between them, resulting in a change in capacitance value. Mutual capacitance is also known as projected capacitance. The advantages of mutual-capacitance technology include tight electric field coupling, allowing for more flexible design; for example, keyboards can have closely grouped keys without worrying about cross-coupling. However, mutual capacitance also has its limitations, such as its measurement noise being generally greater than that of self-capacitance. Capacitance switches are available commercially as integrated circuits from a number of manufacturers. These devices can also be used as a short-range proximity sensor.

A resistance switch needs two electrodes to be physically in contact with something electrically conductive (for example a finger) to operate. It works by lowering the resistance between two pieces of metal and is thus much simpler in construction than the capacitance switch. Placing one or two fingers across the plates achieves a turn-on or closed state; removing the finger(s) from the metal pieces turns the device off. One implementation of a resistance touch switch would be two Darlington-paired transistors where the base of the first transistor is connected to one of the electrodes. Alternatively, an N-channel, enhancement-mode, metal-oxide field-effect transistor can be used, with its gate connected to one of the electrodes and the other electrode connected through a resistance to a positive voltage.

Piezo touch switches are based on mechanical bending of a piezo ceramic, typically constructed directly behind a surface. This solution enables touch interfaces with any kind of material. Another characteristic of piezo is that it can function as an actuator as well. Current commercial solutions construct the piezo in such a way that touching it with approximately 1.5 N of force is enough, even for stiff materials like stainless steel. Piezo touch switches are available commercially.

Piezo switches respond to a mechanical force applied to the switch; the switch will operate regardless of whether force is applied through insulating or conducting materials. Capacitive switches respond to an electric field applied to the switch; the field will pass through thin gloves, but not through thick gloves.[1] Piezo switches usually cost more than capacitive switches.[1]
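As a rough illustration of the self-capacitance charge/discharge scheme described above, here is a hedged Raspberry Pi-style sketch. It assumes the RPi.GPIO library, an external resistor of roughly 1 MΩ from a 3.3 V supply to the sense pad, an arbitrary pin number, and a 2× detection threshold; none of these come from the article.

```python
import RPi.GPIO as GPIO   # assumes a Raspberry Pi; the pin choice is arbitrary

SENSE = 17                # GPIO pin wired to the pad and, via ~1 MOhm, to 3.3 V

def charge_time(pin: int) -> int:
    """Count polling iterations until the pad charges through the resistor.

    A touching finger adds body capacitance, so the count rises noticeably.
    """
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)          # discharge the pad
    GPIO.setup(pin, GPIO.IN)            # let it recharge via the resistor
    count = 0
    while GPIO.input(pin) == GPIO.LOW and count < 100_000:
        count += 1
    return count

GPIO.setmode(GPIO.BCM)
baseline = charge_time(SENSE)           # untouched reference reading
while True:
    if charge_time(SENSE) > 2 * baseline:   # threshold is an assumption
        print("touch detected")
```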
https://en.wikipedia.org/wiki/Touch_sensor
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations.

Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847),[1] and set forth more fully in his An Investigation of the Laws of Thought (1854).[2] According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913,[3] although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880.[4] Boolean algebra has been fundamental in the development of digital electronics and is provided for in all modern programming languages. It is also used in set theory and statistics.[5]

A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of the algebra of concepts.[6] Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.[7]

Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields.[8] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure.[8] For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.[9][10]

In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting,[11] and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[12][13][14]

Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[15]

Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra.
Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[16][17][18] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[8] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity. Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra they denote the truth values false and true. These values are represented with the bits 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented). Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can associate the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables.[19] While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND (∧) and OR (∨) and the unary operator NOT (¬), collectively referred to as Boolean operators.[20] Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables; they are used to store either true or false values.[21] The basic operations on Boolean variables x and y are defined as follows. Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables.[22] When used in expressions, the operators are applied according to precedence rules: as with elementary algebra, expressions in parentheses are evaluated first.[23]
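The basic operations and their arithmetic encodings can be tabulated mechanically; a minimal sketch over all four inputs:

```python
# Enumerate the basic Boolean operations over {0, 1}, together with the
# arithmetic encodings mentioned above (AND as multiplication, OR as
# x + y - xy, NOT as 1 - x) and the min/max formulation.

AND = lambda x, y: x & y
OR  = lambda x, y: x | y
NOT = lambda x: 1 - x

for x in (0, 1):
    for y in (0, 1):
        assert AND(x, y) == x * y == min(x, y)      # conjunction
        assert OR(x, y) == x + y - x * y == max(x, y)  # disjunction
        print(f"x={x} y={y}  x∧y={AND(x, y)}  x∨y={OR(x, y)}  ¬x={NOT(x)}")
```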
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions. One might consider that only negation and one of the two other operations are basic, because of the identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws).[24] Operations composed from the basic operations include, among others, derived operations such as ⊕, →, and ≡, discussed below. These definitions give rise to truth tables giving the values of these operations for all four possible inputs. A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old, as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra). Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular, the following laws are common to both kinds of algebra.[25][26] The following laws hold in Boolean algebra, but not in ordinary algebra. Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on). All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows.[5] The complement operation is defined by the following two laws. All properties of negation, including the laws below, follow from the above two laws alone.[5] In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called the involution law). But whereas ordinary algebra satisfies the two laws, Boolean algebra satisfies De Morgan's laws. The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws, or axiomatization, of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms, as treated in § Boolean algebras. Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras. This axiomatization is by no means the only one, or even necessarily the most natural, given that attention was not paid as to whether some of the axioms followed from others; there was simply a choice to stop when enough laws had been noticed, a matter treated further in § Axiomatizing Boolean algebra. Alternatively, the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1.[27][28] All these definitions of Boolean algebra can be shown to be equivalent.
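Because every variable ranges over just two values, such an axiom set can be verified exhaustively; a minimal sketch:

```python
# Exhaustively verify a complete axiom set for Boolean algebra over {0, 1}:
# monotone laws plus the two complementation laws, with De Morgan derived.
from itertools import product

A = lambda x, y: x & y   # conjunction
O = lambda x, y: x | y   # disjunction
N = lambda x: 1 - x      # complement

for x, y, z in product((0, 1), repeat=3):
    assert A(x, A(y, z)) == A(A(x, y), z)             # associativity of ∧
    assert O(x, O(y, z)) == O(O(x, y), z)             # associativity of ∨
    assert A(x, y) == A(y, x) and O(x, y) == O(y, x)  # commutativity
    assert A(x, O(x, y)) == x and O(x, A(x, y)) == x  # absorption
    assert A(x, O(y, z)) == O(A(x, y), A(x, z))       # distributivity
    assert A(x, N(x)) == 0 and O(x, N(x)) == 1        # complementation
    assert N(A(x, y)) == O(N(x), N(y))                # De Morgan (derived)
print("all checked laws hold over {0, 1}")
```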
Principle: if (X, R) is a partially ordered set, then (X, R⁻¹), with the inverse relation, is also a partially ordered set. There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra, because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used. But if, in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial. When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged. One change not needed as part of this interchange is complementation: complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t. The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.[5]: 21–22
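The self-duality claims above can be verified mechanically; a minimal sketch for the ternary example (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x):

```python
# An operation f is self-dual when complementing all inputs and the output
# leaves it unchanged, i.e. f(x, y, z) == ¬f(¬x, ¬y, ¬z).
from itertools import product

def maj(x, y, z):
    return (x & y) | (y & z) | (z & x)   # the majority operation

assert all(maj(x, y, z) == 1 - maj(1 - x, 1 - y, 1 - z)
           for x, y, z in product((0, 1), repeat=3))
print("(x∧y) ∨ (y∧z) ∨ (z∧x) is self-dual")
```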
A Venn diagram[29] can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention). The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x. For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations. The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle. While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation. Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because interchanging x and y would have the effect of reflecting the diagram horizontally, and any failure of commutativity would then appear as a failure of symmetry. Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨. To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x ∧ y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle. The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle. To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged. The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle. Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation.
The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows.[30] The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground", while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports. Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port. The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter, however, leaves the operation unchanged. More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y. The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion. A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X.[5] (Historically, X itself was required to be nonempty as well, to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations, since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations (0 ≠ 1 does not count, being a negated equation). Hence modern authors allow the degenerate Boolean algebra and let X be empty.) Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable. Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set.
It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide. Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers. Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions, and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves. A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0, 1, 2, ..., 31}, with 0 and 31 indexing the low- and high-order bits respectively. For a smaller example, if X = {a, b, c} where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b, c}, {a}, {a, c}, {a, b}, and {a, b, c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0, 1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0, 1] either black or white independently; the black points then form an arbitrary subset of [0, 1]). From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively. The set {0, 1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which, by the identification of bit vectors with subsets, can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation. This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one, since it is concrete. Conversely, any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position, because there is only one empty bit vector.
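The quoted bit-vector identities can be checked directly with Python's bitwise operators (a 4-bit mask stands in for complement relative to X):

```python
# Bit-vector realizations of intersection, union, and complement, checked
# on 4-bit vectors; masking with 0b1111 keeps ¬ within four bits.
WIDTH_MASK = 0b1111

assert 0b1010 & 0b0110 == 0b0010        # intersection
assert 0b1010 | 0b0110 == 0b1110        # union
assert ~0b1010 & WIDTH_MASK == 0b0101   # complement relative to X
```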
The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete. The Boolean algebras so far have all been concrete, consisting of bit vectors or, equivalently, of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra. Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition. For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field, etc. characteristic of modern or abstract algebra. Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition. The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition. Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x) can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions. However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion. The next question is answered positively as follows. That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability; it is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here: for example, a relation algebra is a Boolean algebra with additional structure, but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.
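The divisor example lends itself to a direct check; a minimal sketch for n = 30, using nothing beyond the gcd, lcm, and quotient operations named above:

```python
# Over the divisors of square-free n = 30, gcd plays ∧, lcm plays ∨, and
# x ↦ n // x plays ¬. Spot-check complementation and De Morgan's law.
from itertools import product
from math import gcd

n = 30
divisors = [d for d in range(1, n + 1) if n % d == 0]  # 1,2,3,5,6,10,15,30

lcm = lambda a, b: a * b // gcd(a, b)
comp = lambda x: n // x

for x, y in product(divisors, repeat=2):
    assert gcd(x, comp(x)) == 1 and lcm(x, comp(x)) == n  # complementation
    assert comp(gcd(x, y)) == lcm(comp(x), comp(y))       # De Morgan
print("the divisors of 30 satisfy the checked Boolean laws")
```

Representing each divisor by its set of prime factors ({2, 3, 5} for 30) makes these checks reduce to ordinary set identities, which is exactly the isomorphism described above.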
The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold. In the case of Boolean algebras, the answer is yes: the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based. Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law; one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice. By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom ((a ∣ b) ∣ c) ∣ (a ∣ ((a ∣ c) ∣ a)) = c is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra.[32] Propositional logic is a logical system that is intimately connected to Boolean algebra.[5] Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra. Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ...; Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions. The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered.
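Returning to the Sheffer-stroke axiom above, its validity over the two-element algebra can be checked by brute force; a minimal sketch (with ∣ read as NAND, not Python's bitwise or):

```python
# Brute-force check of the single Sheffer-stroke axiom over {0, 1}:
# ((a|b)|c) | (a|((a|c)|a)) == c, where | denotes NAND.
from itertools import product

nand = lambda x, y: 1 - (x & y)

assert all(
    nand(nand(nand(a, b), c), nand(a, nand(nand(a, c), a))) == c
    for a, b, c in product((0, 1), repeat=3)
)
print("the single Sheffer-stroke axiom holds over {0, 1}")
```

Of course, such a check only confirms that the axiom is a Boolean law; that it suffices to derive all the others is a much deeper result.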
A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two-element Boolean algebra). These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used. One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language.[33] Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese". The generic or abstract form of this tautology is "if P, then P", or in the language of Boolean algebra, P → P.[citation needed] Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4. Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P). (The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus, but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.) An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions, each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.[34]
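Tautologyhood over the two-element algebra is decidable by enumeration; a minimal sketch of such a checker:

```python
# A brute-force tautology checker over {0, 1}: a formula, given as a
# function of its variables, is a tautology when it evaluates to 1 under
# every truth assignment.
from itertools import product
from inspect import signature

def implies(p, q):
    return (1 - p) | q          # P → Q as ¬P ∨ Q

def is_tautology(formula):
    arity = len(signature(formula).parameters)
    return all(formula(*vals) == 1
               for vals in product((0, 1), repeat=arity))

assert is_tautology(lambda p: implies(p, p))                 # P → P
assert is_tautology(lambda p, q: implies(p, implies(q, p)))  # P → (Q → P)
assert not is_tautology(lambda p, q: implies(p, q))          # P → Q is not
```

This exhaustive method runs in time exponential in the number of variables, which is precisely the difficulty captured by the SAT problem mentioned earlier.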
Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts: propositions, as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent. Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix the external implication ⊢ and the internal implication → in one logic is among the essential differences between sequent calculus and propositional calculus.[35] Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.[5] In the early 20th century, several electrical engineers[who?] intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.) Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low. Computers use two-value Boolean circuits for the above reasons.
The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base-two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second. Other areas where two values is a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer (is the defendant guilty or not guilty, is the proposition true or false) and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right. A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low. Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory. Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0, 1], in which case, rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law as 1 − (1 − x)(1 − y). Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth: to what extent a proposition is true, or the probability that the proposition is true.
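A minimal sketch of the fuzzy operations just described (truth values in [0, 1]; the definitions are exactly those given above):

```python
# Multi-valued extension of the Boolean operations: NOT as 1 - x,
# AND as multiplication, OR derived from De Morgan's law.

def f_not(x): return 1.0 - x
def f_and(x, y): return x * y
def f_or(x, y): return f_not(f_and(f_not(x), f_not(y)))  # = x + y - xy

print(f_and(0.8, 0.5))  # 0.4
print(f_or(0.8, 0.5))   # 0.9
print(f_not(0.8))       # 0.2 (up to floating-point rounding)
```

Restricting the inputs to 0 and 1 recovers the ordinary two-valued operations, which is why this is an extension rather than a replacement.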
The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas. Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, the meanings of these logical connectives often have the meaning of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity; for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union, while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea, which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P", the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them. Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0, 1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements. Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier, this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors, and so on. The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time: 0x60 in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc.
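These compile-time bytes follow directly from the stated constants; a quick check:

```python
# With SRC, DST, and MSK encoding the three inputs' truth tables as bytes,
# any bitwise expression over them yields the byte for that ternary
# raster operation.
SRC, DST, MSK = 0xAA, 0xCC, 0xF0

assert SRC ^ DST == 0x66          # XOR of source and destination
assert (SRC ^ DST) & MSK == 0x60  # ...then ANDed with the mask
```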
At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression. Solid modeling systems for computer-aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method, the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting, understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference: remove the elements of y from those of x. Thus, given two shapes, one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference. Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set". The following examples use a syntax supported by Google.[NB 1]
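Independently of any particular engine's syntax, the underlying set semantics of such queries can be sketched as follows (the index below is a made-up toy, not real data):

```python
# Boolean search-query semantics: each term maps to the set of pages
# containing it, and AND/OR/NOT become set operations. Hypothetical index.
index = {
    "boolean": {1, 2, 3, 5},
    "algebra": {2, 3, 4},
    "logic":   {1, 3, 4, 5},
}

pages_and = index["boolean"] & index["algebra"]   # pages with both terms
pages_or  = index["boolean"] | index["algebra"]   # pages with either term
pages_not = index["boolean"] - index["logic"]     # first term but not second

print(pages_and, pages_or, pages_not)   # {2, 3} {1, 2, 3, 4, 5} {2}
```

The same set-difference operation models the machining example above, with voxel sets in place of page sets.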
https://en.wikipedia.org/wiki/Boolean_algebra
When a quantity grows towards a singularity under a finite variation (a "finite-time singularity"), it is said to undergo hyperbolic growth.[1] More precisely, the reciprocal function 1/x has a hyperbola as a graph, and has a singularity at 0, meaning that the limit as x → 0 is infinite: any similar graph is said to exhibit hyperbolic growth. If the output of a function is inversely proportional to its input, or inversely proportional to the difference from a given value x₀, the function will exhibit hyperbolic growth, with a singularity at x₀. In the real world, hyperbolic growth is created by certain non-linear positive feedback mechanisms.[2] Like exponential growth and logistic growth, hyperbolic growth is highly nonlinear, but differs in important respects. These functions can be confused, as exponential growth, hyperbolic growth, and the first half of logistic growth are convex functions; however, their asymptotic behavior (behavior as input gets large) differs dramatically: hyperbolic growth diverges at a finite time, exponential growth diverges only as time goes to infinity, and logistic growth levels off to a finite limit. A 1960 issue of Science magazine included an article by Heinz von Foerster and his colleagues, P. M. Mora and L. W. Amiot, proposing an equation representing the best fit to the historical data on the Earth's population available in 1958. A later commentary put it this way: "Fifty years ago, Science published a study with the provocative title 'Doomsday: Friday, 13 November, A.D. 2026'. It fitted world population during the previous two millennia with P = 179 × 10⁹/(2026.9 − t)^0.99. This 'quasi-hyperbolic' equation (hyperbolic having exponent 1.00 in the denominator) projected to infinite population in 2026—and to an imaginary one thereafter." In 1975, von Hoerner suggested that von Foerster's doomsday equation can be written, without a significant loss of accuracy, in a simplified hyperbolic form (i.e. with the exponent in the denominator taken to be 1.00): N(t) = C/(t₀ − t), where N(t) is the world population at year t, C ≈ 179 × 10⁹ is a constant, and t₀ ≈ 2026.9 is the singularity year. Despite its simplicity, von Foerster's equation is very accurate in the range from 4,000,000 BP[4] to 1997 AD. For example, the doomsday equation (developed in 1958, when the Earth's population was 2,911,249,671[5]) predicts a population of 5,986,622,074 for the beginning of the year 1997; the actual figure was 5,924,787,816.[5] Having analyzed the timing of the events plotted on Ray Kurzweil's "Countdown to Singularity" graph, Andrey Korotayev arrived at a best-fit equation of the same hyperbolic form. Korotayev also analyzed the timing of the events on the list of sociotechnological phase-transition points independently compiled by Alexander Panov, and again arrived at a best-fit equation of this form. Korotayev discovered that these two equations are essentially identical with von Foerster's doomsday equation describing the world population growth. Both empirical and mathematical analyses indicate that all three hyperbolic equations describe the same global macrodevelopmental process, in which demography is indivisibly combined with technology.[4] It can be set forth as follows: technological advance → increase in the carrying capacity of the Earth → population growth → more potential inventors → acceleration of technological advance → faster growth of the Earth's carrying capacity → faster population growth → faster growth of the number of potential inventors → faster technological advance → faster growth of the Earth's carrying capacity, and so on.[1][6]
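The population check above is easy to reproduce; a minimal sketch using the quoted constants C = 179 × 10⁹ and t₀ = 2026.9:

```python
# Simplified (von Hoerner) form of the doomsday equation, N(t) = C/(t0 - t),
# evaluated at the beginning of 1997. Constants are those quoted in the text.
C, T0 = 179e9, 2026.9

def population(t):
    return C / (T0 - t)

print(f"{population(1997):,.0f}")   # 5,986,622,074, matching the quoted figure
```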
The Lorentz factor γ is defined as[7] γ = 1/√(1 − (v/c)²) = 1/√(1 − β²), where v is the relative velocity, c is the speed of light, and β = v/c. Proxima Centauri is approximately 4.27 light-years away from the Earth. From a terrestrial observer's perspective, a traveller would cover the distance to Proxima Centauri in approximately 8.54 years at half the speed of light. However, due to the Lorentz factor, the time experienced by the traveller would be shorter: 8.54/γ ≈ 7.4 years. The following graph shows the journey times for twenty runs to Proxima Centauri from the ship's viewpoint. Notice that as speeds approach the speed of light, the journey times reduce dramatically, even though the actual increments in speed appear slight. On the 20th run, at 1048575/1048576 of the speed of light, the distance shrinks to 0.0059 light-years and the traveller experiences a journey time of 2.15 days, whereas to those on Earth the ship looks almost "frozen" and the journey still takes 4.27 years, plus a couple of days. The equation describing the growth of the Lorentz factor with speed is unmistakably hyperbolic, so the Lorentz factor of a spaceship subjected to even a small but constant accelerating force must become infinite in a finite proper time. This requirement is met by assuming that a translationally accelerating spaceship loses its rest mass (which is the spaceship's resistance to its further translational acceleration along the path of flight): γ = m_rel/m₀, where m_rel is the relativistic mass and m₀ is the rest mass. At v = 0, the magnitude of the Lorentz factor is γ = m_rel/m₀ = 1/1 = 1. At v = 0.5c, the magnitude of the Lorentz factor is γ = m_rel/m₀ = 1.072/0.928 = 1.155. At v = 0.999c, the magnitude of the Lorentz factor is γ = m_rel/m₀ = 1.914/0.086 = 22.366. Following this pattern, the spaceship will, after a finite proper time, turn into a beam of photons: photons may be regarded as limiting particles whose rest mass has become zero while their Lorentz factor has become infinite. The light-speed spaceship will then cover the remaining distance to its destination in zero proper time: since no apparent time elapses when traveling at the speed of light, the spacecraft would arrive instantly and simultaneously at all locations along the path of flight. Thus, to the crew on the spacecraft, all spatial separations would collapse to zero along this path of flight. There is no relativistic dilatation, as all spatial separations are transverse to a light-speed spacecraft's flight.
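The quoted journey figures follow directly from the definition of γ; a minimal sketch:

```python
# Proper time and contracted distance for a run to Proxima Centauri
# (4.27 light-years) at speed β·c.
from math import sqrt

DIST_LY = 4.27

def lorentz(beta):
    return 1.0 / sqrt(1.0 - beta**2)

def trip(beta):
    earth_years = DIST_LY / beta               # time in the Earth frame
    ship_years = earth_years / lorentz(beta)   # proper time aboard the ship
    return earth_years, ship_years

print(trip(0.5))                   # ≈ (8.54, 7.40) years
beta20 = 1048575 / 1048576         # the 20th run quoted above
print(DIST_LY / lorentz(beta20))   # contracted distance ≈ 0.0059 ly
print(trip(beta20)[1] * 365.25)    # ship time ≈ 2.15 days
```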
<...> Thus the spacecraft would disappear after reaching light speed, followed immediately by its reappearance trillions of miles away in the proximity of the target star, when the spacecraft returns to sub-light speed (Figure 9.6). The universe's matter is falling into the universe's gravitational field: gravity rules. The moon orbiting Earth, matter falling into black holes, and the overall structure of the universe are dominated by gravity. Consequently, the universe's matter accelerates to ever greater speeds, so that its Lorentz factor hyperbolically increases to infinity, while its rest mass hyperbolically vanishes: as we go forwards in time, material weight continually changes into radiation. Conversely, as we go backwards in time, the total material weight of the universe must continually increase. At the end of the hyperbolic growth of its Lorentz factor, the universe's matter attains the speed of light: "It all just seemed unbelievably boring to me," Penrose says. Then he found something interesting within it: at the very end of the universe, the only remaining particles will be massless. That means everything that exists will travel at the speed of light, making the flow of time meaningless. So, the universe will eventually consist of relativistic kinetic energy, which is negative, i.e. hierarchically binding/enslaving: a beam of negative energy that travels into the past can be generated by the acceleration of the source to high speeds. It is seen that the relativistic kinetic energy is always negative and therefore will lower the energy levels of a bound system. This hierarchically binding/enslaving negative energy is the universe's spirit or information: remember, more binding energy means the system is more bound—has greater negative energy. The Spirit is the binding energy expressed by the word re-ligio/religion—a word that itself reflects the brokenness and fragmentation of the universe, that God is trying to heal. Szilard's explanation was accepted by the physics community, and information was accepted as a scientific concept, defined by its statistical-mechanical properties as a kind of negative energy that introduced order into a system. Thus, the hyperbolic growth of the Lorentz factor of the universe's matter hierarchically binds/enslaves or, which is the same, animates/informs the universe's matter. The sociotechnological singularity of the terrestrial animated/informed matter, expected at the end of the year 2026 AD (see Global macrodevelopment), will signify that the Lorentz factor of the universe's matter has become infinite; since the end of the year 2026 AD, the universe's matter will be falling into the universe's animating/informing gravitational field (which is the funnel-shaped gradient of matter's negative-energiedness, animateness, informedness) at the speed of light: the negative energy of the gravitational field is what allows negative entropy, equivalent to information, to grow, making the Universe a more complicated and interesting place. "It's this idea that we represent some kind of singularity, or that we announce the nearby presence of a singularity. That the evolution of life and cultural form and all that is clearly funneling toward something fairly unimaginable." —McKenna, Terence. A Weekend with Terence McKenna, August 1993. "In other words, we end the whole thing. We collapse the state vector and everything goes into a state of novelty.
What happens then I think is the universe becomes entirely made of light." —McKenna, Terence. Appreciating Imagination, 1997. "The conventions of relativity say that time slows down as one approaches the speed of light, but if one tries to imagine the point of view of a thing made of light, one must realize that what is never mentioned is that if one moves at the speed of light, there is no time whatsoever. There is an experience of time zero. <...> One has transited into the eternal mode. One is then apart from the moving image; one exists in the completion of eternity. I believe that this is what technology pushes toward." —McKenna, Terence. New Maps of Hyperspace, 1984. "What exactly is immortality? It's the negation of time. How do we negate time? By getting close to, and perhaps matching, the speed of light. If you ARE light, everything is instant." —Time, fUSION Anomaly, 1999. "And the angel that I saw standing upon the sea and upon the land lifted his hand up to heaven, and swore by him who lives forevermore, who created heaven and the things that are in it, and the sea and the things that are in it, that time shall be no more, but in the days of the voice of the seventh angel, when he begins to blow, even the mystery of God shall be finished, as he preached by his servants the prophets." —Revelation 10:5-7, New Matthew Bible. Another example of hyperbolic growth can be found in queueing theory: the average waiting time of randomly arriving customers grows hyperbolically as a function of the average load ratio of the server. The singularity in this case occurs when the average amount of work arriving to the server equals the server's processing capacity. If the processing needs exceed the server's capacity, then there is no well-defined average waiting time, as the queue can grow without bound. A practical implication of this particular example is that for highly loaded queueing systems the average waiting time can be extremely sensitive to the processing capacity. A further practical example of hyperbolic growth can be found in enzyme kinetics. When the rate of reaction (termed velocity) between an enzyme and substrate is plotted against various concentrations of the substrate, a hyperbolic plot is obtained for many simpler systems. When this happens, the enzyme is said to follow Michaelis–Menten kinetics. The function x(t) = 1/(t_c − t) exhibits hyperbolic growth with a singularity at time t_c: in the limit as t → t_c, the function goes to infinity. More generally, the function x(t) = K/(t_c − t) exhibits hyperbolic growth, where K is a scale factor. Note that this algebraic function can be regarded as an analytical solution for the differential equation dx/dt = x²/K.[1] This means that with hyperbolic growth the absolute growth rate of the variable x in the moment t is proportional to the square of the value of x in the moment t. Respectively, the quadratic-hyperbolic function looks as follows: x(t) = K/(t_c − t)².
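That the hyperbolic solution satisfies this differential equation can be checked numerically; a minimal sketch with arbitrary illustrative constants K = 2 and t_c = 10:

```python
# Numeric sanity check that x(t) = K/(tc - t) satisfies dx/dt = x^2/K:
# compare a central finite difference against x(t)^2 / K.
K, TC = 2.0, 10.0

def x(t):
    return K / (TC - t)

t, h = 7.0, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)   # numerical derivative
assert abs(dxdt - x(t)**2 / K) < 1e-6
print(f"dx/dt ≈ {dxdt:.6f}, x²/K = {x(t)**2 / K:.6f}")
```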
https://en.wikipedia.org/wiki/Hyperbolic_growth
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science with close connections to cognitive science and mathematical logic. The word automata comes from the Greek word αὐτόματος, which means "self-acting, self-willed, self-moving". An automaton (automata in plural) is an abstract self-propelled computing device which follows a predetermined sequence of operations automatically. An automaton with a finite number of states is called a finite automaton (FA) or finite-state machine (FSM). The figure on the right illustrates a finite-state machine, which is a well-known type of automaton. This automaton consists of states (represented in the figure by circles) and transitions (represented by arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function, which takes the previous state and current input symbol as its arguments. Automata theory is closely related to formal language theory. In this context, automata are used as finite representations of formal languages that may be infinite. Automata are often classified by the class of formal languages they can recognize, as in the Chomsky hierarchy, which describes a nesting relationship between major classes of automata. Automata play a major role in the theory of computation, compiler construction, artificial intelligence, parsing and formal verification. The theory of abstract automata was developed in the mid-20th century in connection with finite automata.[1] Automata theory was initially considered a branch of mathematical systems theory, studying the behavior of discrete-parameter systems. Early work in automata theory differed from previous work on systems by using abstract algebra to describe information systems rather than differential calculus to describe material systems.[2] The theory of the finite-state transducer was developed under different names by different research communities.[3] The earlier concept of the Turing machine was also included in the discipline, along with new forms of infinite-state automata such as pushdown automata. 1956 saw the publication of Automata Studies, which collected work by scientists including Claude Shannon, W. Ross Ashby, John von Neumann, Marvin Minsky, Edward F. Moore, and Stephen Cole Kleene.[4] With the publication of this volume, "automata theory emerged as a relatively autonomous discipline".[5] The book included Kleene's description of the set of regular events, or regular languages, and a relatively stable measure of complexity in Turing machine programs by Shannon.[6] In the same year, Noam Chomsky described the Chomsky hierarchy, a correspondence between automata and formal grammars,[7] and Ross Ashby published An Introduction to Cybernetics, an accessible textbook explaining automata and information using basic set theory. The study of linear bounded automata led to the Myhill–Nerode theorem,[8] which gives a necessary and sufficient condition for a formal language to be regular, and an exact count of the number of states in a minimal machine for the language. The pumping lemma for regular languages, also useful in regularity proofs, was proven in this period by Michael O.
The pumping lemma for regular languages, also useful in regularity proofs, was proven in this period by Michael O. Rabin and Dana Scott, along with the computational equivalence of deterministic and nondeterministic finite automata.[9] In the 1960s, a body of algebraic results known as "structure theory" or "algebraic decomposition theory" emerged, which dealt with the realization of sequential machines from smaller machines by interconnection.[10] While any finite automaton can be simulated using a universal gate set, this requires that the simulating circuit contain loops of arbitrary complexity. Structure theory deals with the "loop-free" realizability of machines.[5] The theory of computational complexity also took shape in the 1960s.[11][12] By the end of the decade, automata theory came to be seen as "the pure mathematics of computer science".[5] What follows is a general definition of an automaton, which restricts a broader definition of a system to one viewed as acting in discrete time-steps, with its state behavior and outputs defined at each step by unchanging functions of only its state and input.[5] An automaton runs when it is given some sequence of inputs in discrete (individual) time steps (or just steps). An automaton processes one input picked from a set of symbols or letters, which is called an input alphabet. The symbols received by the automaton as input at any step are a sequence of symbols called words. An automaton has a set of states. At each moment during a run of the automaton, the automaton is in one of its states. When the automaton receives new input, it moves to another state (or transitions) based on a transition function that takes the previous state and current input symbol as parameters. At the same time, another function called the output function produces symbols from the output alphabet, also according to the previous state and current input symbol. The automaton reads the symbols of the input word and transitions between states until the word is read completely, if it is finite in length, at which point the automaton halts. A state at which the automaton halts is called the final state. To investigate the possible state/input/output sequences in an automaton using formal language theory, a machine can be assigned a starting state and a set of accepting states. Then, depending on whether a run starting from the starting state ends in an accepting state, the automaton can be said to accept or reject an input sequence. The set of all the words accepted by an automaton is called the language recognized by the automaton. A familiar example of a machine recognizing a language is an electronic lock, which accepts or rejects attempts to enter the correct code (see the code sketch below). Automata are defined to study useful machines under mathematical formalism. So the definition of an automaton is open to variations according to the "real world machine" that we want to model using the automaton. People have studied many variations of automata. The following are some popular variations in the definition of different components of automata. Different combinations of the above variations produce many classes of automata. Automata theory is a subject matter that studies properties of various types of automata. For example, the following questions are studied about a given type of automata. Automata theory also studies the existence or nonexistence of any effective algorithms to solve problems similar to the following list: The following is an incomplete list of types of automata.
Normally automata theory describes the states of abstract machines, but there are discrete automata, analog automata or continuous automata, or hybrid discrete-continuous automata, which use digital data, analog data or continuous time, or digital and analog data, respectively. The following is an incomplete hierarchy in terms of the powers of different types of virtual machines. The hierarchy reflects the nested categories of languages the machines are able to accept.[14]

Deterministic Finite Automaton (DFA)
|| (same power)
Nondeterministic Finite Automaton (NFA)
∩ (above is weaker, below is stronger)
Deterministic Push Down Automaton (DPDA-I) with 1 push-down store
∩
Nondeterministic Push Down Automaton (NPDA-I) with 1 push-down store
∩
Linear Bounded Automaton (LBA)
∩
Deterministic Push Down Automaton (DPDA-II) with 2 push-down stores
||
Nondeterministic Push Down Automaton (NPDA-II) with 2 push-down stores
||
Deterministic Turing Machine (DTM)
||
Nondeterministic Turing Machine (NTM)
||
Probabilistic Turing Machine (PTM)
||
Multitape Turing Machine (MTM)
||
Multidimensional Turing Machine

Each model in automata theory plays important roles in several applied areas. Finite automata are used in text processing, compilers, and hardware design. Context-free grammars (CFGs) are used in programming languages and artificial intelligence. Originally, CFGs were used in the study of human languages. Cellular automata are used in the field of artificial life, the most famous example being John Conway's Game of Life. Some other examples which could be explained using automata theory in biology include mollusk and pine cone growth and pigmentation patterns. Going further, a theory suggesting that the whole universe is computed by some sort of a discrete automaton is advocated by some scientists. The idea originated in the work of Konrad Zuse, and was popularized in America by Edward Fredkin. Automata also appear in the theory of finite fields: the set of irreducible polynomials that can be written as compositions of degree-two polynomials is in fact a regular language.[15] Another problem for which automata can be used is the induction of regular languages. Automata simulators are pedagogical tools used to teach, learn and research automata theory. An automata simulator takes as input the description of an automaton and then simulates its working for an arbitrary input string. The description of the automaton can be entered in several ways. An automaton can be defined in a symbolic language, or its specification may be entered in a predesigned form, or its transition diagram may be drawn by clicking and dragging the mouse. Well-known automata simulators include Turing's World, JFLAP, VAS, TAGS and SimStudio.[16] One can define several distinct categories of automata[17] following the automata classification into different types described in the previous section. The mathematical category of deterministic automata, sequential machines or sequential automata, and Turing machines with automata homomorphisms defining the arrows between automata is a Cartesian closed category;[18] it has both categorical limits and colimits. An automata homomorphism maps a quintuple of an automaton Ai onto the quintuple of another automaton Aj.
Automata homomorphisms can also be considered as automata transformations or as semigroup homomorphisms, when the state space, S, of the automaton is defined as a semigroup Sg. Monoids are also considered as a suitable setting for automata in monoidal categories.[19][20][21] One could also define a variable automaton, in the sense of Norbert Wiener in his book on The Human Use of Human Beings, via the endomorphisms {\displaystyle A_{i}\to A_{i}}. Then one can show that such variable automata homomorphisms form a mathematical group. In the case of non-deterministic, or other complex kinds of automata, the latter set of endomorphisms may become, however, a variable automaton groupoid. Therefore, in the most general case, categories of variable automata of any kind are categories of groupoids or groupoid categories. Moreover, the category of reversible automata is then a 2-category, and also a subcategory of the 2-category of groupoids, or the groupoid category.
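To make the acceptance definition concrete, here is a minimal sketch of the electronic-lock example mentioned earlier: a deterministic finite automaton over the digit alphabet that accepts exactly one code word. The three-digit code, the class name and the integer state encoding are invented for illustration.

// A DFA recognizing the single word "427" over digits: states 0..3 count
// how many correct digits have been seen so far; state -1 is a dead (reject) state.
public class LockDfa {
    private static final String CODE = "427"; // hypothetical lock code

    static boolean accepts(String input) {
        int state = 0; // start state
        for (char c : input.toCharArray()) {
            if (state >= 0 && state < CODE.length() && c == CODE.charAt(state)) {
                state++;    // transition toward the accepting state
            } else {
                state = -1; // any wrong symbol jumps to the dead state
            }
        }
        return state == CODE.length(); // accept iff the run halts in the accepting state
    }

    public static void main(String[] args) {
        System.out.println(accepts("427"));  // true: the lock opens
        System.out.println(accepts("426"));  // false: wrong last digit
        System.out.println(accepts("4270")); // false: extra input leaves the accepting state
    }
}

The language recognized by this automaton is the one-word set {"427"}; every other input word ends the run in a non-accepting state.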
https://en.wikipedia.org/wiki/Automata_theory
In mathematics, particularly in functional analysis and topology, closed graph is a property of functions.[1][2] A function f : X → Y between topological spaces has a closed graph if its graph is a closed subset of the product space X × Y. A related property is open graph.[3] This property is studied because there are many theorems, known as closed graph theorems, giving conditions under which a function with a closed graph is necessarily continuous. One particularly well-known class of closed graph theorems is the closed graph theorems in functional analysis. We give the more general definition of when a Y-valued function or set-valued function defined on a subset S of X has a closed graph, since this generality is needed in the study of closed linear operators that are defined on a dense subspace S of a topological vector space X (and not necessarily defined on all of X). This particular case is one of the main reasons why functions with closed graphs are studied in functional analysis. Note that we may define an open graph, a sequentially closed graph, and a sequentially open graph in similar ways. When reading literature in functional analysis, if f : X → Y is a linear map between topological vector spaces (TVSs) (e.g. Banach spaces), then "f is closed" will almost always mean the following: Otherwise, especially in literature about point-set topology, "f is closed" may instead mean the following: These two definitions of "closed map" are not equivalent. If it is unclear, then it is recommended that a reader check how "closed map" is defined by the literature they are reading. Throughout, let X and Y be topological spaces. If f : X → Y is a function, then the following are equivalent: and if Y is a Hausdorff space that is compact, then we may add to this list: and if both X and Y are first-countable spaces, then we may add to this list: If f : X → Y is a function, then the following are equivalent: If F : X → 2^Y is a set-valued function between topological spaces X and Y, then the following are equivalent: and if Y is compact and Hausdorff, then we may add to this list: and if both X and Y are metrizable spaces, then we may add to this list: Throughout, let X and Y be topological spaces, with X × Y endowed with the product topology. If f : X → Y is a function, then it is said to have a closed graph if it satisfies any of the following equivalent conditions: and if Y is a Hausdorff compact space, then we may add to this list: and if both X and Y are first-countable spaces, then we may add to this list: Function with a sequentially closed graph: if f : X → Y is a function, then the following are equivalent: Conditions that guarantee that a function with a closed graph is necessarily continuous are called closed graph theorems. Closed graph theorems are of particular interest in functional analysis, where there are many theorems giving conditions under which a linear map with a closed graph is necessarily continuous. For examples in functional analysis, see continuous linear operator.
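A standard illustration (an example supplied here, not taken from the text above) of why a closed graph alone does not force continuity is the function

{\displaystyle f:\mathbb {R} \to \mathbb {R} ,\qquad f(x)={\begin{cases}1/x&{\text{if }}x\neq 0,\\0&{\text{if }}x=0.\end{cases}}}

Its graph is the union of the hyperbola {(x, y) : xy = 1}, which is closed in the plane as the preimage of the closed set {1} under the continuous map (x, y) ↦ xy, and the single point {(0, 0)}; a union of two closed sets is closed, yet f is not continuous at 0. The closed graph theorems avoid such examples by imposing extra hypotheses (here f is not linear, so the functional-analytic theorems do not apply).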
https://en.wikipedia.org/wiki/Closed_graph
Cluster(s) may refer to:
https://en.wikipedia.org/wiki/Cluster_(disambiguation)
In software development, the programming language Java was historically considered slower than the fastest third-generation typed languages such as C and C++.[1] In contrast to those languages, Java compiles by default to a Java Virtual Machine (JVM) with operations distinct from those of the actual computer hardware. Early JVM implementations were interpreters; they simulated the virtual operations one by one rather than translating them into machine code for direct hardware execution. Since the late 1990s, the execution speed of Java programs improved significantly via the introduction of just-in-time compilation (JIT) (in 1997 for Java 1.1),[2][3][4] the addition of language features supporting better code analysis, and optimizations in the JVM (such as HotSpot becoming the default for Sun's JVM in 2000). Sophisticated garbage collection strategies were also an area of improvement. Hardware execution of Java bytecode, such as that offered by ARM's Jazelle, was explored but not deployed. The performance of a Java-bytecode-compiled Java program depends on how optimally its given tasks are managed by the host Java virtual machine (JVM), and how well the JVM exploits the features of the computer hardware and operating system (OS) in doing so. Thus, any Java performance test or comparison always has to report the version, vendor, OS and hardware architecture of the JVM used. In a similar manner, the performance of the equivalent natively compiled program will depend on the quality of its generated machine code, so the test or comparison also has to report the name, version and vendor of the compiler used, and its activated compiler optimization directives. Many optimizations have improved the performance of the JVM over time. However, although Java was often the first virtual machine to implement them successfully, they have often been used in other similar platforms as well. Early JVMs always interpreted Java bytecodes. This had a large performance penalty, between a factor of 10 and 20, for Java versus C in average applications.[5] To combat this, a just-in-time (JIT) compiler was introduced into Java 1.1. Due to the high cost of compiling, an added system called HotSpot was introduced in Java 1.2 and was made the default in Java 1.3. Using this framework, the Java virtual machine continually analyses program performance for hot spots which are executed frequently or repeatedly. These are then targeted for optimizing, leading to high-performance execution with a minimum of overhead for less performance-critical code.[6][7] Some benchmarks show a 10-fold speed gain by this means.[8] However, due to time constraints, the compiler cannot fully optimize the program, and thus the resulting program is slower than native code alternatives.[9][10] Adaptive optimizing is a method in computer science that performs dynamic recompilation of parts of a program based on the current execution profile. With a simple implementation, an adaptive optimizer may simply make a trade-off between just-in-time compiling and interpreting instructions. At another level, adaptive optimizing may exploit local data conditions to optimize away branches and use inline expansion. A Java virtual machine like HotSpot can also deoptimize code formerly JITed. This allows performing aggressive (and potentially unsafe) optimizations, while still being able to later deoptimize the code and fall back to a safe path.[11][12] The 1.0 and 1.1 Java virtual machines (JVMs) used a mark-sweep collector, which could fragment the heap after a garbage collection.
Starting with Java 1.2, the JVMs changed to a generational collector, which has much better defragmentation behaviour.[13] Modern JVMs use a variety of methods that have further improved garbage collection performance.[14] Compressed Oops allow Java 5.0+ to address up to 32 GB of heap with 32-bit references. Java does not support access to individual bytes, only objects, which are 8-byte aligned by default. Because of this, the lowest 3 bits of a heap reference will always be 0. By lowering the resolution of 32-bit references to 8-byte blocks, the addressable space can be increased to 32 GB. This significantly reduces memory use compared to using 64-bit references, as Java uses references much more than some languages like C++. Java 8 supports larger alignments, such as 16-byte alignment, to support up to 64 GB with 32-bit references. Before executing a class, the Sun JVM verifies its Java bytecodes (see bytecode verifier). This verification is performed lazily: classes' bytecodes are only loaded and verified when the specific class is loaded and prepared for use, and not at the beginning of the program. However, as the Java class libraries are also regular Java classes, they must also be loaded when they are used, which means that the start-up time of a Java program is often longer than for C++ programs, for example. A method named split-time verification, first introduced in the Java Platform, Micro Edition (J2ME), is used in the JVM since Java version 6. It splits the verification of Java bytecode into two phases:[15] In practice this method works by capturing knowledge that the Java compiler has of class flow and annotating the compiled method bytecodes with a synopsis of the class flow information. This does not make runtime verification appreciably less complex, but does allow some shortcuts. Java is able to manage multithreading at the language level. Multithreading allows programs to perform multiple processes concurrently, thus improving the performance of programs running on computer systems with multiple processors or cores. Also, a multithreaded application can remain responsive to input, even while performing long-running tasks. However, programs that use multithreading need to take extra care with objects shared between threads, locking access to shared methods or blocks when they are used by one of the threads. Locking a block or an object is a time-consuming operation due to the nature of the underlying operating-system-level operation involved (see concurrency control and lock granularity). As the Java library does not know which methods will be used by more than one thread, the standard library always locks blocks when needed in a multithreaded environment. Before Java 6, the virtual machine always locked objects and blocks when asked to by the program, even if there was no risk of an object being modified by two different threads at once. For example, in the case sketched below, a local Vector was locked before each of the add operations to ensure that it would not be modified by other threads (Vector is synchronized), but because it is strictly local to the method this is needless. Starting with Java 6, code blocks and objects are locked only when needed,[16] so in such a case the virtual machine would not lock the Vector object at all.
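The passage above originally pointed at a code example; the following is a minimal sketch of the kind of method it describes, with the method and string values invented for illustration:

import java.util.Vector;

public class LocalVectorExample {
    // Vector is synchronized, so each add() acquires the Vector's lock.
    // Before Java 6 the JVM always took that lock; a modern JVM can prove
    // the Vector never escapes this method and elide the locking entirely.
    static String getNames() {
        Vector<String> v = new Vector<>();
        v.add("Me");
        v.add("This");
        v.add("You");
        return v.toString();
    }

    public static void main(String[] args) {
        System.out.println(getNames()); // [Me, This, You]
    }
}

Because the Vector is reachable only from the local variable v, no other thread can ever observe it, which is exactly the condition under which the lock operations are pure overhead.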
Since version 6u23, Java includes support for escape analysis.[17] Before Java 6, allocation of registers was very primitive in the client virtual machine (register assignments did not live across blocks), which was a problem in CPU designs with fewer processor registers available, as in x86s. If there are no more registers available for an operation, the compiler must copy from register to memory (or memory to register), which takes time (registers are significantly faster to access). However, the server virtual machine used a graph-coloring allocator and did not have this problem. An optimization of register allocation was introduced in Sun's JDK 6;[18] it was then possible to use the same registers across blocks (when applicable), reducing accesses to memory. This led to a reported performance gain of about 60% in some benchmarks.[19] Class data sharing (called CDS by Sun) is a mechanism which reduces the startup time for Java applications, and also reduces memory footprint. When the JRE is installed, the installer loads a set of classes from the system JAR file (the JAR file holding all the Java class library, called rt.jar) into a private internal representation, and dumps that representation to a file, called a "shared archive". During subsequent JVM invocations, this shared archive is memory-mapped in, saving the cost of loading those classes and allowing much of the JVM's metadata for these classes to be shared among multiple JVM processes.[20] The corresponding improvement in start-up time is more obvious for small programs.[21] Apart from the improvements listed here, each release of Java introduced many performance improvements in the JVM and the Java application programming interface (API).
JDK 1.1.6: First just-in-time compilation (Symantec's JIT compiler).[2][22]
J2SE 1.2: Use of a generational collector.
J2SE 1.3: Just-in-time compiling by HotSpot.
J2SE 1.4: See here for a Sun overview of performance improvements between the 1.3 and 1.4 versions.
Java SE 5.0: Class data sharing.[23]
Java SE 6: Other improvements; see also the Sun overview of performance improvements between Java 5 and Java 6.[26]
Several performance improvements have been released for Java 7. Further performance improvements were planned for an update of Java 6 or Java 7.[31] Objectively comparing the performance of a Java program and an equivalent one written in another language such as C++ needs a carefully and thoughtfully constructed benchmark which compares programs completing identical tasks. The target platform of Java's bytecode compiler is the Java platform, and the bytecode is either interpreted or compiled into machine code by the JVM. Other compilers almost always target a specific hardware and software platform, producing machine code that will stay virtually unchanged during execution. Very different and hard-to-compare scenarios arise from these two different approaches: static vs. dynamic compilations and recompilations, the availability of precise information about the runtime environment, and others. Java is often compiled just-in-time at runtime by the Java virtual machine, but may also be compiled ahead-of-time, as is C++. When compiled just-in-time, the micro-benchmarks of The Computer Language Benchmarks Game indicate the following about its performance:[38] Benchmarks often measure performance for small, numerically intensive programs. In some rare real-life programs, Java out-performs C. One example is the benchmark of Jake2 (a clone of Quake II written in Java by translating the original GPL C code).
The Java 5.0 version performs better in some hardware configurations than its C counterpart.[42] While it is not specified how the data was measured (for example, whether the original Quake II executable compiled in 1997 was used, which may be considered bad as current C compilers may achieve better optimizations for Quake), it shows how the same Java source code can get a huge speed boost just by updating the VM, something impossible to achieve with a 100% static approach. For other programs, the C++ counterpart can, and usually does, run significantly faster than the Java equivalent. A benchmark performed by Google in 2011 showed a factor of 10 between C++ and Java.[43] At the other extreme, an academic benchmark performed in 2012 with a 3D modelling algorithm showed the Java 6 JVM being from 1.09 to 1.91 times slower than C++ under Windows.[44] Some optimizations that are possible in Java and similar languages may not be possible in certain circumstances in C++:[45] The JVM is also able to perform processor-specific optimizations or inline expansion. And the ability to deoptimize code already compiled or inlined sometimes allows it to perform more aggressive optimizations than those performed by statically typed languages when external library functions are involved.[46][47] Results for microbenchmarks between Java and C++ depend highly on which operations are compared. For example, when comparing with Java 5.0: The scalability and performance of Java applications on multi-core systems is limited by the object allocation rate. This effect is sometimes called an "allocation wall".[54] However, in practice, modern garbage collector algorithms use multiple cores to perform garbage collection, which to some degree alleviates this problem. Some garbage collectors are reported to sustain allocation rates of over a gigabyte per second,[55] and there exist Java-based systems that have no problems scaling to several hundreds of CPU cores and heaps sized several hundreds of GB.[56] Automatic memory management in Java allows for efficient use of lockless and immutable data structures that are extremely hard or sometimes impossible to implement without some kind of garbage collection. Java offers a number of such high-level structures in its standard library, in the java.util.concurrent package, while many languages historically used for high-performance systems, like C or C++, still lack them. Java startup time is often much slower than that of many languages, including C, C++, Perl or Python, because many classes (and first of all classes from the platform class libraries) must be loaded before being used. When compared against similar popular runtimes, for small programs running on a Windows machine, the startup time appears to be similar to Mono's and a little slower than .NET's.[57] It seems that much of the startup time is due to input-output (IO) bound operations rather than JVM initialization or class loading (the rt.jar class data file alone is 40 MB, and the JVM must seek much data in this big file).[27] Some tests showed that although the new split bytecode verification method improved class loading by roughly 40%, it only realized about a 5% startup improvement for large programs.[58] Albeit a small improvement, it is more visible in small programs that perform a simple operation and then exit, because the Java platform data loading can represent many times the load of the actual program's operation.
Starting with Java SE 6 Update 10, the Sun JRE comes with a Quick Starter that preloads class data at OS startup to get data from the disk cache rather than from the disk. Excelsior JET approaches the problem from the other side. Its Startup Optimizer reduces the amount of data that must be read from the disk on application startup, and makes the reads more sequential. In November 2004, Nailgun, a "client, protocol, and server for running Java programs from the command line without incurring the JVM startup overhead", was publicly released,[59] introducing for the first time an option for scripts to use a JVM as a daemon, for running one or more Java applications with no JVM startup overhead. The Nailgun daemon is insecure: "all programs are run with the same permissions as the server". Where multi-user security is needed, Nailgun is inappropriate without special precautions. Scripts where per-application JVM startup dominates resource use see one to two order of magnitude runtime performance improvements.[60] Java memory use is much higher than C++'s memory use because: In most cases a C++ application will consume less memory than an equivalent Java application, due to the large overhead of Java's virtual machine, class loading and automatic memory resizing. For programs in which memory is a critical factor in choosing between languages and runtime environments, a cost/benefit analysis is needed. Performance of trigonometric functions is bad compared to C, because Java has strict specifications for the results of mathematical operations, which may not correspond to the underlying hardware implementation.[65] On the x87 floating point subset, Java since 1.4 does argument reduction for sin and cos in software,[66] causing a big performance hit for values outside the range.[67] The Java Native Interface incurs a high overhead, making it costly to cross the boundary between code running on the JVM and native code.[68][69][70] Java Native Access (JNA) provides Java programs easy access to native shared libraries (dynamic-link libraries (DLLs) on Windows) via Java code only, with no JNI or native code. This functionality is comparable to Windows' Platform/Invoke and Python's ctypes. Access is dynamic at runtime, without code generation. But it has a cost, and JNA is usually slower than JNI.[71] Swing has been perceived as slower than native widget toolkits, because it delegates the rendering of widgets to the pure Java 2D API. However, benchmarks comparing the performance of Swing versus the Standard Widget Toolkit, which delegates the rendering to the native GUI libraries of the operating system, show no clear winner, and the results greatly depend on the context and the environments.[72] Additionally, the newer JavaFX framework, intended to replace Swing, addresses many of Swing's inherent issues. Some people believe that Java performance for high performance computing (HPC) is similar to Fortran's on compute-intensive benchmarks, but that JVMs still have scalability issues for performing intensive communication on a grid computing network.[73] However, high performance computing applications written in Java have won benchmark competitions. In 2008[74] and 2009,[75][76] an Apache Hadoop (an open-source high performance computing project written in Java) based cluster was able to sort a terabyte and a petabyte of integers the fastest.
The hardware setup of the competing systems was not fixed, however.[77][78] Programs in Java start more slowly than those in other compiled languages.[79][80] Thus, some online judge systems, notably those hosted by Chinese universities, use longer time limits for Java programs[81][82][83][84][85] to be fair to contestants using Java.
https://en.wikipedia.org/wiki/Java_performance
Moving least squares is a method of reconstructing continuous functions from a set of unorganized point samples via the calculation of a weighted least squares measure biased towards the region around the point at which the reconstructed value is requested. In computer graphics, the moving least squares method is useful for reconstructing a surface from a set of points. Often it is used to create a 3D surface from a point cloud through either downsampling or upsampling. In numerical analysis, to handle contributions of geometry where it is difficult to obtain discretizations, the moving least squares methods have also been used and generalized to solve PDEs on curved surfaces and other geometries.[1][2][3] This includes numerical methods developed for curved surfaces for solving scalar parabolic PDEs[1][3] and vector-valued hydrodynamic PDEs.[2] In machine learning, moving least squares methods have also been used to develop model classes and learning methods. This includes function regression methods[4] and neural network function and operator regression approaches, such as GMLS-Nets.[5] Consider a function {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } and a set of sample points {\displaystyle S=\{(x_{i},f_{i})\mid f(x_{i})=f_{i}\}}. Then, the moving least squares approximation of degree m at the point x is {\displaystyle {\tilde {p}}(x)}, where {\displaystyle {\tilde {p}}} minimizes the weighted least-squares error {\displaystyle \sum _{i}(p(x_{i})-f_{i})^{2}\,\theta (\|x-x_{i}\|)} over all polynomials p of degree m in {\displaystyle \mathbb {R} ^{n}}. Here {\displaystyle \theta (s)} is the weight, and it tends to zero as {\displaystyle s\to \infty }. In the example {\displaystyle \theta (s)=e^{-s^{2}}}. The smooth interpolator of "order 3" is a quadratic interpolator.
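A minimal sketch of the simplest (degree-0) case in one dimension, using the Gaussian weight θ(s) = e^{−s²} from the example above; everything else (names, sample data) is invented for illustration. With constant polynomials p(x) = a, setting the derivative of the weighted error to zero gives the θ-weighted average of the samples:

// Degree-0 moving least squares in 1D: minimizing sum_i (a - f_i)^2 * theta(|x - x_i|)
// over constants a yields a = (sum_i w_i f_i) / (sum_i w_i), with w_i = theta(|x - x_i|).
public class MovingLeastSquares {
    static double theta(double s) { return Math.exp(-s * s); } // weight from the text

    static double mls0(double x, double[] xs, double[] fs) {
        double num = 0, den = 0;
        for (int i = 0; i < xs.length; i++) {
            double w = theta(x - xs[i]);
            num += w * fs[i];
            den += w;
        }
        return num / den; // the minimizing constant, i.e. the MLS value at x
    }

    public static void main(String[] args) {
        double[] xs = {0.0, 1.0, 2.0, 3.0};
        double[] fs = {0.0, 1.0, 4.0, 9.0}; // samples of f(x) = x^2
        System.out.println(mls0(1.5, xs, fs)); // value dominated by the nearby samples
    }
}

Higher-degree MLS works the same way, except that each evaluation point requires solving a small weighted normal-equations system instead of a single weighted average.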
https://en.wikipedia.org/wiki/Moving_least_squares
The nothing to hide argument is a logical fallacy which states that individuals have no reason to fear or oppose surveillance programs unless they are afraid it will uncover their own illicit activities. An individual using this argument may claim that an average person should not worry about government surveillance, as they would have "nothing to hide".[1] An early instance of this argument was referenced by Henry James in his 1888 novel, The Reverberator: If these people had done bad things they ought to be ashamed of themselves and he couldn't pity them, and if they hadn't done them there was no need of making such a rumpus about other people knowing. Upton Sinclair also referenced a similar argument in his book The Profits of Religion, published in 1917: Not merely was my own mail opened, but the mail of all my relatives and friends — people residing in places as far apart as California and Florida. I recall the bland smile of a government official to whom I complained about this matter: "If you have nothing to hide you have nothing to fear." My answer was that a study of many labor cases had taught me the methods of the agent provocateur. He is quite willing to take real evidence if he can find it; but if not, he has familiarized himself with the affairs of his victim, and can make evidence which will be convincing when exploited by the yellow press.[2] The motto "If you've got nothing to hide, you've got nothing to fear" has been used in defense of the closed-circuit television program practiced in the United Kingdom.[3] This argument is commonly used in discussions regarding privacy. Legal scholar Geoffrey Stone said that the use of the argument is "all-too-common".[3] Bruce Schneier, a data security expert and cryptographer, described it as the "most common retort against privacy advocates."[3] Colin J. Bennett, author of The Privacy Advocates, said that an advocate of privacy often "has to constantly refute" the argument.[4] Bennett explained that most people "go through their daily lives believing that surveillance processes are not directed at them, but at the miscreants and wrongdoers" and that "the dominant orientation is that mechanisms of surveillance are directed at others", despite "evidence that the monitoring of individual behavior has become routine and everyday". An ethnographic study by Ana Viseu, Andrew Clement, and Jane Aspinall revealed that individuals with higher socioeconomic status were not as concerned by surveillance as their counterparts.[5] In another study regarding privacy-enhancing technology,[6] Viseu et al. noticed a complacency regarding user privacy. Both studies attributed this attitude to the nothing to hide argument. A qualitative study conducted for the government of the United Kingdom around 2003[7] found that self-employed men initially used the "nothing to hide" argument before shifting to an argument in which they perceived surveillance to be a nuisance instead of a threat.[8] Viseu et al. said that the argument "has been well documented in the privacy literature as a stumbling block to the development of pragmatic privacy protection strategies, and it, too, is related to the ambiguous and symbolic nature of the term 'privacy' itself."[6] They explained that privacy is an abstract concept and people only become concerned with it once their privacy is gone.
Furthermore, they compare a loss of privacy with people knowing that ozone depletion and global warming are negative developments, but that "the immediate gains of driving the car to work or putting on hairspray outweigh the often invisible losses of polluting the environment." Whistleblower and anti-surveillance advocate Edward Snowden remarked that "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."[9] From his perspective, governments are obligated to protect citizens' right to privacy, and people who argue in favor of the nothing to hide argument are too willing to accept government infringement upon those rights. Daniel J. Solove stated in an article for The Chronicle of Higher Education that he opposes the argument. He was concerned that without privacy rights, governments could do damage to citizens by leaking sensitive information, or use information about a person to deny access to services, even if that person has not actually committed any crimes. Solove also wrote that a government can cause damage to an individual's personal life by making errors:[3] "When engaged directly, the nothing-to-hide argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government data collection and use beyond surveillance and disclosure, the nothing-to-hide argument, in the end, has nothing to say." Adam D. Moore, author of Privacy Rights: Moral and Legal Foundations, argued that "it is the view that rights are resistant to cost/benefit or consequentialist sort of arguments. Here we are rejecting the view that privacy interests are the sorts of things that can be traded for security."[10] He also stated that surveillance can disproportionately affect certain groups in society based on appearance, ethnicity, sexuality, and religion. Cryptographer and computer security expert Bruce Schneier expressed opposition to the nothing to hide argument, citing a statement widely attributed to Cardinal Richelieu:[11] "Give me six lines written by the hand of the most honest man, I'll find enough to hang him." This metaphor is meant to illustrate that with even a small amount of information about an individual, an entity such as a government can find a way to prosecute or blackmail them.[12] Schneier also argued that the actual choice is between "liberty versus control", rather than "security versus privacy".[12] Philosopher and psychoanalyst Emilio Mordini argued that the "nothing to hide" argument is inherently paradoxical, because people do not need to have "something to hide" in order to be hiding "something". Mordini makes the point that the content of what is hidden is not necessarily relevant; instead, he argues that it is necessary to have an intimate area which can be both hidden and access-restricted, because, from a psychological perspective, people become individuals when they discover that it is possible to hide something from others.[13] Julian Assange, founder of WikiLeaks, agreed with Jacob Appelbaum and remarked that "Mass surveillance is a mass structural change.
When society goes bad, it's going to take you with it, even if you are the blandest person on earth."[14] Law professor Ignacio Cofone argued that the argument is mistaken on its own terms, because whenever people disclose relevant information to others, they also must disclose irrelevant information, and this irrelevant information has privacy costs and can lead to discrimination or other harmful effects.[15][16] Alex Winter, director of the documentary Deep Web: The Untold Story of Bitcoin and the Silk Road, stated in his 2015 TED Talk: "I don't accept the idea that if we have nothing to hide we have nothing to fear. Privacy serves a purpose. It's why we have blinds on our windows and a door on our bathroom."[17]
https://en.wikipedia.org/wiki/Nothing_to_hide_argument
The Schmidt–Samoa cryptosystem is an asymmetric cryptographic technique whose security, like Rabin's, depends on the difficulty of integer factorization. Unlike Rabin, this algorithm does not produce an ambiguity in the decryption, at a cost of encryption speed. Key generation works as follows: pick two large distinct primes p and q, compute N = p²q, and compute d as the inverse of N modulo lcm(p − 1, q − 1). Now N is the public key and d is the private key. To encrypt a message m we compute the ciphertext as {\displaystyle c=m^{N}\mod N.} To decrypt a ciphertext c we compute the plaintext as {\displaystyle m=c^{d}\mod pq,} which, as for Rabin and RSA, can be computed with the Chinese remainder theorem. Example: with p = 7 and q = 11, we get N = 7² · 11 = 539 and d = N⁻¹ mod lcm(6, 10) = 539⁻¹ mod 30 = 29. Encrypting m = 32 gives c = 32^539 mod 539 = 373. Now to verify: 373^29 mod 77 = 32 = m. The algorithm, like Rabin, is based on the difficulty of factoring the modulus N, which is a distinct advantage over RSA. That is, it can be shown that if there exists an algorithm that can decrypt arbitrary messages, then this algorithm can be used to factor N. The algorithm processes decryption as fast as Rabin and RSA; however, it has much slower encryption, since the sender must compute a full exponentiation. Since encryption uses a fixed known exponent, an addition chain may be used to optimize the encryption process. The cost of producing an optimal addition chain can be amortized over the life of the public key, that is, it need only be computed once and cached.
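A runnable sketch of the scheme using the toy parameters from the example above (the tiny primes are for illustration only; a real deployment would use large random primes):

import java.math.BigInteger;

// Schmidt-Samoa with the toy parameters p = 7, q = 11 from the worked example.
public class SchmidtSamoaDemo {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(7), q = BigInteger.valueOf(11);
        BigInteger N = p.multiply(p).multiply(q);                // public key: N = p^2 * q = 539
        BigInteger pm1 = p.subtract(BigInteger.ONE), qm1 = q.subtract(BigInteger.ONE);
        BigInteger lcm = pm1.divide(pm1.gcd(qm1)).multiply(qm1); // lcm(p-1, q-1) = 30
        BigInteger d = N.modInverse(lcm);                        // private key: d = N^-1 mod 30 = 29
        BigInteger pq = p.multiply(q);                           // decryption modulus: pq = 77

        BigInteger m = BigInteger.valueOf(32);                   // plaintext, must be < pq
        BigInteger c = m.modPow(N, N);                           // encrypt: c = m^N mod N = 373
        BigInteger r = c.modPow(d, pq);                          // decrypt: m = c^d mod pq = 32

        System.out.println("N=" + N + " d=" + d + " c=" + c + " recovered m=" + r);
    }
}

Decryption works because N·d ≡ 1 (mod lcm(p − 1, q − 1)), so m^(N·d) ≡ m (mod pq) for any m < pq; a production implementation would apply the Chinese remainder theorem to speed up the modPow in decryption, as the text notes.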
https://en.wikipedia.org/wiki/Schmidt%E2%80%93Samoa_cryptosystem
Role and reference grammar (RRG) is a model of grammar developed by William A. Foley and Robert Van Valin, Jr. in the 1980s, which incorporates many of the points of view of current functional grammar theories. In RRG, the description of a sentence in a particular language is formulated in terms of (a) its logical (semantic) structure and communicative functions, and (b) the grammatical procedures that are available in the language for the expression of these meanings. Among the main features of RRG are the use of lexical decomposition, based upon the predicate semantics of David Dowty (1979), an analysis of clause structure, and the use of a set of thematic roles organized into a hierarchy in which the highest-ranking roles are 'Actor' (for the most active participant) and 'Undergoer'. RRG's practical approach to language is demonstrated in the multilingual Natural Language Understanding (NLU) system of cognitive scientist John Ball. In 2012, Ball integrated his Patom Theory with Role and Reference Grammar, producing a language-independent NLU system breaking down language by meaning.
https://en.wikipedia.org/wiki/Role_and_reference_grammar
In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification). Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand. The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error. Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold. Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing. The following continuous-time state space model {\displaystyle {\begin{aligned}{\dot {\mathbf {x} }}(t)&=\mathbf {Ax} (t)+\mathbf {Bu} (t)+\mathbf {w} (t)\\[2pt]\mathbf {y} (t)&=\mathbf {Cx} (t)+\mathbf {Du} (t)+\mathbf {v} (t)\end{aligned}}} where v and w are continuous zero-mean white noise sources with power spectral densities {\displaystyle {\begin{aligned}\mathbf {w} (t)&\sim N(0,\mathbf {Q} )\\[2pt]\mathbf {v} (t)&\sim N(0,\mathbf {R} )\end{aligned}}} can be discretized, assuming zero-order hold for the input u and continuous integration for the noise v, to {\displaystyle {\begin{aligned}\mathbf {x} [k+1]&=\mathbf {A_{d}x} [k]+\mathbf {B_{d}u} [k]+\mathbf {w} [k]\\[2pt]\mathbf {y} [k]&=\mathbf {C_{d}x} [k]+\mathbf {D_{d}u} [k]+\mathbf {v} [k]\end{aligned}}} with covariances {\displaystyle {\begin{aligned}\mathbf {w} [k]&\sim N(0,\mathbf {Q_{d}} )\\[2pt]\mathbf {v} [k]&\sim N(0,\mathbf {R_{d}} )\end{aligned}}} where {\displaystyle {\begin{aligned}\mathbf {A_{d}} &=e^{\mathbf {A} T}={\mathcal {L}}^{-1}{\Bigl \{}(s\mathbf {I} -\mathbf {A} )^{-1}{\Bigr \}}_{t=T}\\[4pt]\mathbf {B_{d}} &=\left(\int _{\tau =0}^{T}e^{\mathbf {A} \tau }d\tau \right)\mathbf {B} \\[4pt]\mathbf {C_{d}} &=\mathbf {C} \\[8pt]\mathbf {D_{d}} &=\mathbf {D} \\[2pt]\mathbf {Q_{d}} &=\int _{\tau =0}^{T}e^{\mathbf {A} \tau }\mathbf {Q} e^{\mathbf {A} ^{\top }\tau }d\tau \\[2pt]\mathbf {R_{d}} &=\mathbf {R} {\frac {1}{T}}\end{aligned}}} and T is the sample time. If A is nonsingular, {\displaystyle \mathbf {B_{d}} =\mathbf {A} ^{-1}(\mathbf {A_{d}} -\mathbf {I} )\mathbf {B} .} The equation for the discretized measurement noise is a consequence of the continuous measurement noise being defined with a power spectral density.[1] A clever trick to compute Ad and Bd in one step is by utilizing the following property:[2]: p. 215
{\displaystyle e^{{\begin{bmatrix}\mathbf {A} &\mathbf {B} \\\mathbf {0} &\mathbf {0} \end{bmatrix}}T}={\begin{bmatrix}\mathbf {A_{d}} &\mathbf {B_{d}} \\\mathbf {0} &\mathbf {I} \end{bmatrix}}} where Ad and Bd are the discretized state-space matrices. Numerical evaluation of Qd is a bit trickier due to the matrix exponential integral. It can, however, be computed by first constructing a matrix, and computing the exponential of it:[3] {\displaystyle {\begin{aligned}\mathbf {F} &={\begin{bmatrix}-\mathbf {A} &\mathbf {Q} \\\mathbf {0} &\mathbf {A} ^{\top }\end{bmatrix}}T\\[2pt]\mathbf {G} &=e^{\mathbf {F} }={\begin{bmatrix}\dots &\mathbf {A_{d}} ^{-1}\mathbf {Q_{d}} \\\mathbf {0} &\mathbf {A_{d}} ^{\top }\end{bmatrix}}\end{aligned}}} The discretized process noise is then evaluated by multiplying the transpose of the lower-right partition of G with the upper-right partition of G: {\displaystyle \mathbf {Q_{d}} =(\mathbf {A_{d}} ^{\top })^{\top }(\mathbf {A_{d}} ^{-1}\mathbf {Q_{d}} )=\mathbf {A_{d}} (\mathbf {A_{d}} ^{-1}\mathbf {Q_{d}} ).} Starting with the continuous model {\displaystyle \mathbf {\dot {x}} (t)=\mathbf {Ax} (t)+\mathbf {Bu} (t)} we know that the matrix exponential satisfies {\displaystyle {\frac {d}{dt}}e^{\mathbf {A} t}=\mathbf {A} e^{\mathbf {A} t}=e^{\mathbf {A} t}\mathbf {A} } and by premultiplying the model we get {\displaystyle e^{-\mathbf {A} t}\mathbf {\dot {x}} (t)=e^{-\mathbf {A} t}\mathbf {Ax} (t)+e^{-\mathbf {A} t}\mathbf {Bu} (t)} which we recognize as {\displaystyle {\frac {d}{dt}}{\Bigl [}e^{-\mathbf {A} t}\mathbf {x} (t){\Bigr ]}=e^{-\mathbf {A} t}\mathbf {Bu} (t)} and by integrating, {\displaystyle {\begin{aligned}e^{-\mathbf {A} t}\mathbf {x} (t)-e^{0}\mathbf {x} (0)&=\int _{0}^{t}e^{-\mathbf {A} \tau }\mathbf {Bu} (\tau )d\tau \\[2pt]\mathbf {x} (t)&=e^{\mathbf {A} t}\mathbf {x} (0)+\int _{0}^{t}e^{\mathbf {A} (t-\tau )}\mathbf {Bu} (\tau )d\tau \end{aligned}}} which is an analytical solution to the continuous model. Now we want to discretise the above expression. We assume that u is constant during each timestep. {\displaystyle {\begin{aligned}\mathbf {x} [k]&\,{\stackrel {\mathrm {def} }{=}}\ \mathbf {x} (kT)\\[6pt]\mathbf {x} [k]&=e^{\mathbf {A} kT}\mathbf {x} (0)+\int _{0}^{kT}e^{\mathbf {A} (kT-\tau )}\mathbf {Bu} (\tau )d\tau \\[4pt]\mathbf {x} [k+1]&=e^{\mathbf {A} (k+1)T}\mathbf {x} (0)+\int _{0}^{(k+1)T}e^{\mathbf {A} [(k+1)T-\tau ]}\mathbf {Bu} (\tau )d\tau \\[2pt]\mathbf {x} [k+1]&=e^{\mathbf {A} T}\left[e^{\mathbf {A} kT}\mathbf {x} (0)+\int _{0}^{kT}e^{\mathbf {A} (kT-\tau )}\mathbf {Bu} (\tau )d\tau \right]+\int _{kT}^{(k+1)T}e^{\mathbf {A} (kT+T-\tau )}\mathbf {B} \mathbf {u} (\tau )d\tau \end{aligned}}} We recognize the bracketed expression as {\displaystyle \mathbf {x} [k]}, and the second term can be simplified by substituting with the function {\displaystyle v(\tau )=kT+T-\tau }. Note that {\displaystyle d\tau =-dv}.
We also assume that u is constant during the integral, which in turn yields {\displaystyle {\begin{aligned}\mathbf {x} [k+1]&=e^{\mathbf {A} T}\mathbf {x} [k]-\left(\int _{v(kT)}^{v((k+1)T)}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[2pt]&=e^{\mathbf {A} T}\mathbf {x} [k]-\left(\int _{T}^{0}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[2pt]&=e^{\mathbf {A} T}\mathbf {x} [k]+\left(\int _{0}^{T}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[4pt]&=e^{\mathbf {A} T}\mathbf {x} [k]+\mathbf {A} ^{-1}\left(e^{\mathbf {A} T}-\mathbf {I} \right)\mathbf {Bu} [k]\end{aligned}}} which is an exact solution to the discretization problem. When A is singular, the latter expression can still be used by replacing {\displaystyle e^{\mathbf {A} T}} by its Taylor expansion, {\displaystyle e^{\mathbf {A} T}=\sum _{k=0}^{\infty }{\frac {1}{k!}}(\mathbf {A} T)^{k}.} This yields {\displaystyle {\begin{aligned}\mathbf {x} [k+1]&=e^{\mathbf {A} T}\mathbf {x} [k]+\left(\int _{0}^{T}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[2pt]&=\left(\sum _{k=0}^{\infty }{\frac {1}{k!}}(\mathbf {A} T)^{k}\right)\mathbf {x} [k]+\left(\sum _{k=1}^{\infty }{\frac {1}{k!}}\mathbf {A} ^{k-1}T^{k}\right)\mathbf {Bu} [k],\end{aligned}}} which is the form used in practice. Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on the fact that for small timesteps {\displaystyle e^{\mathbf {A} T}\approx \mathbf {I} +\mathbf {A} T}. The approximate solution then becomes {\displaystyle \mathbf {x} [k+1]\approx (\mathbf {I} +\mathbf {A} T)\mathbf {x} [k]+T\mathbf {Bu} [k]} This is known as the Euler method (also called the forward Euler method). Other possible approximations are {\displaystyle e^{\mathbf {A} T}\approx (\mathbf {I} -\mathbf {A} T)^{-1}}, otherwise known as the backward Euler method, and {\displaystyle e^{\mathbf {A} T}\approx (\mathbf {I} +{\tfrac {1}{2}}\mathbf {A} T)(\mathbf {I} -{\tfrac {1}{2}}\mathbf {A} T)^{-1}}, which is known as the bilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform preserves the instability of the continuous-time system. In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features. This can be useful when creating probability mass functions. In generalized functions theory, discretization arises as a particular case of the Convolution Theorem on tempered distributions, where {\displaystyle \operatorname {III} } is the Dirac comb, {\displaystyle \cdot \operatorname {III} } is discretization, {\displaystyle *\operatorname {III} } is periodization, {\displaystyle f} is a rapidly decreasing tempered distribution (e.g. a Dirac delta function {\displaystyle \delta } or any other compactly supported function), {\displaystyle \alpha } is a smooth, slowly growing ordinary function (e.g. the function that is constantly {\displaystyle 1} or any other band-limited function) and {\displaystyle {\mathcal {F}}} is the (unitary, ordinary frequency) Fourier transform. Functions {\displaystyle \alpha } which are not smooth can be made smooth using a mollifier prior to discretization.
As an example, discretization of the function that is constantly {\displaystyle 1} yields the sequence {\displaystyle [\dots ,1,1,1,\dots ]} which, interpreted as the coefficients of a linear combination of Dirac delta functions, forms a Dirac comb. If additionally truncation is applied, one obtains finite sequences, e.g. {\displaystyle [1,1,1,1]}. They are discrete in both time and frequency.
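A minimal sketch of the approximate (forward Euler) discretization derived above, applied to an invented scalar example x'(t) = a·x(t) + b·u(t) with a = −1 and b = 1, so that x[k+1] ≈ (1 + aT)·x[k] + T·b·u[k]. For comparison, the exact zero-order-hold step in the scalar case is x[k+1] = e^{aT}·x[k] + (e^{aT} − 1)/a · b·u[k]:

// Forward Euler discretization of the scalar system x'(t) = a*x(t) + b*u(t),
// compared against the exact zero-order-hold discretization over 10 steps.
public class EulerDiscretization {
    public static void main(String[] args) {
        double a = -1.0, b = 1.0, T = 0.1;
        double xEuler = 1.0, xExact = 1.0, u = 0.0; // free response from x(0) = 1
        for (int k = 0; k < 10; k++) {
            xEuler = (1 + a * T) * xEuler + T * b * u;                       // approximate step
            xExact = Math.exp(a * T) * xExact + (Math.exp(a * T) - 1) / a * b * u; // exact step
        }
        // After 1 s: exact e^{-1} ~ 0.3679 versus Euler (0.9)^10 ~ 0.3487.
        System.out.printf("Euler: %.4f  exact: %.4f%n", xEuler, xExact);
    }
}

Shrinking T closes the gap between the two trajectories, which is the sense in which the Euler model is a small-timestep approximation of the exact discretization.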
https://en.wikipedia.org/wiki/Discretization
Transparency of media ownership refers to the public availability of accurate, comprehensive and up-to-date information about media ownership structures, making it possible for media authorities and the wider public to ascertain who effectively owns and controls the media. Between 2011 and 2012, following some concerns about the opaque activities which accompanied the process of privatisation of the media in Croatia, the government initiated a reform of the law on transparency of media ownership, with the aim of preventing the concealment of information on media ownership structures. The Croatian law provides for the disclosure of information sufficient to establish who formally holds shares in the media organisations operating in Croatia. However, in practice, some obstacles have been observed.[1] There are also some unclear aspects in the new legal framework, which is the result of uncoordinated legal developments needed to complement the original Media Act with provisions to be applied to electronic media, which emerged several years later and are now covered by a dedicated law, namely the Electronic Media Law.[2] In general terms, the information disclosed can be accessed by the public at large but, as a matter of fact, this is quite uncommon. In Croatia, the public debate on media ownership transparency developed only recently, in connection with the amendments to the Media Act and the Electronic Media Act.[3] Transparency of media ownership refers to the public availability of accurate, comprehensive and up-to-date information about media ownership structures. It is an essential component of any democratic media system and is crucial for media pluralism and democracy.[4] A legal regime guaranteeing transparency of media ownership makes available all the information needed to find out who effectively owns, controls and influences the media, as well as the media's influence on political parties or state bodies. The importance of transparency of media ownership for any democratic and pluralist society has been broadly recognised by the European Parliament, the European Commission's High-Level Group on Media Freedom and Pluralism[5] and the Council of Europe. To ensure that the public knows who effectively owns and influences the media, national legal frameworks should ensure the disclosure of at least the following basic information: Importantly, to understand who really owns and controls a specific media outlet, it is necessary to check who is behind the official shareholdings and scrutinise indirect, controlling and beneficial ownership, the latter referring to shares of a media company held on behalf of another person.[6] To be meaningful and easily accessible by citizens and national media authorities, this information should be updated, searchable, free and reusable.[7] In 2011, concerns about opaque media financial flows led to changes to the laws regulating media ownership transparency, to make possible the identification of media owners beyond a company, back to individuals.
Indeed, the relevant laws have been updated to ensure better ownership transparency of media publishers, in particular to avoid the concealment of the real media ownership structure.[8] The process of privatisation of the print media in Croatia, which began in 2000, had been accompanied by corruption scandals and rumours of money laundering and the involvement of criminal groups in the sales of the biggest Croatian newspapers, i.e. Jutarnji list, Večernji list and Slobodna Dalmacija.[9] The names of the real owners of these media were hidden behind secret contracts and informal agreements involving politicians, police and other high-profile individuals.[10] Changes to the Media Act and the Electronic Media Act were very quick and were passed with limited consultation by the Parliament, in July 2011 for the Media Act and in June 2012 for the Electronic Media Act.[11] Transparency of media ownership in Croatia is regulated by the Media Law (2011) and the Electronic Media Law (2012). These rules apply to all media sectors, i.e. print, broadcast and online media, which are required to regularly provide and update information on their shareholders. These laws contain provisions for disclosing ownership information to: the relevant media authorities, i.e. the Electronic Media Council in the case of online media (Electronic Media Act); independent professional and business organisations such as the Croatian Chamber of Economy and other corporate and trade registers (Media Act); or directly to the public.[12] In general terms, the Croatian legal framework requires media to reveal enough information to make possible the identification of their owners, be it an individual or a company. This includes data on all shareholdings over 1%, disclosure of beneficial ownership and of people with indirect interests and control, as well as a prohibition on "secret" ownership.[13] The information disclosed is provided directly to the public via the web and the Official Gazette. Upon request, the Croatian Chamber of Economy, which is responsible for collecting ownership data, must guarantee public access to the information submitted to it. This is in compliance with the recognised right of access to information under the Act on the Right to Access Information.[14] The amendments have had little impact on the ownership structure of Croatian media, which remains questionable. This is because most of the privatisations, which in many cases were reported to be suspicious or controversial, were completed before the amended laws entered into force.[15] Despite the good provisions enshrined in the reformed legal framework, which built upon existing obligations, the lack of consultation that led to the amendments to the Media Act and the Electronic Media Act resulted in a series of shortcomings, especially with regard to monitoring and enforcement of the laws. For instance, in practice, media companies in Croatia do not always comply with their obligation to publish information on indirect ownership, and the law does not foresee a mechanism for monitoring, checking compliance and applying sanctions. Indeed, the Media Act does not provide an effective mechanism enabling the Croatian Chamber of Economy to check that the information received is updated and correct.
Given the scarcity of resources assigned to it, the Chamber has to rely on the assistance of other authorities such as the Croatian Competition Agency and the Company Register.[16] Another critical point of the Croatian system regulating transparency of media ownership is that it does not guarantee the full disclosure of information on individuals holding shares of a media company.[17] Indeed, the name of a company is often not enough to identify the individuals behind it. For example, on the basis of the information provided under the law regulating transparency of media ownership, the media companies Europa digital d.o.o., Slobodna Dalmacija, EPH Media and Gloria Grupa apparently have nothing in common, yet they are all subsidiaries within the EPH group, owned by WAZ and the businessman Ninoslav Pavić.[18] Additionally, in practice, media outlets do not always disclose information on indirect ownership as required by the law. For example, the only shareholder listed for the media company Večernji list d.d., which issues the daily paper Večernji list, is Styria Media International AG from Graz. Other relevant ownership information is not disclosed, for instance information regarding the important shareholders of Styria Media International AG or whether that company holds some shares on behalf of another person or company.[19] Furthermore, the publication of data in the Official Gazette is not monitored, and since there is no special issue of the Official Gazette listing the updates that occurred during the year, searching it is complex and time-consuming.[20] In sum, despite the fact that the laws on transparency of media ownership are well defined, in practice it is quite difficult to assess the actual ownership structures and reconstruct the networks of ownership and connected persons.[21] The problem is exacerbated by the fact that, under Croatian law, different agencies are in charge of collecting data on ownership – the Council for Electronic Media is responsible for online media and the Croatian Chamber of Economy for print media – without a single centralised monitoring system working across print, radio, television and online media.[22] In October 2014, the European Commission organised a consultative conference on transparency of media ownership, at which Croatia was mentioned as "a good practice". However, many panellists at the conference stressed that, while it is important to keep improving the legal framework, that alone is not enough: according to experts, it matters more to know who effectively controls the owners than who nominally owns the media.[23]
https://en.wikipedia.org/wiki/Transparency_of_media_ownership_in_Croatia
In combinatorics and order theory, a multitree may describe either of two equivalent structures: a directed acyclic graph (DAG) in which there is at most one directed path between any two vertices, or equivalently in which the subgraph reachable from any vertex induces an undirected tree; or a partially ordered set (poset) that does not have four items a, b, c, and d forming a diamond suborder with a ≤ b ≤ d and a ≤ c ≤ d but with b and c incomparable to each other (also called a diamond-free poset[1]). In computational complexity theory, multitrees have also been called strongly unambiguous graphs or mangroves; they can be used to model nondeterministic algorithms in which there is at most one computational path connecting any two states.[2] Multitrees may be used to represent multiple overlapping taxonomies over the same ground set.[3] If a family tree may contain multiple marriages from one family to another, but does not contain marriages between any two blood relatives, then it forms a multitree.[4] In a directed acyclic graph, if there is at most one directed path between any two vertices, or equivalently if the subgraph reachable from any vertex induces an undirected tree, then its reachability relation is a diamond-free partial order. Conversely, in a diamond-free partial order, the transitive reduction identifies a directed acyclic graph in which the subgraph reachable from any vertex induces an undirected tree. A diamond-free family of sets is a family F of sets whose inclusion ordering forms a diamond-free poset. If D(n) denotes the largest possible diamond-free family of subsets of an n-element set, then it is known that the ratio of D(n) to the central binomial coefficient C(n, ⌊n/2⌋) is asymptotically bounded between 2 and approximately 2.25, and it is conjectured that the limit is 2.[1] A polytree, a directed acyclic graph formed by orienting the edges of an undirected tree, is a special case of a multitree. The subgraph reachable from any vertex in a multitree is an arborescence rooted at that vertex, that is, a polytree in which all edges are oriented away from the root. The word "multitree" has also been used to refer to a series–parallel partial order,[5] or to other structures formed by combining multiple trees.
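As a concrete illustration of the graph-theoretic definition, the following minimal Python sketch (not part of the source article; the function name and graph representation are our own) tests whether a directed acyclic graph is a multitree by counting, for each vertex, the number of distinct directed paths to every other vertex:

from collections import defaultdict

def is_multitree(vertices, edges):
    """Return True if the DAG given by `edges` (pairs (u, v)) has at most
    one directed path between any ordered pair of vertices.
    The graph is assumed to be acyclic."""
    succ = defaultdict(list)
    indeg = {v: 0 for v in vertices}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    # Kahn's algorithm for a topological order.
    order, queue = [], [v for v in vertices if indeg[v] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)

    for s in vertices:
        paths = defaultdict(int)   # number of directed paths from s
        paths[s] = 1
        for u in order:            # propagate counts in topological order
            for w in succ[u]:
                paths[w] += paths[u]
                if paths[w] > 1:   # two distinct directed paths s -> w
                    return False
    return True

For the diamond DAG with edges a→b, a→c, b→d, c→d the function returns False (there are two paths from a to d), while any polytree returns True; the running time is O(V·E), since path counts are propagated once per source vertex.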
https://en.wikipedia.org/wiki/Multitree
Customer engagement is an interaction between an external consumer/customer (either B2C or B2B) and an organization (company or brand) through various online or offline channels.[citation needed] According to Hollebeek, Srivastava and Chen, customer engagement is "a customer's motivationally driven, volitional investment of operant resources (including cognitive, emotional, behavioral, and social knowledge and skills), and operand resources (e.g., equipment) into brand interactions," which applies to online and offline engagement.[1] Online customer engagement is qualitatively different from offline engagement, as the nature of the customer's interactions with a brand, company and other customers differs on the internet. Discussion forums or blogs, for example, are spaces where people can communicate and socialize in ways that cannot be replicated by any offline interactive medium. Online customer engagement is a social phenomenon that became mainstream with the wide adoption of the internet in the late 1990s and has expanded with technical developments in broadband speed, connectivity and social media. These factors enable customers to regularly engage in online communities revolving, directly or indirectly, around product categories and other consumption topics. This process often leads to positive engagement with the company or offering, as well as the behaviors associated with different degrees of customer engagement.[citation needed] Marketing practices aim to create, stimulate or influence customer behaviour, which places conversions into a more strategic context and is premised on the understanding that a focus on maximising conversions can, in some circumstances, decrease the likelihood of repeat conversions.[2] Although customer advocacy has always been a goal for marketers, the rise of online user-generated content has directly influenced levels of advocacy. Customer engagement targets long-term interactions, encouraging customer loyalty and advocacy through word-of-mouth. Although customer engagement marketing is consistent both online and offline, the internet is the basis for marketing efforts.[2] In March 2006, the Advertising Research Foundation announced the first definition of customer engagement[3] as "turning on a prospect to a brand idea enhanced by the surrounding context." However, the ARF definition was criticized by some for being too broad.[4] The ARF, the World Federation of Advertisers[5] and others have since offered their own definitions, and these various definitions have translated different aspects of customer engagement. Forrester Consulting's research in 2008 defined customer engagement as "creating deep connections with customers that drive purchase decisions, interaction, and participation, over time". Studies by the Economist Intelligence Unit define customer engagement as "an intimate long-term relationship with the customer". Both of these concepts prescribe that customer engagement is attributed to a rich association formed with customers. Drawing on relationship marketing and service-dominant perspectives, customer engagement can be loosely defined as "consumers' proactive contributions in co-creating their personalized experiences and perceived value with organizations through active, explicit, and ongoing dialogue and interactions". The book Best Digital Marketing Campaigns In The World defines customer engagement as "mutually beneficial relationships with a constantly growing community of online consumers".
The various definitions of customer engagement are diversified by different perspectives and contexts of the engagement process. These are determined by the brand, product, or service, the audience profile, attitudes and behaviours, and the messages and channels of communication that are used to interact with the customer. Since 2009, a number of new definitions have been proposed in the literature. In 2011, the term was defined as "the level of a customer's cognitive, emotional and behavioral investment in specific brand interactions," identifying the three CE dimensions of immersion (cognitive), passion (emotional) and activation (behavioral).[6] It was also defined as "a psychological state that occurs by virtue of interactive, co-creative customer experiences with a particular agent/object (e.g. a brand)".[7] Researchers have based their work on customer engagement as a multi-dimensional construct, while also identifying that it is context-dependent. Engagement is manifested in the various interactions that customers undertake, which are in turn shaped by individual cultures.[8] The context is not limited to geographical context, but also includes the medium with which the user engages.[8] Moreover, customer engagement is the emotional involvement and psychological process by which both new and existing consumers become loyal to specific types of services or products. The degree to which customers pay attention to companies or products, as well as their participation in operations, is referred to as customer engagement.[9] To effectively navigate customer engagement, businesses establish objectives that align with their organizational goals. Whether the aim is to enhance customer loyalty, drive revenue growth, or deliver personalized experiences, having a plan underpins impactful engagement initiatives. To optimize outcomes, businesses analyze customer interactions, identify areas for improvement, and iterate on their strategies. The landscape of customer engagement is characterized by the merging of data-driven insights, innovative strategies, and a commitment to delivering outstanding customer experiences. By prioritizing customer engagement, businesses can cultivate long-lasting customer relationships, drive customer loyalty, and thrive in increasingly competitive markets.[citation needed] Efforts to boost user engagement at any expense can lead to social media addiction for both service providers and users. Facebook and several other social media platforms have faced criticism for manipulating user emotions to enhance engagement, even when the content is knowingly false. Professor Hany Farid summarized Facebook's approach, stating, "When you're in the business of maximizing engagement, you're not interested in truth."[10] Various other techniques used to increase engagement are also considered abusive, for example FOMO (fear of missing out), infinite scrolling, and incentives for users who frequently engage with the service. Offline customer engagement predates online, but the latter is a qualitatively different social phenomenon, unlike any offline customer engagement that social theorists or marketers recognize. In the past, customer engagement was generated broadly through television, radio, print media, outdoor advertising, and various other touchpoints, ideally during peak and/or highly trafficked periods. However, the only conclusive results of campaigns were sales and/or return-on-investment figures.
The widespread adoption of the internet during the late 1990s has enhanced the processes of customer engagement, in particular the ways in which it can now be measured, at different levels of engagement. It is a recent social phenomenon in which people engage online in communities that do not necessarily revolve around a particular product but serve as meeting or networking places. This online engagement has brought about both the empowerment of consumers and the opportunity for businesses to engage with their target customers online. A 2011 market analysis revealed that 80% of online customers, after reading negative online reviews, report making alternate purchasing decisions, while 87% of consumers said a favorable review had confirmed their decision to go through with a purchase.[11] The concept and practice of online customer engagement enables organisations to respond to the fundamental changes in customer behaviour that the internet has brought about,[12] as well as to the increasing ineffectiveness of the traditional 'interrupt and repeat', broadcast model of advertising. Due to the fragmentation and specialisation of media and audiences, as well as the proliferation of community- and user-generated content, businesses are increasingly losing the power to dictate the communications agenda. Simultaneously, lower switching costs, the geographical widening of the market and the vast choice of content, services and products available online have weakened customer loyalty. Enhancing customers' firm- and market-related expertise has been shown to engage customers,[13] strengthen their loyalty,[14] and emotionally tie them more closely to a firm.[15] Since the world has reached a population of over 3 billion internet users, it is clear that society's interactive culture is significantly influenced by technology. Connectivity is bringing consumers and organizations together, which makes it critical for companies to take advantage and focus on capturing the attention of, and interacting with, well-informed consumers in order to serve and satisfy them. Connecting with customers establishes exclusivity in their experience, which can increase brand loyalty and word of mouth, and provides businesses with valuable consumer analytics, insight, and retention. Customer engagement can come in the form of a view, an impression, a reach, a click, a comment, or a share, among many others. These are ways in which analytics and insights into customer engagement can now be measured on different levels, all of which allow businesses to record and process the results of customer engagement. Given the breadth of information and connections available to consumers, the way to develop meaningful customer engagement is to proactively connect with customers by listening. Listening will empower the consumer, give them control, and endorse a customer-centric two-way dialogue. This dialogue will redefine the role of the consumer, as they no longer assume the end-user role in the process. Instead of the traditional transaction and/or exchange, the concept becomes a process of partnership between organizations and consumers. Particularly since the internet has provided consumers with access to much diverse knowledge and understanding, consumers now have increasingly high expectations, have developed stronger sensory perceptions, and hence have become more attracted to experiential values.
Therefore, it is in businesses' interest to meet these new criteria, providing the opportunity for consumers to immerse themselves further in the consumption experience. This experience involves organizations and consumers sharing and exchanging information, which generates increased awareness, interest, desire to purchase, retention, and loyalty among consumers, developing an intimate relationship. Significantly, total openness and strengthened customer service are the selling points here for customers, making them feel more involved rather than just a number. This earns trust, engagement, and ultimately word of mouth through endless social circles. Essentially, it is a more dynamic and transparent concept of customer relationship management (CRM). The utilization of social media platforms has emerged as a modern way of improving customer engagement strategies. By curating content that resonates with the interests of customers, businesses cultivate authentic connections and communities online. Platforms such as Instagram and Twitter serve as useful tools for meaningful dialogue, enabling businesses to build lasting relationships with customers and amplify brand visibility online. Customer engagement on Twitter is a form of social power and is usually measured with likes, replies and retweets. A recent study[16] shows that retweets are more likely to contain positive content and address larger audiences using the first-person pronoun "we". Replies, on the other hand, are more likely to contain negative content and address individuals using the second-person pronoun "you" and the third-person pronouns "he" or "she". While users with fewer followers tend to engage in interpersonal conversations to provoke customer engagement, influencers with many followers tend to post positive messages, often using the word "love" when addressing larger audiences. Customer engagement marketing is necessitated by a combination of social, technological and market developments. Companies attempt to create an engaging dialogue with target consumers and stimulate their engagement with the given brand. Although this must take place both on- and offline, the internet is considered the primary method. Marketing begins with understanding the internal dynamics of these developments and the behaviour and engagement of consumers online. Consumer-generated media plays a significant role in the understanding and modeling of engagement.[17] The control Web 2.0 consumers have gained is quantified through 'old school' marketing performance metrics.[18] The effectiveness of the traditional 'interrupt and repeat' model of advertising is decreasing, which has caused businesses to lose control of communications agendas.[19][20][21] In August 2006, McKinsey & Co published a report[22] which indicated that traditional TV advertising would decrease in effectiveness compared to previous decades.[19] As customer audiences have become smaller and more specialised, the fragmentation of media and audiences and the accompanying reduction of audience size[19] have reduced the effectiveness of the traditional top-down, mass, 'interrupt and repeat' advertising model. Forrester Research's North American Consumer Technology Adoption Study[22] found that people in the 18-26 age group spend more time online than watching TV.[2][19] Furthermore, the Global Web Index reported that in 2021, YouTube beat all mainstream media platforms in monthly engagement.[citation needed] This is partly due to the fact that 51% of U.S. and U.K.
consumers use YouTube for shopping and product research,[citation needed] a service that traditional media cannot readily provide.[citation needed] In response to this fragmentation and the increased amount of time spent online, marketers have also increased spending on online communication. ContextWeb analysts found that marketers who promote on sites like Facebook and the New York Times are not as successful at reaching consumers, while marketers who promote more on niche websites have a better chance of reaching their audiences.[23] Because customer audiences are also broadcasters, with the power of circulation and permanence of CGM, businesses lose influence. Rather than trying to position a product using static messages, companies can become the subject of conversation among a target market that has already discussed, positioned and rated the product. This also means that consumers can now choose not only when and how, but also whether, they will engage with marketing communications.[2] In addition, new media provide consumers with more control over advertising consumption.[24] Research shows the importance of customer engagement in the modern market. The lowering of entry barriers, such as the need for a sales force, access to channels and physical assets, and the geographical widening of the market due to the internet, have brought about increasing competition and a decrease in brand loyalty. In combination with lower switching costs, easier access to information about products and suppliers, and increased choice, brand loyalty is hard to achieve. The increasing ineffectiveness of television advertising is due to the shift of consumer attention to the internet and new media, which give consumers control over advertising consumption and cause a decrease in audience size.[25] A study conducted by Salesforce shows that an overwhelming 80% of customers acknowledge that their experience with a business is as important as the quality of its products or services.[citation needed] Therefore, it is important to prioritize customer engagement as a business strategy. The proliferation of media that provide consumers with more control over their advertising consumption (subscription-based digital radio and TV) and the simultaneous decrease of trust in advertising and increase of trust in peers[19] point to the need for communications that the customer will desire to engage with. Stimulating a consumer's engagement with a brand is the only way to increase brand loyalty and, therefore, "the best measure of current and future performance".[25] CE behaviour became prominent with the advent of the social phenomenon of online CE. Creating and stimulating customer engagement behaviour has recently become an explicit aim of both profit and non-profit organisations in the belief that engaging target customers to a high degree is conducive to furthering business objectives. Shevlin's definition of CE is well suited to understanding the process that leads to an engaged customer. In its adaptation by Richard Sedley the key word is 'investment': "Repeated interactions that strengthen the emotional, psychological or physical investment a customer has in a brand."[This quote needs a citation] A customer's degree of engagement with a company lies on a continuum that represents the strength of their investment in that company. Positive experiences with the company strengthen that investment and move the customer further along the continuum of engagement. What is important in measuring degrees of involvement is the ability to define and quantify the stages on the continuum.
One popular suggestion is a four-level model adapted from Kirkpatrick's Levels. Concerns have, however, been expressed as regards the measurability of stages three and four. Another popular suggestion is Ghuneim's typology of engagement.[26] The following consumer typology according to degree of engagement also fits into Ghuneim's continuum: creators (smallest group), critics, collectors, couch potatoes (largest group).[27] Engagement is a holistic characterization of a consumer's behavior, encompassing a host of sub-aspects of behaviour such as loyalty, satisfaction, involvement, word-of-mouth advertising, complaining and more. The behavioural outcomes of an engaged consumer are what link CE to profits. From this point of view, "CE is the best measure of current and future performance; an engaged relationship is probably the only guarantee for a return on your organization's or your clients' objectives."[28] Simply attaining a high level of customer satisfaction does not seem to guarantee the customer's business: 60% to 80% of customers who defect to a competitor said they were satisfied or very satisfied on the survey just prior to their defection.[2]: 32 The main difference between traditional and customer engagement marketing is marked by a series of shifts in approach, and specific marketing practices have developed around them. All marketing practices, including internet marketing, include measuring the effectiveness of various media along the customer engagement cycle, as consumers travel from awareness to purchase. Often the use of CVP analysis factors into strategy decisions, including budgets and media placement. The CE metric is useful for (a) planning and (b) measuring effectiveness, i.e. how successful CE-marketing efforts have been at engaging target customers. The importance of CE as a marketing metric is reflected in the ARF's statement: "The industry is moving toward customer engagement with marketing communications as the 21st century metric of marketing efficiency and effectiveness."[29] The ARF envisages CE exclusively as a metric of engagement with communication, but it is not necessary to distinguish between engaging with the communication and with the product, since CE behaviour deals with, and is influenced by, involvement with both. In order to be operational, CE metrics must be combined with psychodemographics. It is not enough to know that a website has 500 highly engaged members, for instance; it is imperative to know what percentage are members of the company's target market.[30] As a metric for effectiveness, Scott Karp suggests, CE is the solution to the same intractable problems that have long been a struggle for old media: how to prove value.[31] The CE metric is synthetic and integrates a number of variables; the World Federation of Advertisers calls it 'consumer-centric holistic measurement'.[32] Both root metrics and action metrics have been proposed as components of a CE metric, and a number of issues must be resolved in selecting the components.
https://en.wikipedia.org/wiki/Customer_engagement
Super Wi-Fi refers to IEEE 802.11g/n/ac/ax Wi-Fi implementations over the unlicensed 2.4 and 5 GHz Wi-Fi bands, but with performance enhancements for antenna control, multiple-path beam selection, advanced best-path control, and applied intelligence for load balancing, giving it bi-directional connectivity to standard Wi-Fi-enabled devices over distances of up to 1,700 meters. Hong Kong–based Altai Technologies[1] developed and patented Super Wi-Fi technology and manufactures a product line of base stations and access points deployed extensively around the world beginning in 2007. Due to its extended range and advanced interference mitigation, Super Wi-Fi is primarily used for expansive outdoor and heavy industrial use cases.[2] Krysp Wireless, LLC[3] is Altai Technologies' master distributor for North America, focused on the sale and distribution of Super Wi-Fi products for large enterprises, WISPs and municipal deployments. Altai's Super Wi-Fi technology should not be confused with the FCC's use of the term relating to plans, announced in 2012, for using TV white space spectrum to support the delivery of long-range internet access. Super Wi-Fi is a term originally coined by the United States Federal Communications Commission (FCC) to describe a wireless networking proposal which the FCC plans to use for the creation of longer-distance wireless Internet access.[4][5] The use of the trademark "Wi-Fi" in the name has been criticized because it is neither based on Wi-Fi technology nor endorsed by the Wi-Fi Alliance.[4] A trade show has also been called the "Super WiFi Summit" (without hyphen).[6] Various standards such as IEEE 802.22 and IEEE 802.11af have been proposed for this concept. The term "White-Fi"[7] has also been used to indicate the use of white space for IEEE 802.11af.[8][9] Altai Technologies' Super Wi-Fi leverages dynamic use of the unlicensed 2.4 and 5 GHz bands to seamlessly migrate nomadic device connections from one band to the other depending on their distance from the Super Wi-Fi base station/access point. This dynamic use of both unlicensed bands, combined with patented throughput optimization and interference mitigation, is what supports Super Wi-Fi's extended range. Conversely, the FCC's Super Wi-Fi proposal is a network backhaul solution that uses the lower-frequency white spaces between television channel frequencies.[10] These lower frequencies allow the signal to travel further and penetrate walls better than the higher frequencies previously used.[10] The FCC's plan was to allow those white space frequencies to be used for free, as happens with shorter-range Wi-Fi and Bluetooth.[10] However, due to concerns about interference with broadcasting, Super Wi-Fi devices cannot access the TV spectrum at will. The FCC has mandated the use of a TV white space database (also referred to as a geolocation database), which must be consulted by Super Wi-Fi devices before they gain access to the VHF-UHF spectrum. The white space database evaluates the potential for interference with broadcasting and either grants or denies Super Wi-Fi devices access to the VHF-UHF spectrum.
Continuing research evaluates the coverage and performance potential of Super Wi-Fi networks.[11][12] Altai Technologies' Super Wi-Fi deployment use cases around the world include container ports, heavy industrial complexes, campus environments, mining operations, agriculture and airports, among others.[13] Proof-of-concept deployments for the FCC's Super Wi-Fi initiative leveraging TV white space include one by Rice University, which, in partnership with the nonprofit organization Technology For All, installed the first residential deployment of Super Wi-Fi in east Houston in April 2011. The network uses white spaces for backhaul and provides access to clients using 2.4 GHz Wi-Fi.[14] A month later, a public Super Wi-Fi network was deployed in Calgary, Alberta, when Calgary-based company WestNet Wireless launched the network for free and paid subscribers.[15] The United States' first public Super Wi-Fi network was deployed in Wilmington, North Carolina, on January 26, 2012. Florida-based company Spectrum Bridge launched a network for public use with access at Hugh MacRae Park.[16] West Virginia University launched the first campus Super Wi-Fi network on July 9, 2013.[17] Currently, Microsoft is using TV white spaces to provide Super Wi-Fi connectivity in select regions across Africa, Asia, North America, and South America.[18] This follows successful trials in 2012 in countries such as Belgium, Kenya, Switzerland, Singapore, the United Kingdom, the United States, and Uruguay. As of 2021, Microsoft runs the service under Project Mawingu in Microsoft 4Afrika to provide low-cost internet access to rural communities on the African continent.[19] The countries served include Kenya, Namibia, Tanzania, South Africa, Ghana and Botswana.
https://en.wikipedia.org/wiki/Super_Wi-Fi
Single sign-on (SSO) is an authentication scheme that allows a user to log in with a single ID to any of several related, yet independent, software systems. True single sign-on allows the user to log in once and access services without re-entering authentication factors. It should not be confused with same-sign-on (directory server authentication), often accomplished by using the Lightweight Directory Access Protocol (LDAP) and stored LDAP databases on (directory) servers.[1][2] A simple version of single sign-on can be achieved over IP networks using cookies, but only if the sites share a common DNS parent domain.[3] For clarity, a distinction is made between directory server authentication (same-sign-on) and single sign-on: directory server authentication refers to systems requiring authentication for each application but using the same credentials from a directory server, whereas single sign-on refers to systems where a single authentication provides access to multiple applications by passing the authentication token seamlessly to configured applications. Conversely, single sign-off or single log-out (SLO) is the property whereby a single action of signing out terminates access to multiple software systems. As different applications and resources support different authentication mechanisms, single sign-on must internally store the credentials used for initial authentication and translate them to the credentials required for the different mechanisms. Other shared authentication schemes, such as OpenID and OpenID Connect, offer other services that may require users to make choices during a sign-on to a resource, but can be configured for single sign-on if those other services (such as user consent) are disabled. An increasing number of federated social logons, like Facebook Connect, do require the user to enter consent choices upon first registration with a new resource, and so are not always single sign-on in the strictest sense. Single sign-on offers a number of benefits. SSO shares centralized authentication servers that all other applications and systems use for authentication purposes, and combines this with techniques to ensure that users do not have to actively enter their credentials more than once. The term reduced sign-on (RSO) has been used by some to reflect the fact that single sign-on is impractical in addressing the need for different levels of secure access in the enterprise, and as such more than one authentication server may be necessary.[6] As single sign-on provides access to many resources once the user is initially authenticated ("keys to the castle"), it increases the negative impact in case the credentials are available to other people and misused. Therefore, single sign-on requires an increased focus on the protection of the user credentials, and should ideally be combined with strong authentication methods like smart cards and one-time password tokens.[6] Single sign-on also increases dependence on highly available authentication systems; a loss of their availability can result in denial of access to all systems unified under the SSO. SSO can be configured with session failover capabilities in order to maintain system operation.[7] Nonetheless, the risk of system failure may make single sign-on undesirable for systems to which access must be guaranteed at all times, such as security or plant-floor systems.
Furthermore, the use of single sign-on techniques utilizing social networking services such as Facebook may render third-party websites unusable within libraries, schools, or workplaces that block social media sites for productivity reasons. It can also cause difficulties in countries with active censorship regimes, such as China and its "Golden Shield Project", where the third-party website may not be actively censored but is effectively blocked if a user's social login is blocked.[8][9] In March 2012,[10] a research paper reported an extensive study on the security of social login mechanisms. The authors found 8 serious logic flaws in high-profile ID providers and relying-party websites, such as OpenID (including Google ID and PayPal Access), Facebook, Janrain, Freelancer, FarmVille, and Sears.com. Because the researchers informed ID providers and relying-party websites prior to public announcement of the discovery of the flaws, the vulnerabilities were corrected, and no security breaches have been reported.[11] In May 2014, a vulnerability named Covert Redirect was disclosed.[12] It was first reported as "Covert Redirect Vulnerability Related to OAuth 2.0 and OpenID" by its discoverer Wang Jing, a mathematics PhD student from Nanyang Technological University, Singapore.[13][14][15] In fact, almost all[weasel words] single sign-on protocols are affected. Covert Redirect takes advantage of third-party clients susceptible to cross-site scripting (XSS) or open redirects.[16] In December 2020, flaws in federated authentication systems were discovered to have been utilized by attackers during the 2020 United States federal government data breach.[17][18] Due to how single sign-on works (a request is sent to the logged-in website to get an SSO token, and a request with the token is then sent to the logged-out website), the token cannot be protected with the HttpOnly cookie flag and thus can be stolen by an attacker if there is an XSS vulnerability on the logged-out website, enabling session hijacking. Another security issue is that if the session used for SSO is stolen (which, unlike the SSO token, can be protected with the HttpOnly cookie flag), the attacker can access all the websites that use the SSO system. As originally implemented in Kerberos and SAML, single sign-on did not give users any choices about releasing their personal information to each new resource that the user visited. This worked well enough within a single enterprise, like MIT, where Kerberos was invented, or major corporations where all of the resources were internal sites. However, as federated services like Active Directory Federation Services proliferated, the user's private information was sent out to affiliated sites not under the control of the enterprise that collected the data from the user. Since privacy regulations are now tightening with legislation like the GDPR, newer methods like OpenID Connect have started to become more attractive; for example, MIT, the originator of Kerberos, now supports OpenID Connect.[19] Single sign-on can, in theory, work without revealing identifying information such as email addresses to the relying party (credential consumer), but many credential providers do not allow users to configure what information is passed on to the credential consumer. As of 2019, Google and Facebook sign-in do not require users to share email addresses with the credential consumer.
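The token-passing flow discussed above can be made concrete. The following Python example is only an illustrative sketch under our own assumptions (a shared HMAC secret and an ad hoc token format); real single sign-on deployments use standardized assertions such as SAML or OpenID Connect ID tokens, typically with asymmetric signatures:

import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret between the identity provider and the
# service provider; real deployments generally use asymmetric keys.
SHARED_SECRET = b"demo-secret-do-not-use"

def issue_token(user_id, ttl_seconds=300):
    """Identity-provider side: sign a short-lived assertion of identity."""
    claims = {"sub": user_id, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token):
    """Service-provider side: accept the assertion only if the signature
    matches and the token has not expired; return the user id or None."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # forged or corrupted token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None  # expired token
    return claims["sub"]

token = issue_token("alice")
assert verify_token(token) == "alice"

The expiry claim limits the window in which a stolen token can be replayed, which matters because, as noted above, the token itself cannot be protected by the HttpOnly cookie flag.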
"Sign in with Apple" introduced iniOS 13allows a user to request a unique relay email address each time the user signs up for a new service, thus reducing the likelihood of account linking by the credential consumer.[20] Windowsenvironment - Windows login fetches TGT.Active Directory-aware applications fetch service tickets, so the user is not prompted to re-authenticate. Unix/Linuxenvironment - Login via KerberosPAMmodules fetches TGT. Kerberized client applications such asEvolution,Firefox, andSVNuse service tickets, so the user is not prompted to re-authenticate. Initial sign-on prompts the user for thesmart card. Additionalsoftware applicationsalso use the smart card, without prompting the user to re-enter credentials. Smart-card-based single sign-on can either use certificates or passwords stored on the smart card. Integrated Windows Authenticationis a term associated withMicrosoftproducts and refers to theSPNEGO,Kerberos, andNTLMSSPauthentication protocols with respect toSSPIfunctionality introduced with MicrosoftWindows 2000and included with laterWindows NT-based operating systems. The term is most commonly used to refer to the automatically authenticated connections between MicrosoftInternet Information ServicesandInternet Explorer. Cross-platformActive Directoryintegration vendors have extended the Integrated Windows Authentication paradigm to Unix (including Mac) and Linux systems. Security Assertion Markup Language(SAML) is anXML-based method for exchanging user security information between anSAML identity providerand aSAML service provider.SAML 2.0supportsW3CXML encryption and service-provider–initiated web browser single sign-on exchanges.[21]A user wielding a user agent (usually a web browser) is called the subject in SAML-based single sign-on. The user requests a web resource protected by a SAML service provider. The service provider, wishing to know the identity of the user, issues an authentication request to a SAML identity provider through the user agent. The identity provider is the one that provides the user credentials. The service provider trusts theuser informationfrom the identity provider to provide access to its services or resources. A newer variation of single-sign-on authentication has been developed using mobile devices as access credentials. Users' mobile devices can be used to automatically log them onto multiple systems, such as building-access-control systems and computer systems, through the use of authentication methods which includeOpenID Connectand SAML,[22]in conjunction with anX.509ITU-Tcryptographycertificate used to identify the mobile device to an access server. A mobile device is "something you have", as opposed to a password which is "something you know", or biometrics (fingerprint, retinal scan, facial recognition, etc.) which is "something you are". Security experts recommend using at least two out of these three factors (multi-factor authentication) for best protection.
https://en.wikipedia.org/wiki/Single_sign-on
In engineering and systems theory, redundancy is the intentional duplication of critical components or functions of a system with the goal of increasing the reliability of the system, usually in the form of a backup or fail-safe, or to improve actual system performance, as in the case of GNSS receivers or multi-threaded computer processing. In many safety-critical systems, such as fly-by-wire and hydraulic systems in aircraft, some parts of the control system may be triplicated,[1] which is formally termed triple modular redundancy (TMR). An error in one component may then be out-voted by the other two. In a triply redundant system, the system has three sub-components, all three of which must fail before the system fails. Since each one rarely fails, and the sub-components are designed to preclude common failure modes (which can then be modelled as independent failures), the probability of all three failing is calculated to be extraordinarily small; it is often outweighed by other risk factors, such as human error. Electrical surges arising from lightning strikes are an example of a failure mode which is difficult to fully isolate, unless the components are powered from independent power buses and have no direct electrical pathway in their interconnect (communication by some means is required for voting). Redundancy may also be known by the terms "majority voting systems"[2] or "voting logic".[3] Redundancy sometimes produces less, instead of greater, reliability: it creates a more complex system which is prone to various issues, it may lead to human neglect of duty, and it may lead to higher production demands which, by overstressing the system, may make it less safe.[4] Redundancy is one form of robustness as practiced in computer science. Geographic redundancy has become important in the data center industry, to safeguard data against natural disasters and political instability (see below). In computer science, there are four major forms of redundancy.[5] A modified form of software redundancy may also be applied to hardware. Structures are usually designed with redundant parts as well, ensuring that if one part fails, the entire structure will not collapse. A structure without redundancy is called fracture-critical, meaning that a single broken component can cause the collapse of the entire structure. Bridges that failed due to lack of redundancy include the Silver Bridge and the Interstate 5 bridge over the Skagit River. Parallel and combined systems demonstrate different levels of redundancy, and the models are the subject of studies in reliability and safety engineering.[6] Unlike traditional redundancy, which uses more than one of the same thing, dissimilar redundancy uses different things. The idea is that the different things are unlikely to contain identical flaws. The voting method may involve additional complexity if the two things take different amounts of time. Dissimilar redundancy is often used with software, because identical software contains identical flaws. The chance of failure is reduced by using at least two different types of each critical element. Geographic redundancy corrects the vulnerabilities of redundant devices by geographically separating backup devices. Geographic redundancy reduces the likelihood of events such as power outages, floods, HVAC failures, lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system, if not the entirety of it.
Geographic redundancy locations can be chosen in several ways, and various methods can reduce the risks of damage by a fire conflagration. Geographic redundancy is used by Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Netflix, Dropbox, Salesforce, LinkedIn, PayPal, Twitter, Facebook, Apple iCloud, Cisco Meraki, and many others to provide geographic redundancy, high availability and fault tolerance, and to ensure the availability and reliability of their cloud services.[15] As another example, to minimize the risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles (3.2 km) away from the shore, at an elevation of at least 5 feet (1.5 m) above sea level. For additional protection, they can be located at least 100 feet (30 m) away from flood-plain areas.[16][17] The two functions of redundancy are passive redundancy and active redundancy. Both functions prevent performance decline from exceeding specification limits without human intervention, using extra capacity. Passive redundancy uses excess capacity to reduce the impact of component failures. One common form of passive redundancy is the extra strength of cabling and struts used in bridges. This extra strength allows some structural components to fail without bridge collapse. The extra strength used in the design is called the margin of safety. Eyes and ears provide working examples of passive redundancy. Vision loss in one eye does not cause blindness, but depth perception is impaired. Hearing loss in one ear does not cause deafness, but directionality is lost. Performance decline is commonly associated with passive redundancy when a limited number of failures occur. Active redundancy eliminates performance declines by monitoring the performance of individual devices, and this monitoring is used in voting logic. The voting logic is linked to switching that automatically reconfigures the components. Error detection and correction and the Global Positioning System (GPS) are two examples of active redundancy. Electrical power distribution provides an example of active redundancy. Several power lines connect each generation facility with customers. Each power line includes monitors that detect overload. Each power line also includes circuit breakers. The combination of power lines provides excess capacity. Circuit breakers disconnect a power line when monitors detect an overload. Power is redistributed across the remaining lines.[citation needed] At the Toronto airport, there are 4 redundant electrical lines, each of which supplies enough power for the entire airport. A spot network substation uses reverse-current relays to open breakers to lines that fail, but lets power continue to flow to the airport. Electrical power systems use power scheduling to reconfigure active redundancy. Computing systems adjust the production output of each generating facility when other generating facilities are suddenly lost. This prevents blackout conditions during major events such as an earthquake. Charles Perrow, author of Normal Accidents, has said that sometimes redundancies backfire and produce less, not more, reliability. This may happen in three ways. First, redundant safety devices result in a more complex system, more prone to errors and accidents. Second, redundancy may lead to shirking of responsibility among workers.
Third, redundancy may lead to increased production pressures, resulting in a system that operates at higher speeds, but less safely.[4] Voting logic uses performance monitoring to determine how to reconfigure individual components so that operation continues without violating the specification limits of the overall system. Voting logic often involves computers, but systems composed of items other than computers may also be reconfigured using voting logic. Circuit breakers are an example of a form of non-computer voting logic. The simplest voting logic in computing systems involves two components: a primary and an alternate. Both run similar software, but the output from the alternate remains inactive during normal operation. The primary monitors itself and periodically sends an activity message to the alternate as long as everything is OK. All outputs from the primary stop, including the activity message, when the primary detects a fault. The alternate activates its output and takes over from the primary after a brief delay when the activity message ceases. Errors in voting logic can cause both outputs to be active or inactive at the same time, or cause outputs to flutter on and off. A more reliable form of voting logic involves an odd number of devices, three or more. All perform identical functions, and the outputs are compared by the voting logic. The voting logic establishes a majority when there is a disagreement, and the majority acts to deactivate the output from the device(s) that disagree. A single fault will not interrupt normal operation. This technique is used with avionics systems, such as those responsible for the operation of the Space Shuttle. Each duplicate component added to the system decreases the probability of system failure: assuming independent failures and that a single component suffices to keep the system running, a set of n redundant components, each with failure probability p, fails with probability p^n. This formula assumes independence of failure events; that is, the probability of a component B failing given that a component A has already failed is the same as that of B failing when A has not failed. There are situations where this is unreasonable, such as using two power supplies connected to the same socket in such a way that if one power supply failed, the other would too. It also assumes that only one component is needed to keep the system running. Higher availability can be achieved through redundancy. Suppose there are three redundant components, A, B and C. The availability of the overall system can then be calculated as: availability of redundant components = 1 − (1 − availability of component A) × (1 − availability of component B) × (1 − availability of component C).[18][19] As a corollary, for N parallel components each having availability X: availability of parallel components = 1 − (1 − X)^N. Using redundant components can exponentially increase the availability of the overall system.[19] For example, if each host has only 50% availability, using 10 hosts in parallel achieves 99.9023% availability. Note that redundancy does not always lead to higher availability; in fact, redundancy increases complexity, which in turn can reduce availability. According to Marc Brooker, certain conditions must be ensured in order to take advantage of redundancy.[20]
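The availability arithmetic above, and the majority voting described earlier in this section, are straightforward to express in code. The following Python sketch is illustrative only (the function names are our own) and assumes independent failures:

from collections import Counter

def parallel_availability(availabilities):
    """Availability of independent redundant components where any one
    component suffices: A = 1 - product of (1 - A_i)."""
    failure = 1.0
    for a in availabilities:
        failure *= (1.0 - a)
    return 1.0 - failure

def majority_vote(outputs):
    """Triple-modular-redundancy style voter: return the value produced
    by a majority of the (odd number of) components, or None when
    multiple simultaneous faults prevent a majority."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

# Ten hosts at 50% availability each: 1 - 0.5**10 = 0.9990234375 (~99.9023%).
print(parallel_availability([0.5] * 10))

# One faulty channel is out-voted by the other two.
print(majority_vote([42, 42, 7]))  # -> 42

Both functions inherit the independence assumption discussed above; correlated failures, such as two power supplies on one socket, invalidate the formula.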
https://en.wikipedia.org/wiki/Redundancy_(engineering)
Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing function bounds. Numerical methods involving interval arithmetic can guarantee relatively reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic represents each value as a range of possibilities. Mathematically, instead of working with an uncertain real-valued variable x, interval arithmetic works with an interval [a, b] that defines the range of values that x can have. In other words, any value of the variable x lies in the closed interval between a and b. A function f, when applied to x, produces an interval [c, d] which includes all the possible values of f(x) for all x ∈ [a, b]. Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems. The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range, since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset. This treatment is typically limited to real intervals, so quantities of the form [a, b], where a = −∞ and b = ∞ are allowed, are used. With one of a, b infinite, the interval is an unbounded interval; with both infinite, the interval is the extended real number line. Since a real number r can be interpreted as the interval [r, r], intervals and real numbers can be freely combined. Consider the calculation of a person's body mass index (BMI). BMI is calculated as a person's body weight in kilograms divided by the square of their height in meters. Suppose a person uses a scale that has a precision of one kilogram, where intermediate values cannot be discerned and the true weight is rounded to the nearest whole number. For example, 79.6 kg and 80.3 kg are indistinguishable, as the scale can only display values to the nearest kilogram. It is unlikely that when the scale reads 80 kg, the person weighs exactly 80.0 kg. Thus, the scale displaying 80 kg indicates a weight between 79.5 kg and 80.5 kg, or the interval [79.5, 80.5). The BMI of a man who weighs 80 kg and is 1.80 m tall is approximately 24.7. A weight of 79.5 kg and the same height yields a BMI of 24.537, while a weight of 80.5 kg yields 24.846. Since the BMI is continuous and increasing for all weights within the specified interval, the true BMI must lie within the interval [24.537, 24.846].
Since the entire interval is less than 25, which is the cutoff between normal and excessive weight, it can be concluded with certainty that the man is of normal weight. The error in this example does not affect the conclusion (normal weight), but this is not generally true. If the man were slightly heavier, the BMI's range might include the cutoff value of 25. In such a case, the scale's precision would be insufficient to make a definitive conclusion. The BMI range in this example could be reported as [24.5, 24.9], since this interval is a superset of the calculated interval. The range could not, however, be reported as [24.6, 24.8], as that interval does not contain all possible BMI values. Height and body weight both affect the value of the BMI. Though the example above only considered variation in weight, height is also subject to uncertainty. Height measurements in meters are usually rounded to the nearest centimeter: a recorded measurement of 1.79 meters represents a height in the interval [1.785, 1.795). Since the BMI uniformly increases with respect to weight and decreases with respect to height, the error interval can be calculated by substituting the lowest and highest values of each interval, and then selecting the lowest and highest results as boundaries. The BMI must therefore lie in the interval [79.5/1.795², 80.5/1.785²] ≈ [24.674, 25.265]. In this case, the man may have normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion. A binary operation ⋆ on two intervals, such as addition or multiplication, is defined by [x₁, x₂] ⋆ [y₁, y₂] = {x ⋆ y : x ∈ [x₁, x₂], y ∈ [y₁, y₂]}. In other words, it is the set of all possible values of x ⋆ y, where x and y are in their corresponding intervals. If ⋆ is monotone for each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is [x₁, x₂] ⋆ [y₁, y₂] = [min{x₁ ⋆ y₁, x₁ ⋆ y₂, x₂ ⋆ y₁, x₂ ⋆ y₂}, max{x₁ ⋆ y₁, x₁ ⋆ y₂, x₂ ⋆ y₁, x₂ ⋆ y₂}], provided that x ⋆ y is defined for all x ∈ [x₁, x₂] and y ∈ [y₁, y₂]. For practical applications, this can be simplified further: for addition and subtraction it suffices to take [x₁ + y₁, x₂ + y₂] and [x₁ − y₂, x₂ − y₁], while multiplication and division require a case analysis on the signs of the endpoints, and division by an interval containing zero yields an unbounded result. This last case loses useful information about the exclusion of (1/y₁, 1/y₂). Thus, it is common to work with [−∞, 1/y₁] and [1/y₂, ∞] as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form ⋃ᵢ [aᵢ, bᵢ]. The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite.[1] Interval multiplication often only requires two multiplications. If x₁ and y₁ are nonnegative, then [x₁, x₂] · [y₁, y₂] = [x₁ · y₁, x₂ · y₂]. The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.
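The endpoint formulas above translate directly into code. The following Python sketch of a real interval type is our own illustration (production interval libraries additionally perform outward rounding and handle division by intervals containing zero); it is followed by the BMI computation from the example:

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [x1, x2] + [y1, y2] = [x1 + y1, x2 + y2]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [x1, x2] - [y1, y2] = [x1 - y2, x2 - y1]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Evaluate all endpoint combinations and keep the extremes.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        # Valid only when the divisor interval does not contain zero.
        if other.lo <= 0.0 <= other.hi:
            raise ValueError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

# BMI example: weight 80 +/- 0.5 kg, height 1.79 +/- 0.005 m.
weight = Interval(79.5, 80.5)
height = Interval(1.785, 1.795)
bmi = weight / (height * height)
print(bmi)  # approximately Interval(lo=24.674, hi=25.265)

Note that squaring via self-multiplication exhibits the overestimation discussed below for even powers when the interval straddles zero; here the height interval is strictly positive, so the result is tight.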
With the help of these definitions, it is already possible to calculate the range of simple functions, such as f(a,b,x) = a·x + b. For example, if a = [1,2], b = [5,7] and x = [2,3]:

f([1,2], [5,7], [2,3]) = [1,2] · [2,3] + [5,7] = [2,6] + [5,7] = [7,13].

To shorten the notation of intervals, brackets can be used: [x] ≡ [x1, x2] can be used to represent an interval. Note that in such a compact notation, [x] should not be confused with the single-point interval [x1, x1]. For the set of all intervals, we can use

[ℝ] := { [x1, x2] : x1 ≤ x2, with x1, x2 ∈ ℝ ∪ {−∞, ∞} }

as an abbreviation. For a vector of intervals ([x]1, …, [x]n) ∈ [ℝ]ⁿ we can use a bold font: [x].

Interval functions beyond the four basic operators may also be defined. For monotonic functions in one variable, the range of values is simple to compute. If f : ℝ → ℝ is monotonically increasing (resp. decreasing) in the interval [x1, x2], then for all y1, y2 ∈ [x1, x2] such that y1 < y2, f(y1) ≤ f(y2) (resp. f(y2) ≤ f(y1)). The range corresponding to the interval [y1, y2] ⊆ [x1, x2] can therefore be calculated by applying the function to its endpoints:

[f]([y1, y2]) = [ min{f(y1), f(y2)}, max{f(y1), f(y2)} ].

From this, basic interval versions of monotonic elementary functions follow directly by endpoint evaluation; the exponential, the logarithm (on positive intervals), and odd powers are examples.

For even powers, the range of values being considered is important and needs to be dealt with before doing any multiplication. For example, xⁿ for x ∈ [−1,1] should produce the interval [0,1] when n = 2, 4, 6, …. But if [−1,1]ⁿ is computed by repeated interval multiplication of the form [−1,1]·[−1,1]·⋯·[−1,1], then the result is [−1,1], wider than necessary.

More generally, one can say that, for piecewise monotonic functions, it is sufficient to consider the endpoints x1, x2 of an interval, together with the so-called critical points within the interval, being those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at (1/2 + n)π or nπ for n ∈ ℤ, respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is [−1,1] if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values, namely −1, 0, and 1.

In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If f : ℝⁿ → ℝ is a function from a real vector to a real number, then [f] : [ℝ]ⁿ → [ℝ] is called an interval extension of f if

[f]([x]) ⊇ { f(x) : x ∈ [x] }.

This definition of the interval extension does not give a precise result.
For example, both [f]([x1, x2]) = [e^x1, e^x2] and [g]([x1, x2]) = [−∞, ∞] are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, [f] should be chosen, as it gives the tightest possible result.

Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions, and operators.

The Taylor interval extension (of degree k) of a k+1 times differentiable function f is defined by

[f]([x]) := f(y) + ∑ from i=1 to k of (1/i!) Dⁱf(y) · ([x] − y)ⁱ + [r]([x], y)

for some y ∈ [x], where Dⁱf(y) is the i-th order differential of f at the point y and [r] is an interval extension of the Taylor remainder (1/(k+1)!) D^(k+1) f(ξ) · (x − y)^(k+1). The vector ξ lies between x and y with x, y ∈ [x], so ξ is also contained in [x]. Usually one chooses y to be the midpoint of the interval and uses the natural interval extension to assess the remainder. The special case of the Taylor interval extension of degree k = 0 is also referred to as the mean value form.

An interval can be defined as a set of points within a specified distance of the center, and this definition can be extended from real numbers to complex numbers.[2] Another extension defines intervals as rectangles in the complex plane. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers.[3] Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. One can either define complex interval arithmetic using rectangles or using disks, both with their respective advantages and disadvantages.[3]

The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers.
It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic.[3] It can be shown that, as is the case with real interval arithmetic, there is no distributivity between the addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers.[3] Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties of ordinary complex conjugates do not hold for complex interval conjugates.[3]

Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, but at the expense of sacrificing other useful properties of ordinary arithmetic.[3]

The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.

To work effectively in a real-life implementation, intervals must be compatible with floating-point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available for it. The range of values of the function f(x, y) = x + y for x ∈ [0.1, 0.8] and y ∈ [0.06, 0.08] is, for example, [0.16, 0.88]. If the same calculation is done with single-digit precision, the result would normally be [0.2, 0.9]. But [0.2, 0.9] ⊉ [0.16, 0.88], so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of f([0.1, 0.8], [0.06, 0.08]) would be lost. Instead, the outward rounded solution [0.1, 0.9] is used.

The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating-point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e., up), or rounding towards negative infinity (i.e., down). The required outward rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval [ε1, ε2] can be added.

The so-called "dependency" problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently, then this can lead to an unwanted expansion of the resulting intervals.
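A minimal self-contained sketch of this effect in C (same hypothetical interval type as before): the expression x − x is identically zero, but evaluated intervalwise, with each occurrence of x taken independently, it widens to [−1, 1].

    #include <stdio.h>

    typedef struct { double lo, hi; } interval;

    static interval iv_sub(interval x, interval y) {
        return (interval){ x.lo - y.hi, x.hi - y.lo };
    }

    int main(void) {
        interval x = {0, 1};
        interval r = iv_sub(x, x);   /* each occurrence treated independently */
        /* The exact range of x - x is [0, 0]; interval evaluation gives [-1, 1]. */
        printf("[%g, %g]\n", r.lo, r.hi);
        return 0;
    }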
As an illustration, take the function f defined by f(x) = x² + x. The values of this function over the interval [−1, 1] are [−1/4, 2]. As the natural interval extension, it is calculated as:

[−1, 1]² + [−1, 1] = [0, 1] + [−1, 1] = [−1, 2],

which is slightly larger; we have instead calculated the infimum and supremum of the function h(x, y) = x² + y over x, y ∈ [−1, 1]. There is a better expression of f in which the variable x only appears once, namely by rewriting f(x) = x² + x as addition and squaring in the quadratic:

f(x) = (x + 1/2)² − 1/4.

So the suitable interval calculation is

([−1, 1] + 1/2)² − 1/4 = [−1/2, 3/2]² − 1/4 = [0, 9/4] − 1/4 = [−1/4, 2]

and gives the correct values. In general, it can be shown that the exact range of values can be achieved if each variable appears only once and if f is continuous inside the box. However, not every function can be rewritten this way. The dependency problem, causing over-estimation of the value range, can go as far as covering a large range, preventing more meaningful conclusions.

An additional increase in the range stems from solution sets that do not have the form of an interval vector. The solution set of the linear system

x = p, y = p, with p ∈ [−1, 1],

is precisely the line segment between the points (−1, −1) and (1, 1). Using interval methods results in the unit square, [−1, 1] × [−1, 1]. This is known as the wrapping effect.

A linear interval system consists of a matrix interval extension [A] ∈ [ℝ]^(n×m) and an interval vector [b] ∈ [ℝ]ⁿ. We want the smallest cuboid [x] ∈ [ℝ]ᵐ containing all vectors x ∈ ℝᵐ for which there is a pair (A, b) with A ∈ [A] and b ∈ [b] satisfying A·x = b.

For quadratic systems – in other words, for n = m – such an interval vector [x], which covers all possible solutions, can be found simply with the interval Gauss method. This replaces the numerical operations, in that the linear algebra method known as Gaussian elimination becomes its interval version. However, since this method uses the interval entities [A] and [b] repeatedly in the calculation, it can produce poor results for some problems. Hence the result of the interval-valued Gauss method only provides a first rough estimate, since although it contains the entire solution set, it also has a large area outside it.

A rough solution [x] can often be improved by an interval version of the Gauss–Seidel method. The motivation for this is that the i-th row of the interval extension of the linear equation,

[ai1]·[x1] + ⋯ + [ain]·[xn] = [bi],

can be solved for the variable xi if the division 1/[aii] is allowed. It is therefore simultaneously

xj ∈ [xj] and xi ∈ (1/[aii]) · ([bi] − ∑j≠i [aij]·[xj]).

So we can now replace [xi] by

[xi] ∩ (1/[aii]) · ([bi] − ∑j≠i [aij]·[xj]),

and so update the vector [x] element by element.
Since the procedure is more efficient for a diagonally dominant matrix, instead of the system [A]·x = [b], one can often try multiplying it by an appropriate rational matrix M, leaving the resulting matrix equation

(M·[A])·x = M·[b]

to solve. If one chooses, for example, M = A⁻¹ for the central matrix A ∈ [A], then M·[A] is an outer extension of the identity matrix.

These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals, it can be useful to reduce an interval-linear system to a finite (albeit large) number of equivalent real-number linear systems. If all the matrices A ∈ [A] are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors. This is only suitable for systems of smaller dimension, since with a fully occupied n×n matrix, 2^(n²) real matrices need to be inverted, with 2ⁿ vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being developed.[4]

An interval variant of Newton's method for finding the zeros in an interval vector [x] can be derived from the mean value extension.[5] For an unknown vector z ∈ [x], the mean value extension applied at y ∈ [x] gives

f(z) ∈ f(y) + [Jf]([x]) · (z − y),

where [Jf]([x]) is an interval extension of the Jacobian. For a zero z, that is f(z) = 0, z must therefore satisfy

0 ∈ f(y) + [Jf]([x]) · (z − y).

This is equivalent to z ∈ y − [Jf]([x])⁻¹ · f(y). An outer estimate of [Jf]([x])⁻¹ · f(y) can be determined using linear methods.

In each step of the interval Newton method, an approximate starting value [x] ∈ [ℝ]ⁿ is replaced by [x] ∩ (y − [Jf]([x])⁻¹ · f(y)), and so the result can be improved. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result contains all zeros in the initial range. Conversely, it proves that no zeros of f were in the initial range [x] if a Newton step produces the empty set. The method converges on all zeros in the starting region. Division by zero can lead to the separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method.

As an example, consider the function f(x) = x² − 2, the starting range [x] = [−2, 2], and the point y = 0. We then have Jf(x) = 2x and the first Newton step gives

[−2, 2] ∩ (0 − (1/[−4, 4]) · (−2)) = [−2, 2] ∩ ([−∞, −1/2] ∪ [1/2, ∞]) = [−2, −0.5] ∪ [0.5, 2].

More Newton steps are used separately on x ∈ [−2, −0.5] and [0.5, 2]. These converge to arbitrarily small intervals around −√2 and +√2.
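A minimal sketch of one such Newton step in C for f(x) = x² − 2, restricted to a subrange where the derivative interval does not contain zero (so the extended division above is not needed); the interval type and function names are illustrative.

    #include <stdio.h>

    typedef struct { double lo, hi; } interval;

    /* One interval Newton step for f(x) = x*x - 2 on a box [x]
       whose derivative interval Jf([x]) = 2*[x] satisfies 0 < lo. */
    static interval newton_step(interval x) {
        double y  = 0.5 * (x.lo + x.hi);          /* midpoint */
        double fy = y * y - 2.0;
        interval J = {2.0 * x.lo, 2.0 * x.hi};
        /* y - fy / J: with fy a point and J > 0, extremes are at J's endpoints. */
        double a = y - fy / J.lo, b = y - fy / J.hi;
        interval n = { a < b ? a : b, a < b ? b : a };
        /* Intersect with the current box (never empty here, sqrt(2) stays inside). */
        if (n.lo < x.lo) n.lo = x.lo;
        if (n.hi > x.hi) n.hi = x.hi;
        return n;
    }

    int main(void) {
        interval x = {0.5, 2.0};
        for (int i = 0; i < 5; i++) {
            x = newton_step(x);
            printf("step %d: [%.10f, %.10f]\n", i + 1, x.lo, x.hi);
        }
        return 0;   /* the boxes shrink toward sqrt(2) = 1.4142135623... */
    }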
The interval Newton method can also be used with thick functions such as g(x) = x² − [2, 3], which would in any case have interval results. The result then produces intervals containing [−√3, −√2] ∪ [√2, √3].

The various interval methods deliver conservative results, as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals. Covering an interval vector [x] by smaller boxes [x1], …, [xk] so that

[x] = [x1] ∪ ⋯ ∪ [xk]

is then valid for the range of values. So, for the interval extensions described above, the following holds:

f([x]) ⊆ [f]([x1]) ∪ ⋯ ∪ [f]([xk]).

Since [f]([x]) is often a genuine superset of the right-hand side, this usually leads to an improved estimate. Such a cover can be generated by the bisection method: thick elements [xi1, xi2] of the interval vector [x] = ([x11, x12], …, [xn1, xn2]) are split in the center into the two intervals [xi1, (xi1 + xi2)/2] and [(xi1 + xi2)/2, xi2]. If the result is still not suitable, then further gradual subdivision is possible. A cover of 2^r intervals results from r divisions of vector elements, substantially increasing the computation costs. With very wide intervals, it can be helpful to split all intervals into several subintervals of a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.

Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation, or stability analysis) to treat estimates with no exact numerical value.[6]

Interval arithmetic is used with error analysis to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives the current size of the rounding error directly: for a given interval [a, b], the error is |b − a|. Interval analysis adds to, rather than substitutes for, traditional methods for error reduction, such as pivoting.

Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely.[1]

If the behavior of such a system affected by tolerances satisfies, for example, f(x, p) = 0 for p ∈ [p] and unknown x, then the set of possible solutions

{ x : there exists p ∈ [p] with f(x, p) = 0 }

can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered.
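Returning to the floating-point issue discussed earlier: on an IEEE 754 system, the outward rounding can be obtained with C99's fenv.h by switching the rounding direction per bound. A minimal sketch (error handling omitted; GCC may additionally need -frounding-math):

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void) {
        double xlo = 0.1, xhi = 0.8, ylo = 0.06, yhi = 0.08;

        fesetround(FE_DOWNWARD);            /* lower bound rounds toward -inf */
        volatile double lo = xlo + ylo;     /* volatile blocks constant folding */

        fesetround(FE_UPWARD);              /* upper bound rounds toward +inf */
        volatile double hi = xhi + yhi;

        fesetround(FE_TONEAREST);           /* restore the default mode */
        printf("[%.17g, %.17g]\n", (double)lo, (double)hi);
        return 0;   /* the printed interval encloses the exact range [0.16, 0.88] */
    }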
Interval arithmetic can also be used with affiliation functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements x ∈ [x] and x ∉ [x], intermediate values are also possible, to which real numbers μ ∈ [0, 1] are assigned. μ = 1 corresponds to definite membership while μ = 0 is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval.

For fuzzy arithmetic,[7] only a finite number of discrete affiliation stages μi ∈ [0, 1] are considered. The form of such a distribution for an indistinct value can then be represented by a sequence of intervals

[x(1)] ⊇ [x(2)] ⊇ ⋯ ⊇ [x(m)].

The interval [x(i)] corresponds exactly to the fluctuation range for the stage μi. The appropriate distribution for a function f(x1, …, xn) concerning indistinct values x1, …, xn and the corresponding sequences [x1(i)], …, [xn(i)] can be approximated by the sequence [y(1)] ⊇ ⋯ ⊇ [y(m)], where

[y(i)] = [f]([x1(i)], …, [xn(i)])

can be calculated by interval methods. The value [y(1)] corresponds to the result of an interval calculation.

Warwick Tucker used interval arithmetic in order to solve the 14th of Smale's problems, that is, to show that the Lorenz attractor is a strange attractor.[8] Thomas Hales used interval arithmetic in order to prove the Kepler conjecture.

Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.

Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young.[9] Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer;[10] intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958).[11]

The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966.[12][13] He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic.[14] Its merit was that, starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding. Independently, in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals,[15] though Moore found the first non-trivial applications.

In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch[16][17] and Götz Alefeld[18] at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R.
Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimization, including what is now known as Hansen's method, perhaps the most widely used interval algorithm.[5] Classical methods for global optimization often face the problem that, while seeking the largest (or smallest) global value, they can only find a local optimum and cannot guarantee that no better values exist; Helmut Ratschek and Jon George Rokne developed branch and bound methods, which until then had only been applied to integer values, by using intervals to provide applications for continuous values. In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions of initial value problems for ordinary differential equations.[19]

The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimization, has contributed significantly to the unification of notation and terminology used in interval arithmetic.[20]

In recent years, work has concentrated in particular on the estimation of preimages of parameterized functions and on robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France.[21]

There are many software packages that permit the development of numerical applications using interval arithmetic.[22] These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.

Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran, and Pascal.[23] The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library supported C-XSC on many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard.

Another C++ class library was created in 1993 at the Hamburg University of Technology, called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user-friendly. It emphasized the efficient use of hardware, portability, and independence from any particular representation of intervals.

The Boost collection of C++ libraries contains a template class for intervals. Its authors aim to have interval arithmetic in the standard C++ language.[24]

The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.

GAOL[25] is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming.
The Moore library[26] is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the concepts feature of C++.

The Julia programming language[27] has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package.[28]

In addition, computer algebra systems, such as Euler Mathematical Toolbox, FriCAS, Maple, Mathematica, Maxima[29] and MuPAD, can handle intervals. A Matlab extension, Intlab,[30] builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface.[30][31] A library for the functional language OCaml was written in assembly language and C.[32] MPFI is a library for arbitrary-precision interval arithmetic; it is written in C and is based on MPFR.[33]

A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015.[34] Two reference implementations are freely available.[35] These have been developed by members of the standard's working group: the libieeep1788[36] library for C++, and the interval package[37] for GNU Octave. A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed production of implementations.[38]

Several international conferences and workshops take place every year around the world. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), and REC (International Workshop on Reliable Engineering Computing).
https://en.wikipedia.org/wiki/Interval_arithmetic
Constrained Application Protocol (CoAP) is a specialized UDP-based Internet application protocol for constrained devices, as defined in RFC 7252 (published in 2014). It enables those constrained devices, called "nodes", to communicate with the wider Internet using similar protocols. CoAP is designed for use between devices on the same constrained network (e.g., low-power, lossy networks), between devices and general nodes on the Internet, and between devices on different constrained networks both joined by an internet. CoAP is also used via other mechanisms, such as SMS on mobile communication networks.

CoAP is an application-layer protocol that is intended for use in resource-constrained Internet devices, such as wireless sensor network nodes. CoAP is designed to easily translate to HTTP for simplified integration with the web, while also meeting specialized requirements such as multicast support, very low overhead, and simplicity.[1][2] Multicast, low overhead, and simplicity are important for Internet of things (IoT) and machine-to-machine (M2M) communication, which tend to be embedded and have much less memory and power supply than traditional Internet devices have. Therefore, efficiency is very important. CoAP can run on most devices that support UDP or a UDP analogue.

The Internet Engineering Task Force (IETF) Constrained RESTful Environments Working Group (CoRE) has done the major standardization work for this protocol. In order to make the protocol suitable for IoT and M2M applications, various new functions have been added. The core of the protocol is specified in RFC 7252. Various extensions have been proposed, particularly resource observation (RFC 7641) and block-wise transfers (RFC 7959).

CoAP makes use of two message types, requests and responses, using a simple binary header format. CoAP is by default bound to UDP and optionally to DTLS, providing a high level of communications security. When bound to UDP, the entire message must fit within a single datagram. When used with 6LoWPAN as defined in RFC 4944, messages should fit into a single IEEE 802.15.4 frame to minimize fragmentation.

The smallest CoAP message is 4 bytes in length, if the token, options and payload fields are omitted, i.e. if it only consists of the CoAP header. The header is followed by the token value (0 to 8 bytes), which may be followed by a list of options in an optimized type–length–value format. Any bytes after the header, token and options (if any) are considered the message payload, which is prefixed by the one-byte "payload marker" (0xFF). The length of the payload is implied by the datagram length.

The first 4 bytes are mandatory in all CoAP datagrams; they constitute the fixed-size header. These fields can be extracted from these 4 bytes in C via bit-masking macros (a sketch of such macros is given at the end of this article). The three most significant bits of the code field form a number known as the "class", which is analogous to the class of HTTP status codes. The five least significant bits form a code that communicates further detail about the request or response. The entire code is typically communicated in the form class.code. The latest CoAP request/response codes are maintained in the IANA registry, though the list below gives some examples:

Every request carries a token (which may be zero length) whose value was generated by the client. The server must echo every token value without any modification back to the client in the corresponding response. The token is intended for use as a client-local identifier to match requests and responses, especially for concurrent requests.
Matching requests and responses is not done with the message ID, because a response may be sent in a different message than the acknowledgement (which uses the message ID for matching). For example, this could be done to prevent retransmissions if obtaining the result takes some time. Such a detached response is called a "separate response". In contrast, transmitting the response directly in the acknowledgement is called a "piggybacked response", which is expected to be preferred for efficiency reasons.

There exist proxy implementations which provide forward or reverse proxy functionality for the CoAP protocol, and also implementations which translate between protocols like HTTP and CoAP. The following projects provide proxy functionality:

In many CoAP application domains it is essential to have the ability to address several CoAP resources as a group, instead of addressing each resource individually (e.g. to turn on all the CoAP-enabled lights in a room with a single CoAP request triggered by toggling the light switch). To address this need, the IETF has developed an optional extension for CoAP in the form of an experimental RFC: Group Communication for CoAP - RFC 7390.[3] This extension relies on IP multicast to deliver the CoAP request to all group members. The use of multicast has certain benefits, such as reducing the number of packets needed to deliver the request to the members. However, multicast also has its limitations, such as poor reliability and being cache-unfriendly. An alternative method for CoAP group communication that uses unicasts instead of multicasts relies on having an intermediary where the groups are created. Clients send their group requests to the intermediary, which in turn sends individual unicast requests to the group members, collects the replies from them, and sends back an aggregated reply to the client.[4]

CoAP defines four security modes:[5] NoSec, PreSharedKey, RawPublicKey, and Certificate. Research has been conducted on optimizing DTLS by implementing security associations as CoAP resources, rather than using DTLS as a security wrapper for CoAP traffic. This research has indicated improvements of up to 6.5 times over non-optimized implementations.[6]

In addition to DTLS, RFC 8613[7] defines the Object Security for Constrained RESTful Environments (OSCORE) protocol, which provides security for CoAP at the application layer.

Although the protocol standard includes provisions for mitigating the threat of DDoS amplification attacks,[8] these provisions are not implemented in practice,[9] resulting in the presence of over 580,000 targets primarily located in China and attacks up to 320 Gbit/s.[10]
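Referring back to the fixed 4-byte header described earlier, here is a hedged sketch of the kind of C extraction macros the text mentions. The field layout follows RFC 7252 (2-bit version, 2-bit type, 4-bit token length, 8-bit code split as 3-bit class and 5-bit detail, 16-bit message ID in network byte order); the macro names themselves are illustrative, not from a specific implementation.

    #include <stdint.h>

    /* p points to the first byte of a CoAP datagram. */
    #define COAP_VERSION(p)     ((((const uint8_t *)(p))[0] & 0xC0) >> 6)
    #define COAP_TYPE(p)        ((((const uint8_t *)(p))[0] & 0x30) >> 4)
    #define COAP_TKL(p)          (((const uint8_t *)(p))[0] & 0x0F)
    #define COAP_CODE_CLASS(p)  ((((const uint8_t *)(p))[1] & 0xE0) >> 5)
    #define COAP_CODE_DETAIL(p)  (((const uint8_t *)(p))[1] & 0x1F)
    #define COAP_MESSAGE_ID(p)  ((uint16_t)((((const uint8_t *)(p))[2] << 8) | \
                                             ((const uint8_t *)(p))[3]))

For example, a code byte of 0x84 decodes as class 4, detail 4, i.e. the 4.04 "Not Found" response.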
https://en.wikipedia.org/wiki/Constrained_Application_Protocol
In computing, a text-based user interface (TUI) (alternately terminal user interface, to reflect a dependence upon the properties of computer terminals and not just text) is a retronym describing a type of user interface (UI) common as an early form of human–computer interaction, before the advent of bitmapped displays and modern conventional graphical user interfaces (GUIs). Like modern GUIs, TUIs can use the entire screen area and may accept mouse and other inputs. They may also use color and often structure the display using box-drawing characters such as ┌ and ╣. The modern context of use is usually a terminal emulator.

From a text application's point of view, a text screen (and communications with it) can belong to one of three types (here ordered in order of decreasing accessibility):

Under Linux and other Unix-like systems, a program easily accommodates any of the three cases because the same interface (namely, standard streams) controls the display and keyboard. See below for a comparison to Windows. Many TUI programming libraries are available to help developers build TUI applications.

The American National Standards Institute (ANSI) standard ANSI X3.64 defines a standard set of escape sequences that can be used to drive terminals to create TUIs (see ANSI escape code). Escape sequences may be supported for all three cases mentioned in the above section, allowing arbitrary cursor movements and color changes. However, not all terminals follow this standard, and many non-compatible but functionally equivalent sequences exist.

On IBM Personal Computers and compatibles, the Basic Input Output System (BIOS) and DOS system calls provide a way to write text on the screen, and the ANSI.SYS driver could process standard ANSI escape sequences. However, programmers soon learned that writing data directly to the screen buffer was far faster and simpler to program, and less error-prone; see VGA-compatible text mode for details. This change in programming methods resulted in many DOS TUI programs. The Windows console environment is notorious for its emulation of certain EGA/VGA text mode features, particularly random access to the text buffer, even if the application runs in a window. On the other hand, programs running under Windows (both native and DOS applications) have much less control of the display and keyboard than Linux and DOS programs can have, because of the aforementioned Windows console layer.

Most often those programs used a blue background for the main screen, with white or yellow characters, although they commonly also offered user color customization. They often used box-drawing characters in IBM's code page 437. Later, the interface became deeply influenced by graphical user interfaces (GUI), adding pull-down menus, overlapping windows, dialog boxes and GUI widgets operated by mnemonics or keyboard shortcuts. Soon mouse input was added – either at text resolution as a simple colored box or at graphical resolution, thanks to the ability of the Enhanced Graphics Adapter (EGA) and Video Graphics Array (VGA) display adapters to redefine the text character shapes by software – providing additional functions.

Some notable programs of this kind were Microsoft Word, DOS Shell, WordPerfect, Norton Commander, the Turbo Vision-based Borland Turbo Pascal and Turbo C (the latter included the conio library), Lotus 1-2-3 and many others. Some of these interfaces survived even during the Microsoft Windows 3.1x period in the early 1990s. For example, the Microsoft C 6.0 compiler, used to write true GUI programs under 16-bit Windows, still has its own TUI.
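A minimal sketch of the ANSI X3.64 escape sequences discussed above, in C: ESC [ starts a control sequence; the example clears the screen, positions the cursor, sets colors, and draws a small box (assuming a terminal that honors these sequences and a UTF-8 locale for the box-drawing characters).

    #include <stdio.h>

    #define ESC "\x1b"

    int main(void) {
        printf(ESC "[2J");          /* clear the screen */
        printf(ESC "[5;10H");       /* move cursor to row 5, column 10 */
        printf(ESC "[44;93m");      /* blue background, bright yellow text */
        printf("┌─ Hello, TUI ─┐");
        printf(ESC "[6;10H");       /* next row, same column */
        printf("└──────────────┘");
        printf(ESC "[0m\n");        /* reset attributes */
        return 0;
    }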
Since its start, Microsoft Windows has included a console to display DOS software. Later versions added the Windows console as a native interface for command-line interface and TUI programs. The console usually opens in window mode, but it can be switched to a full, true text-mode screen and vice versa by pressing the Alt and Enter keys together. Full-screen mode is not available in Windows Vista and later, but may be used with some workarounds.[1]

Windows Terminal is a multi-tabbed terminal emulator that Microsoft has developed for Windows 10 and later[2] as a replacement for Windows Console. The Windows Subsystem for Linux, which Microsoft added to Windows in 2019, supports running Linux text-based apps on Windows, within Windows console, Windows Terminal, and other Windows-based terminals.

In Unix-like operating systems, TUIs are often constructed using the terminal control library curses, or ncurses (a mostly compatible library), or the alternative S-Lang library. The advent of the curses library with Berkeley Unix created a portable and stable API for writing TUIs. The ability to talk to various text terminal types using the same interfaces led to more widespread use of "visual" Unix programs, which occupied the entire terminal screen instead of using a simple line interface. This can be seen in text editors such as vi, mail clients such as pine or mutt, system management tools such as SMIT, SAM, FreeBSD's Sysinstall, and web browsers such as lynx. Some applications, such as w3m and older versions of pine and vi, use the less capable termcap library, performing many of the functions associated with curses within the application. Custom TUI applications based on widgets can be easily developed using the dialog program (based on ncurses) or the Whiptail program (based on S-Lang).

In addition, the rise in popularity of Linux brought many former DOS users to a Unix-like platform, which has fostered a DOS influence in many TUIs. The program minicom, for example, is modeled after the popular DOS program Telix. Some other TUI programs, such as the Twin desktop, were ported over.

Most Unix-like operating systems (Linux, FreeBSD, etc.) support virtual consoles, typically accessed through a Ctrl-Alt-F key combination. For example, under Linux up to 64 consoles may be accessed (12 via function keys), each displaying in full-screen text mode.

The free software program GNU Screen provides for managing multiple sessions inside a single TUI, and so can be thought of as being like a window manager for text-mode and command-line interfaces. Tmux can also do this.

The proprietary macOS text editor BBEdit includes a shell worksheet function that works as a full-screen shell window. The free Emacs text editor can run a shell inside one of its buffers to provide similar functionality. There are several shell implementations in Emacs, but only ansi-term is suitable for running TUI programs. The other common shell modes, shell and eshell, only emulate command lines, and TUI programs will complain "Terminal is not fully functional" or display a garbled interface. The free Vim and Neovim text editors have terminal windows (simulating xterm). The feature is intended for running jobs, parallel builds, or tests, but can also be used (with window splits and tab pages) as a lightweight terminal multiplexer.

VAX/VMS (later known as OpenVMS) had a similar facility to curses known as the Screen Management facility or SMG.
This could be invoked from the command line or called from programs using the SMG$ library.[3]

Another kind of TUI is the primary interface of the Oberon operating system, first released in 1988 and still maintained. Unlike most other text-based user interfaces, Oberon does not use a text-mode console or terminal, but requires a large bit-mapped display, on which text is the primary target for mouse clicks. Analogous to a link in hypertext, a command has the format Module.Procedure parameters ~ and is activated with a mouse middle-click. Text displayed anywhere on the screen can be edited, and if formatted with the required command syntax, can be middle-clicked and executed. Any text file containing suitably formatted commands can be used as a so-called tool text, thus serving as a user-configurable menu. Even the output of a previous command can be edited and used as a new command. This approach is radically different from both conventional dialogue-oriented console menus and command-line interfaces, but bears some similarities to the worksheet interface of the Macintosh Programmer's Workshop.[citation needed]

Since it does not use graphical widgets, only plain text, but offers comparable functionality to a GUI with a tiling window manager, it is referred to as a Text User Interface or TUI. For a short introduction, see the 2nd paragraph on page four of the first published Report on the Oberon System.[4] Oberon's UI influenced the design of the Acme text editor and email client for the Plan 9 from Bell Labs operating system.

Modern embedded systems are capable of displaying a TUI on a monitor like personal computers. This functionality is usually implemented using specialized integrated circuits, modules, or FPGAs. Video circuits or modules are usually controlled using a VT100-compatible command set over UART;[citation needed] FPGA designs usually allow direct video memory access.[citation needed]
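A minimal sketch of a curses/ncurses program of the kind discussed above (compile with -lncurses; the border uses the same box-drawing idea as the DOS-era TUIs):

    #include <ncurses.h>

    int main(void) {
        initscr();                 /* enter curses mode, take over the screen */
        cbreak();                  /* deliver keystrokes immediately */
        noecho();                  /* do not echo typed characters */
        curs_set(0);               /* hide the cursor */

        WINDOW *win = newwin(5, 30, 2, 4);   /* height, width, y, x */
        box(win, 0, 0);                      /* default box-drawing border */
        mvwprintw(win, 2, 3, "Hello from ncurses!");
        refresh();                 /* paint the standard screen first */
        wrefresh(win);             /* then the window on top of it */

        wgetch(win);               /* wait for a key press */
        delwin(win);
        endwin();                  /* restore the terminal */
        return 0;
    }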
https://en.wikipedia.org/wiki/Text-based_user_interface
Zigbee is an IEEE 802.15.4-based specification for a suite of high-level communication protocols used to create personal area networks with small, low-power digital radios, such as for home automation, medical device data collection, and other low-power low-bandwidth needs, designed for small-scale projects which need a wireless connection. Hence, Zigbee is a low-power, low-data-rate, and close proximity (i.e., personal area) wireless ad hoc network.

The technology defined by the Zigbee specification is intended to be simpler and less expensive than other wireless personal area networks (WPANs), such as Bluetooth, or more general wireless networking such as Wi-Fi (or Li-Fi). Applications include wireless light switches, home energy monitors, traffic management systems, and other consumer and industrial equipment that requires short-range low-rate wireless data transfer.

Its low power consumption limits transmission distances to 10–100 meters (33–328 ft) line-of-sight, depending on power output and environmental characteristics.[1] Zigbee devices can transmit data over long distances by passing data through a mesh network of intermediate devices to reach more distant ones. Zigbee is typically used in low data rate applications that require long battery life and secure networking. (Zigbee networks are secured by 128-bit symmetric encryption keys.) Zigbee has a defined rate of up to 250 kbit/s, best suited for intermittent data transmissions from a sensor or input device.

Zigbee was conceived in 1998, standardized in 2003, and revised in 2006. The name refers to the waggle dance of honey bees after their return to the beehive.[2]

Zigbee is a low-power wireless mesh network standard targeted at battery-powered devices in wireless control and monitoring applications. Zigbee delivers low-latency communication. Zigbee chips are typically integrated with radios and with microcontrollers. Zigbee operates in the industrial, scientific and medical (ISM) radio bands, with the 2.4 GHz band being primarily used for lighting and home automation devices in most jurisdictions worldwide. Devices for commercial utility metering and medical device data collection often use sub-GHz frequencies (902–928 MHz in North America, Australia, and Israel; 868–870 MHz in Europe; 779–787 MHz in China), though those regions and countries also use 2.4 GHz for most globally sold Zigbee devices meant for home use. Data rates vary from around 20 kbit/s for the sub-GHz bands to around 250 kbit/s per channel in the 2.4 GHz band.

Zigbee builds on the physical layer and media access control defined in IEEE standard 802.15.4 for low-rate wireless personal area networks (WPANs). The specification includes four additional key components: network layer, application layer, Zigbee Device Objects (ZDOs) and manufacturer-defined application objects. ZDOs are responsible for some tasks, including keeping track of device roles, managing requests to join a network, and discovering and securing devices.

The Zigbee network layer natively supports both star and tree networks, and generic mesh networking. Every network must have one coordinator device. Within star networks, the coordinator must be the central node. Both trees and meshes allow the use of Zigbee routers to extend communication at the network level. Another defining feature of Zigbee is its facilities for carrying out secure communications, protecting the establishment and transport of cryptographic keys, ciphering frames, and controlling devices. It builds on the basic security framework defined in IEEE 802.15.4.
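As a concrete illustration of the 802.15.4 radio layout that Zigbee inherits (described in more detail later in this article): in the 2.4 GHz band, channels 11 through 26 are spaced 5 MHz apart starting at 2405 MHz. A sketch of that channel map:

    #include <stdio.h>

    /* Center frequency (MHz) of an IEEE 802.15.4 channel in the 2.4 GHz band:
       channels 11..26, spaced 5 MHz apart, starting at 2405 MHz. */
    static int channel_mhz(int channel) {
        return 2405 + 5 * (channel - 11);
    }

    int main(void) {
        for (int ch = 11; ch <= 26; ch++)
            printf("channel %2d: %d MHz\n", ch, channel_mhz(ch));
        return 0;   /* 16 channels: 2405 MHz ... 2480 MHz */
    }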
Zigbee-style self-organizing ad hoc digital radio networks were conceived in the 1990s. The IEEE 802.15.4-2003 Zigbee specification was ratified on December 14, 2004.[3] The Connectivity Standards Alliance (formerly Zigbee Alliance) announced availability of Specification 1.0 on June 13, 2005, known as the ZigBee 2004 Specification.

In September 2006, the Zigbee 2006 Specification was announced, obsoleting the 2004 stack.[4] The 2006 specification replaces the message and key–value pair structure used in the 2004 stack with a cluster library. The library is a set of standardised commands, attributes and global artifacts organised under groups known as clusters, with names such as Smart Energy, Home Automation, and Zigbee Light Link.[5]

In January 2017, the Connectivity Standards Alliance renamed the library to Dotdot and announced it as a new protocol to be represented by an emoticon (||:). They also announced that it will now additionally run over other network types using Internet Protocol[6] and will interconnect with other standards such as Thread.[7] Since its unveiling, Dotdot has functioned as the default application layer for almost all Zigbee devices.[8]

Zigbee Pro, also known as Zigbee 2007, was finalized in 2007.[9] A Zigbee Pro device may join and operate on a legacy Zigbee network and vice versa. Due to differences in routing options, a Zigbee Pro device must become a non-routing Zigbee End Device (ZED) on a legacy Zigbee network, and a legacy Zigbee device must become a ZED on a Zigbee Pro network.[10] It operates using the 2.4 GHz ISM band and adds a sub-GHz band.[11]

Zigbee protocols are intended for embedded applications requiring low power consumption and tolerating low data rates. The resulting network will use very little power – individual devices must have a battery life of at least two years to pass certification.[12][13][dubious–discuss]

Typical application areas include:

Zigbee is not for situations with high mobility among nodes. Hence, it is not suitable for tactical ad hoc radio networks in the battlefield, where high data rates and high mobility are present and needed.[citation needed][18]

The first Zigbee application profile, Home Automation, was announced November 2, 2007.[citation needed] Additional application profiles have since been published. The Zigbee Smart Energy 2.0 specifications define an Internet Protocol-based communication protocol to monitor, control, inform, and automate the delivery and use of energy and water. It is an enhancement of the Zigbee Smart Energy version 1 specifications.[19] It adds services for plug-in electric vehicle charging, installation, configuration and firmware download, prepay services, user information and messaging, load control, demand response, and common information and application profile interfaces for wired and wireless networks. It is being developed by partners including:

Zigbee Smart Energy relies on Zigbee IP, a network layer that routes standard IPv6 traffic over IEEE 802.15.4 using 6LoWPAN header compression.[20][21]

In 2009, the Radio Frequency for Consumer Electronics Consortium (RF4CE) and the Connectivity Standards Alliance (formerly Zigbee Alliance) agreed to jointly deliver a standard for radio frequency remote controls. Zigbee RF4CE is designed for a broad range of consumer electronics products, such as TVs and set-top boxes.
It promised many advantages over existing remote control solutions, including richer communication and increased reliability, enhanced features and flexibility, interoperability, and no line-of-sight barrier.[22] The Zigbee RF4CE specification uses a subset of Zigbee functionality, allowing it to run in smaller memory configurations in lower-cost devices, such as remote controls for consumer electronics.

The radio design used by Zigbee has few analog stages and uses digital circuits wherever possible. Products that integrate the radio and microcontroller into a single module are available.[23] The Zigbee qualification process involves a full validation of the requirements of the physical layer. All radios derived from the same validated semiconductor mask set would enjoy the same RF characteristics. Zigbee radios have very tight constraints on power and bandwidth. An uncertified physical layer that malfunctions can increase the power consumption of other devices on a Zigbee network. Thus, radios are tested with guidance given by Clause 6 of the 802.15.4-2006 standard.[24]

This standard specifies operation in the unlicensed 2.4 to 2.4835 GHz[25] (worldwide), 902 to 928 MHz (Americas and Australia) and 868 to 868.6 MHz (Europe) ISM bands. Sixteen channels are allocated in the 2.4 GHz band, spaced 5 MHz apart, though using only 2 MHz of bandwidth each. The radios use direct-sequence spread spectrum coding, with the digital stream feeding the modulator. Binary phase-shift keying (BPSK) is used in the 868 and 915 MHz bands, and offset quadrature phase-shift keying (OQPSK) that transmits two bits per symbol is used in the 2.4 GHz band.

The raw, over-the-air data rate is 250 kbit/s per channel in the 2.4 GHz band, 40 kbit/s per channel in the 915 MHz band, and 20 kbit/s in the 868 MHz band. The actual data throughput will be less than the maximum specified bit rate because of packet overhead and processing delays. For indoor applications at 2.4 GHz, transmission distance is 10–20 m, depending on the construction materials, the number of walls to be penetrated, and the output power permitted in that geographical location.[26] The output power of the radios is generally 0–20 dBm (1–100 mW).

There are three classes of Zigbee devices: the Zigbee coordinator (ZC), which forms the root of the network; the Zigbee router (ZR), which can relay data between other devices; and the Zigbee end device (ZED), which talks only to its parent node and can sleep most of the time.

The current Zigbee protocols support beacon-enabled and non-beacon-enabled networks. In non-beacon-enabled networks, an unslotted CSMA/CA channel access mechanism is used. In this type of network, Zigbee routers typically have their receivers continuously active, requiring additional power.[29] However, this allows for heterogeneous networks in which some devices receive continuously while others transmit when necessary. The typical example of a heterogeneous network is a wireless light switch: the Zigbee node at the lamp may receive constantly, since it is reliably powered by the mains supply to the lamp, while a battery-powered light switch would remain asleep until the switch is thrown. In this case, the switch wakes up, sends a command to the lamp, receives an acknowledgment, and returns to sleep. In such a network the lamp node will be at least a Zigbee router, if not the Zigbee coordinator; the switch node is typically a Zigbee end device.

In beacon-enabled networks, Zigbee routers transmit periodic beacons to confirm their presence to other network nodes. Nodes may sleep between beacons, thus extending their battery life.
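The beacon intervals quoted below follow from the 802.15.4 superframe structure: the interval is a base duration of 960 symbols scaled by 2 to the power of the beacon order BO, with BO ranging from 0 to 14. A sketch of that arithmetic, assuming the standard symbol rates of the three bands (62.5 ksymbol/s at 250 kbit/s, 40 ksymbol/s at 40 kbit/s, 20 ksymbol/s at 20 kbit/s):

    #include <stdio.h>

    /* Beacon interval in seconds: 960 symbols * 2^BO at the given symbol rate. */
    static double beacon_interval(double symbol_rate, int beacon_order) {
        return 960.0 * (1 << beacon_order) / symbol_rate;
    }

    int main(void) {
        double rates[] = {62500.0, 40000.0, 20000.0};   /* symbols per second */
        const char *bands[] = {"2.4 GHz", "915 MHz", "868 MHz"};
        for (int i = 0; i < 3; i++)
            printf("%s: %.5f s (BO=0) to %.5f s (BO=14)\n",
                   bands[i],
                   beacon_interval(rates[i], 0),
                   beacon_interval(rates[i], 14));
        return 0;   /* reproduces 15.36 ms..251.65824 s, 24 ms..393.216 s,
                       48 ms..786.432 s */
    }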
Beacon intervals depend on data rate; they may range from 15.36 milliseconds to 251.65824 seconds at 250 kbit/s, from 24 milliseconds to 393.216 seconds at 40 kbit/s, and from 48 milliseconds to 786.432 seconds at 20 kbit/s. Long beacon intervals require precise timing, which can be expensive to implement in low-cost products.

In general, the Zigbee protocols minimize the time the radio is on, so as to reduce power use. In beaconing networks, nodes only need to be active while a beacon is being transmitted. In non-beacon-enabled networks, power consumption is decidedly asymmetrical: some devices are always active while others spend most of their time sleeping.

Except for Smart Energy Profile 2.0, Zigbee devices are required to conform to the IEEE 802.15.4-2003 Low-Rate Wireless Personal Area Network (LR-WPAN) standard. The standard specifies the lower protocol layers – the physical layer (PHY), and the media access control portion of the data link layer. The basic channel access mode is carrier-sense multiple access with collision avoidance (CSMA/CA). That is, the nodes communicate in a way somewhat analogous to how humans converse: a node briefly checks to see that other nodes are not talking before it starts. CSMA/CA is not used in three notable exceptions: beacons are sent on a fixed timing schedule; message acknowledgments do not use CSMA; and devices in beacon-enabled networks that have a guaranteed time slot allocated do not use CSMA within that slot.

The main functions of the network layer are to ensure the correct use of the MAC sublayer and to provide a suitable interface for use by the next upper layer, namely the application layer. The network layer deals with network functions such as connecting, disconnecting, and setting up networks. It can establish a network, allocate addresses, and add and remove devices. This layer makes use of star, mesh and tree topologies. The data entity of the transport layer creates and manages protocol data units at the direction of the application layer and performs routing according to the current topology. The control entity handles the configuration of new devices and establishes new networks. It can determine whether a neighboring device belongs to the network, and discovers new neighbors and routers.

The routing protocol used by the network layer is AODV.[30] To find a destination device, AODV broadcasts a route request to all of its neighbors. The neighbors then broadcast the request to their neighbors and onward until the destination is reached. Once the destination is reached, a route reply is sent via unicast transmission following the lowest-cost path back to the source. Once the source receives the reply, it updates its routing table with the destination address of the next hop in the path and the associated path cost.

The application layer is the highest-level layer defined by the specification and is the effective interface of the Zigbee system to its end users. It comprises the majority of components added by the Zigbee specification: both the ZDO (Zigbee device object) and its management procedures, together with application objects defined by the manufacturer, are considered part of this layer. This layer binds tables, sends messages between bound devices, manages group addresses, reassembles packets, and transports data. It is responsible for providing service to Zigbee device profiles.

The ZDO (Zigbee device object), a protocol in the Zigbee protocol stack, is responsible for overall device management, security keys, and policies. It is responsible for defining the role of a device as either coordinator or end device, as mentioned above, but also for the discovery of new devices on the network and the identification of their offered services.
It may then go on to establish secure links with external devices and reply to binding requests accordingly.

The application support sublayer (APS) is the other main standard component of the stack, and as such it offers a well-defined interface and control services. It works as a bridge between the network layer and the other elements of the application layer: it keeps up-to-date binding tables in the form of a database, which can be used to find appropriate devices depending on the services that are needed and the services the different devices offer. As the union between the two specified layers, it also routes messages across the layers of the protocol stack.

An application may consist of communicating objects which cooperate to carry out the desired tasks. Tasks will typically be largely local to each device, such as the control of an individual household appliance. The focus of Zigbee is to distribute work among many different devices which reside within individual Zigbee nodes which in turn form a network. The objects that form the network communicate using the facilities provided by APS, supervised by ZDO interfaces. Within a single device, up to 240 application objects can exist, numbered in the range 1–240. Endpoint 0 is reserved for the ZDO data interface and 255 for broadcast; the 241–254 range is not currently in use but may be in the future. Two services are available for application objects to use (in Zigbee 1.0): the key-value pair (KVP) service and the generic message (MSG) service.

Addressing is also part of the application layer. A network node consists of an IEEE 802.15.4-conformant radio transceiver and one or more device descriptions (collections of attributes that can be polled or set, or that can be monitored through events). The transceiver is the basis for addressing, and devices within a node are specified by an endpoint identifier in the range 1 to 240.

For applications to communicate, the devices that support them must use a common application protocol (types of messages, formats, and so on); these sets of conventions are grouped in profiles. Furthermore, binding is decided upon by matching input and output cluster identifiers, which are unique within the context of a given profile and associated with an incoming or outgoing data flow in a device. Binding tables contain source and destination pairs.

Depending on the available information, device discovery may follow different methods. When the network address is known, the IEEE address can be requested using unicast communication. When it is not, petitions are broadcast. End devices will simply respond with the requested address, while a network coordinator or a router will also send the addresses of all the devices associated with it. This extended discovery protocol permits external devices to find out about devices in a network and the services that they offer, which endpoints can report when queried by the discovering device (which has previously obtained their addresses). Matching services can also be used.

The use of cluster identifiers enforces the binding of complementary entities via the binding tables, which are maintained by Zigbee coordinators, as the table must always be available within a network and coordinators are the most likely to have a permanent power supply. Backups, managed by higher-level layers, may be needed by some applications. Binding requires an established communication link; after it exists, whether to add a new node to the network is decided, according to the application and security policies.
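As a rough illustration of the binding just described, a binding table can be thought of as a map from a (source address, source endpoint, cluster identifier) triple to a set of destination (address, endpoint) pairs. The following Python sketch is purely hypothetical; the names and structure are not taken from the Zigbee specification:

from collections import defaultdict

class BindingTable:
    """Illustrative binding table: (src addr, src endpoint, cluster) -> destinations."""

    def __init__(self):
        self._entries = defaultdict(set)

    def bind(self, src, src_ep, cluster, dst, dst_ep):
        # Record that traffic for this cluster on (src, src_ep) goes to (dst, dst_ep).
        self._entries[(src, src_ep, cluster)].add((dst, dst_ep))

    def destinations(self, src, src_ep, cluster):
        return self._entries.get((src, src_ep, cluster), set())

table = BindingTable()
# 0x0006 is the On/Off cluster identifier; the addresses are made up.
table.bind(src=0x1234, src_ep=1, cluster=0x0006, dst=0x5678, dst_ep=1)
print(table.destinations(0x1234, 1, 0x0006))   # {(22136, 1)}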
Communication can happen right after the association. Direct addressing uses both the radio address and the endpoint identifier, whereas indirect addressing uses every relevant field (address, endpoint, cluster, and attribute) and requires that these be sent to the network coordinator, which maintains associations and translates requests for communication. Indirect addressing is particularly useful for keeping some devices very simple and minimizing their need for storage. Besides these two methods, broadcast to all endpoints in a device is available, and group addressing is used to communicate with groups of endpoints belonging to a specified set of devices.

As one of its defining features, Zigbee provides facilities for carrying out secure communications, protecting the establishment and transport of cryptographic keys and encrypting data. It builds on the basic security framework defined in IEEE 802.15.4.

The basic mechanism for ensuring confidentiality is the adequate protection of all keying material. Keys are the cornerstone of the security architecture; as such, their protection is of paramount importance, and keys are never supposed to be transported over an insecure channel. A momentary exception to this rule occurs during the initial phase of adding a previously unconfigured device to the network. Trust must be assumed in the initial installation of the keys, as well as in the processing of security information. The Zigbee network model must take particular care with security considerations, as ad hoc networks may be physically accessible to external devices and the state of the working environment cannot be predicted.

Within the protocol stack, different network layers are not cryptographically separated, so access policies are needed and correct design is assumed. The open trust model within a device allows for key sharing, which notably decreases potential cost. Nevertheless, the layer which creates a frame is responsible for its security. As malicious devices may exist, every network layer payload must be ciphered, so that unauthorized traffic can be immediately cut off. The exception, again, is the transmission of the network key, which confers a unified security layer on the network, to a new connecting device.

The Zigbee security architecture is based on CCM*, which adds encryption-only and integrity-only features to CCM mode.[31] Zigbee uses 128-bit keys to implement its security mechanisms. A key can be associated either with a network, being usable by both Zigbee layers and the MAC sublayer, or with a link, acquired through pre-installation, agreement, or transport. Establishment of link keys is based on a master key which controls link key correspondence. Ultimately, at least the initial master key must be obtained through a secure medium (transport or pre-installation), as the security of the whole network depends on it. Link and master keys are only visible to the application layer. Different services use different one-way variations of the link key in order to avoid leaks and security risks.

Key distribution is one of the most important security functions of the network. A secure network will designate one special device, the trust center, which other devices trust for the distribution of security keys. Ideally, devices will have the trust center address and initial master key preloaded; if a momentary vulnerability is allowed, the key will be sent as described above.
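Zigbee's CCM* is a variant of standard AES-CCM authenticated encryption (adding encryption-only and integrity-only configurations). The building block itself can be sketched with the third-party Python "cryptography" package; the nonce construction, header contents, and key handling below are placeholders for illustration, not Zigbee's actual frame format:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # a 128-bit key, as Zigbee uses
aead = AESCCM(key, tag_length=8)            # 8-byte authentication tag (one of CCM's allowed lengths)
nonce = os.urandom(13)                      # CCM nonces are 7-13 bytes; Zigbee's CCM* nonce is 13
header = b"frame header"                    # authenticated but not encrypted
payload = b"secret payload"                 # authenticated and encrypted

ciphertext = aead.encrypt(nonce, payload, header)
assert aead.decrypt(nonce, ciphertext, header) == payload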
Typical applications without special security needs will use a network key provided by the trust center (through the initially insecure channel) to communicate. Thus, the trust center maintains the network key and provides point-to-point security. Devices will only accept communications secured with a key supplied by the trust center, except for the initial master key. The security architecture is distributed among the network layers as follows: the MAC sublayer handles the security of its own frames using keys provided by the upper layers; the network layer manages routing and secures the frames it transmits; and the application layer offers key-establishment and key-transport services to both the ZDO and applications.

According to the German computer e-magazine Heise Online, Zigbee Home Automation 1.2 uses fallback keys for encryption negotiation which are known and cannot be changed, making the encryption highly vulnerable.[32][33] The Zigbee 3.0 standard features improved security and mitigates this weakness by giving device manufacturers the option of using a custom installation key that is shipped together with the device, preventing the network traffic from ever using the fallback key at all. This ensures that all network traffic is securely encrypted even while pairing the device. In addition, all Zigbee devices must randomize their network key, no matter which pairing method they use, improving security for older devices as well. The Zigbee coordinator within the Zigbee network can be set to deny access to devices that do not employ this key randomization, further increasing security. The Zigbee 3.0 protocol also features countermeasures against removing already-paired devices from the network with the intention of listening in on the key exchange when they re-pair.

Network simulators such as ns-2, OMNeT++, OPNET, and NetSim can be used to simulate IEEE 802.15.4 Zigbee networks. These simulators come with open-source C or C++ libraries that users may modify, allowing them to determine the validity of new algorithms before hardware implementation.
https://en.wikipedia.org/wiki/Zigbee
In computing, floating-point arithmetic (FP) is arithmetic on subsets of real numbers formed by a significand (a signed sequence of a fixed number of digits in some base) multiplied by an integer power of that base. Numbers of this form are called floating-point numbers.[1]: 3 [2]: 10 For example, the number 2469/200 is a floating-point number in base ten with five digits:

2469/200 = 12.345 = 12345 × 10⁻³,

where 12345 is the significand, 10 the base, and −3 the exponent. However, 7716/625 = 12.3456 is not a floating-point number in base ten with five digits; it needs six digits. The nearest floating-point number with only five digits is 12.346. And 1/3 = 0.3333… is not a floating-point number in base ten with any finite number of digits. In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common.

Floating-point arithmetic operations, such as addition and division, approximate the corresponding real-number arithmetic operations by rounding any result that is not a floating-point number itself to a nearby floating-point number.[1]: 22 [2]: 10 For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345.

The term floating point refers to the fact that the number's radix point can "float" anywhere to the left, right, or between the significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation.

A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude, such as the number of meters between galaxies or between protons in an atom. For this reason, floating-point arithmetic is often used to allow very small and very large real numbers to be processed quickly. The consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.[3]

Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by the IEEE.

The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations. A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.

A number representation specifies some way of encoding a number, usually as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is not specified, then the string implicitly represents an integer, and the unstated radix point would be off the right-hand end of the string, next to the least significant digit. In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345.
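The five-digit base-ten examples above can be reproduced with Python's standard decimal module, which implements decimal floating point with a configurable precision:

from decimal import Decimal, getcontext

getcontext().prec = 5                           # five significant digits
print(Decimal("12.345") + Decimal("1.0001"))    # 13.345  (13.3451, rounded)
print(Decimal(2469) / Decimal(200))             # 12.345  (exactly representable)
print(Decimal(7716) / Decimal(625))             # 12.346  (12.3456 needs six digits)
print(Decimal(1) / Decimal(3))                  # 0.33333 (1/3 never terminates)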
In scientific notation, the given number is scaled by a power of 10 so that it lies within a specific range, typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047 × 10⁵ seconds.

Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base (the significand) and a signed integer exponent, which modifies the magnitude of the number.

To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent: to the right if the exponent is positive or to the left if the exponent is negative.

Using base 10 (the familiar decimal notation) as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10⁵ to give 1.528535047 × 10⁵, or 152,853.5047. In storing such a number, the base (10) need not be stored, since it is the same for the entire range of supported numbers and can thus be inferred. Symbolically, this final value is

s / b^(p−1) × b^e,

where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base (in our example, ten), and e is the exponent.

Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point) and other less common varieties, such as base sixteen (hexadecimal floating point[4][5][nb 3]), base eight (octal floating point[1][5][6][4][nb 4]), base four (quaternary floating point[7][5][nb 5]), base three (balanced ternary floating point[1]), and even base 256[5][nb 6] and base 65,536.[8][nb 7]

A floating-point number is a rational number, because it can be represented as one integer divided by another; for example 1.45 × 10³ is (145/100) × 1000, or 145,000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2 × 10⁻¹). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but in base 3 it is trivial (0.1, or 1 × 3⁻¹). The occasions on which infinite expansions occur depend on the base and its prime factors.

The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation p = 24, and so the significand is a string of 24 bits. For instance, the number π's first 33 bits are

11001001 00001111 1101101[0] 10100010 0,

where the bracketed bit is discussed below. In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit).
The 24-bit significand will stop at position 23, shown as the bracketed bit above. The next bit, at position 24, is called the round bit or rounding bit. It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here). This bit, which is 1 in this example, is added to the integer formed by the leftmost 24 bits, yielding

11001001 00001111 11011011.

When this is stored in memory using the IEEE 754 encoding, it becomes the significand s. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left to right as follows:

(Σ_{n=0}^{p−1} bitₙ × 2⁻ⁿ) × 2^e
  = (1 × 2⁻⁰ + 1 × 2⁻¹ + 0 × 2⁻² + 0 × 2⁻³ + 1 × 2⁻⁴ + ⋯ + 1 × 2⁻²³) × 2¹
  ≈ 1.57079637 × 2
  ≈ 3.1415927,

where p is the precision (24 in this example), n is the position of the bit of the significand from the left (starting at 0 and finishing at 23 here) and e is the exponent (1 in this example).

It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is called normalization. For binary formats (which use only the digits 0 and 1), this non-zero digit is necessarily 1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the leading bit convention, the implicit bit convention, the hidden bit convention,[1] or the assumed bit convention.

The floating-point representation is by far the most common way of representing an approximation to real numbers in computers. However, there are alternatives, including fixed-point representation, logarithmic number systems, rational arithmetic, and arbitrary-precision arithmetic.

In 1914, the Spanish engineer Leonardo Torres Quevedo published Essays on Automatics,[9] in which he designed a special-purpose electromechanical calculator based on Charles Babbage's analytical engine and described a way to store floating-point numbers in a consistent manner. He stated that numbers would be stored in exponential format as n × 10^m, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "n will always be the same number of digits (e.g. six), the first digit of n will be of order of tenths, the second of hundredths, etc, and one will write each quantity in the form: n; m."
The format he proposed shows the need for a fixed-sized significand, as is presently used for floating-point data: fixing the location of the decimal point in the significand so that each representation is unique, and specifying how to format such numbers so that they could be entered through a typewriter, as was the case with his Electromechanical Arithmometer of 1920.[10][11][12]

In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer;[13] it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit.[14] The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as 1/∞ = 0, and it stops on undefined operations, such as 0 × ∞. Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes ±∞ and NaN representations, anticipating features of the IEEE standard by four decades.[15] In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable.[15]

The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the Model V, which implemented decimal floating-point numbers.[16]

The Pilot ACE had binary floating-point arithmetic, and it became operational in 1950 at the National Physical Laboratory, UK. Thirty-three were later sold commercially as the English Electric DEUCE. The arithmetic was actually implemented in software, but with a one-megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers.

The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see also Extensions for Scientific Computation (XSC)). It was not until the launch of the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature.

The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations: single precision (36 bits) and double precision (72 bits). The IBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introduced hexadecimal floating-point representations in its System/360 mainframes; these same representations are still available for use in modern z/Architecture systems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic.

Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in word sizes, representations, rounding behavior, and the general accuracy of operations.
Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well. In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone.[17] Among the x86 innovations was an 80-bit extended-precision format intended for intermediate results.

A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas the components depend linearly on their range, the floating-point range depends linearly on the significand range and exponentially on the range of the exponent component, which gives the number an outstandingly wider range.

On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2¹⁰ = 1024, the complete range of the positive normal floating-point numbers in this format is from 2⁻¹⁰²² ≈ 2 × 10⁻³⁰⁸ to approximately 2¹⁰²⁴ ≈ 2 × 10³⁰⁸.

The number of normal floating-point numbers in a system (B, P, L, U), where B is the base of the system, P the precision of the significand (in base B), and L and U the smallest and largest exponents, is 2(B − 1)(B^(P−1))(U − L + 1). There is a smallest positive normal floating-point number (the underflow level, UFL), which has a 1 as the leading digit and 0 for the remaining digits of the significand, together with the smallest possible value for the exponent. There is a largest floating-point number, which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent. In addition, there are representable values strictly between −UFL and UFL, namely positive and negative zeros as well as subnormal numbers.

The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008. IBM mainframes support IBM's own hexadecimal floating-point format and IEEE 754-2008 decimal floating point in addition to the IEEE 754 binary format. The Cray T90 series had an IEEE version, but the SV1 still uses Cray floating-point format.[citation needed]

The standard provides for many closely related formats, differing in only a few details. Five of these formats are called basic formats, and others are termed extended precision formats and extendable precision formats. Three formats are especially widely used in computer hardware and languages: single precision (binary32), double precision (binary64), and, on x86 hardware, 80-bit double extended precision.[citation needed] Increasing the precision of the floating-point representation generally reduces the amount of accumulated round-off error caused by intermediate calculations.[24] Other IEEE formats include half precision (binary16), quadruple precision (binary128), and the decimal formats decimal32, decimal64, and decimal128.

Any integer with absolute value less than 2²⁴ can be exactly represented in the single-precision format, and any integer with absolute value less than 2⁵³ can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers.
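The exact-integer property and the normal range quoted above can be checked directly with Python floats, which are IEEE 754 doubles on all common platforms:

import sys

print(float(2**53) == 2**53)        # True: 2**53 is exactly representable
print(float(2**53 + 1) == 2**53)    # True: 2**53 + 1 rounds back to 2**53
print(sys.float_info.max)           # 1.7976931348623157e+308 (just under 2**1024)
print(sys.float_info.min)           # 2.2250738585072014e-308 (2**-1022, smallest normal)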
The standard specifies some special values and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs).

Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floating-point numbers are strictly smaller than +∞ and strictly greater than −∞, and they are ordered in the same way as their values (in the set of real numbers).

Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and a field for the significand, from left to right. For the IEEE 754 binary formats (basic and extended) that have extant hardware implementations, they are apportioned as follows: half precision uses 1 sign bit, 5 exponent bits, and 10 stored significand bits; single precision 1, 8, and 23; double precision 1, 11, and 52; x86 extended precision 1, 15, and 64 (with an explicit rather than hidden leading bit); and quadruple precision 1, 15, and 112.

While the exponent can be positive or negative, in binary formats it is stored as an unsigned number with a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, and [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs.

In the IEEE binary interchange formats, the leading bit of a normalized significand is not actually stored in the computer datum, since it is always 1. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, quad has 113, and octuple has 237.

For example, it was shown above that π, rounded to 24 bits of precision, has sign = 0, e = 1, and s = 110010010000111111011011 (including the hidden bit). The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format as 0 10000000 10010010000111111011011 (excluding the hidden bit), or 40490FDB in hexadecimal. In the 32-bit layout, the sign occupies bit 31, the exponent field bits 30–23, and the stored significand bits 22–0; the 64-bit ("double") layout is similar.

In addition to the widely used IEEE 754 standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas.

By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base 10, or a terminating binary expansion in base 2). Irrational numbers, such as π or √2, or non-terminating rational numbers must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented with only eight decimal digits of precision (it would be rounded to one of the two straddling representable values, 12345678 × 10¹ or 12345679 × 10¹); the same applies to non-terminating digits (0.5555… must be rounded to either 0.55555555 or 0.55555556).

When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format, the conversion is exact. If there is not an exact representation, the conversion requires a choice of which floating-point number to use to represent the original value.
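The bit-level layout described above can be inspected directly in Python using the standard struct module, here for the single-precision rounding of π (bit pattern 40490FDB):

import math
import struct

# Re-encode pi as a big-endian 32-bit float, then read the raw bits back.
bits, = struct.unpack(">I", struct.pack(">f", math.pi))
print(hex(bits))               # 0x40490fdb
print(bits >> 31)              # 0   (sign bit)
print((bits >> 23) & 0xFF)     # 128 (biased exponent: bias 127 + exponent 1)
print(bin(bits & 0x7FFFFF))    # 0b10010010000111111011011 (23 stored significand bits)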
The representation chosen will have a different value from the original, and the value thus adjusted is called the rounded value.

Whether or not a rational number has a terminating expansion depends on the base. For example, in base 10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base 2, only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) terminate. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear short and exact when written in decimal format may need to be approximated when converted to binary floating point. For example, the decimal number 0.1 is not representable in binary floating point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly:

e = −4; s = 1.100110011001100110011001100…,

where, as previously, s is the significand and e is the exponent. When rounded to 24 bits, this becomes

e = −4; s = 1.10011001100110011001101,

which is actually 0.100000001490116119384765625 in decimal.

As a further example, the real number π, represented in binary as an infinite sequence of bits, is

11.0010010000111111011010101000100010000101101000110000100011010011…,

but is

11.0010010000111111011011

when approximated by rounding to a precision of 24 bits. In binary single-precision floating point, this is represented as s = 1.10010010000111111011011 with e = 1. This has a decimal value of

3.1415927410125732421875,

whereas a more accurate approximation of the true value of π is

3.14159265358979323846264338327950288…

The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits.

The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45A70C22₁₆ and 1.45A70C24₁₆, the ULP is 2 × 16⁻⁸, or 2⁻³¹. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value greater than or equal to 1 but less than 2, a ULP is exactly 2⁻²³ or about 10⁻⁷ in single precision, and exactly 2⁻⁵² or about 10⁻¹⁶ in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.

Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called banker's rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result.[nb 8] In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. This means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs.
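The stored value of 0.1 and the ULP sizes discussed above can be inspected with Python doubles (the value printed for 0.1 is the binary64 one, which is why it has more digits than the binary32 value quoted in the text):

import math
from decimal import Decimal

# Decimal(float) shows the exact value the double actually stores.
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625

# math.ulp (Python 3.9+) gives the spacing of doubles at a given value.
print(math.ulp(1.0))                # 2.220446049250313e-16
print(math.ulp(1.0) == 2.0**-52)    # True: ULP just above 1.0 is 2**-52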
("Library" functions such as cosine and log are not mandated.) Alternative rounding options are also available. IEEE 754 specifies the following rounding modes: Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error are multi-precision floating-point, andinterval arithmetic. The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity then it is likely numerically unstable and affected by round-off error.[34] Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Some of the improvements since then include: Many modern language runtimes use Grisu3 with a Dragon4 fallback.[41] The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c).[35]Further work has likewise progressed in the direction of faster parsing.[42] For ease of presentation and understanding, decimalradixwith 7 digit precision will be used in the examples, as in the IEEE 754decimal32format. The fundamental principles are the same in anyradixor precision, except that normalization is optional (it does not affect the numerical value of the result). Here,sdenotes the significand andedenotes the exponent. A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number (with the smaller exponent) is shifted right by three digits, and one then proceeds with the usual addition method: In detail: This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is The lowest three digits of the second operand (654) are essentially lost. This isround-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them: In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction using careful implementation techniques only aguardbit, aroundingbit and one extrastickybit need to be carried beyond the precision of the operands.[43][44]: 218–220 Another problem of loss of significance occurs whenapproximationsto two nearly equal numbers are subtracted. In the following examplee= 5;s= 1.234571 ande= 5;s= 1.234567 are approximations to the rationals 123457.1467 and 123456.659. The floating-point difference is computed exactly because the numbers are close—theSterbenz lemmaguarantees this, even in case of underflow whengradual underflowis supported. Despite this, the difference of the original numbers ise= −1;s= 4.877000, which differs more than 20% from the differencee= −1;s= 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost.[43][45]Thiscancellationillustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic innumerical analysis; see alsoAccuracy problems. To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized. 
Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent and dividing the dividend's significand by the divisor's significand.

There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession.[43] In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and Division algorithm).[nb 9]

Literals for floating-point numbers depend on the language. They typically use e or E to denote scientific notation. The C programming language and the IEEE 754 standard also define a hexadecimal literal syntax with a base-2 exponent instead of 10. In languages like C, when the decimal exponent is omitted, a decimal point is needed to differentiate them from integers. Other languages do not have an integer type (such as JavaScript), or allow overloading of numeric types (such as Haskell). In these cases, digit strings such as 123 may also be floating-point literals. Examples of floating-point literals are 99.9, -5000.12, 6.02e23, -3e-45, and 0x1.fffffep+127 (C99 hexadecimal notation).

Floating-point computation in a computer can run into three kinds of problems: an operation can be mathematically undefined, such as ∞/∞ or division by zero; an operation can be legal in principle but not supported by the specific format, for example when the result overflows or underflows; and an operation can be legal in principle yet yield a result that is impossible to represent exactly, requiring rounding.

Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind of trap that the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were not portable. (The term "exception" as used in IEEE 754 is a general term meaning an exceptional condition, which is not necessarily an error, and is a different usage from that typically defined in programming languages such as C++ or Java, in which an "exception" is an alternative flow of control, closer to what is termed a "trap" in IEEE 754 terminology.)

Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine; without them, exceptional conditions that could not otherwise be ignored would require explicit testing immediately after every floating-point operation. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored).

The original IEEE 754 standard, however, failed to recommend operations for handling such sets of arithmetic exception flag bits. So, while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time, some programming language standards (e.g., C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard specifies a few operations for accessing and handling the arithmetic flag bits.
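The "safely ignorable infinity" default just described can be sketched in Python with NumPy, which follows the IEEE defaults for array operations (plain Python raises ZeroDivisionError for 1/0.0 instead); np.errstate suppresses the warning that plays the role of the divide-by-zero status flag here:

import numpy as np

r = np.array([0.0, 100.0, 200.0])     # three parallel resistors; R1 has short-circuited to 0
with np.errstate(divide="ignore"):
    r_tot = 1.0 / np.sum(1.0 / r)     # 1/R1 -> inf, so the sum of conductances is inf
print(r_tot)                          # 0.0: the downstream result is finite and correct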
The programming model is based on a single thread of execution, and use of the flags by multiple threads has to be handled by means outside of the standard (e.g., C11 specifies that the flags have thread-local storage).

IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"): inexact, underflow, overflow, divide-by-zero, and invalid. The default return value for each of the exceptions is designed to give the correct result in the majority of cases, such that the exceptions can be ignored in the majority of code. inexact returns a correctly rounded result, and underflow returns a value less than or equal to the smallest positive normal number in magnitude, so it can almost always be ignored.[46] divide-by-zero returns infinity exactly, which will typically then divide a finite number and so give zero, or else will give an invalid exception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel (see fig. 1) is given by R_tot = 1/(1/R₁ + 1/R₂ + ⋯ + 1/Rₙ). If a short circuit develops, with R₁ set to 0, then 1/R₁ will return +∞, which will give a final R_tot of 0, as expected[47] (see the continued-fraction example of IEEE 754 design rationale for another example).

Overflow and invalid exceptions can typically not be ignored, but do not necessarily represent errors: for example, a root-finding routine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and an invalid exception flag to be ignored until finding a useful start point.[46]

The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.

For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary as e = −4; s = 110011001100110011001101, which is

0.100000001490116119384765625 exactly.

Squaring this number gives

0.010000000298023226097399174250313080847263336181640625 exactly.

Squaring it with rounding to the 24-bit precision gives

0.010000000707805156707763671875 exactly.

But the representable number closest to 0.01 is

0.009999999776482582092285156250 exactly.

Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:

/* Enough digits to be sure we get the correct approximation. */
double pi = 3.1415926535897932384626433832795;
double z = tan(pi/2.0);

will give a result of 16331239353195370.0. In single precision (using the tanf function), the result will be −22877332.0.

By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225 × 10⁻¹⁵ in double precision, or −0.8742 × 10⁻⁷ in single precision.[nb 10]

While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c), as the following example in 7-digit significand decimal arithmetic shows.
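A minimal demonstration with Python's decimal module at seven-digit precision; the operand values are illustrative:

from decimal import Decimal, getcontext

getcontext().prec = 7
a, b, c = Decimal("1234.567"), Decimal("45.67844"), Decimal("0.0004")
print((a + b) + c)    # 1280.245 (a + b rounds to 1280.245; adding c changes nothing)
print(a + (b + c))    # 1280.246 (b + c = 45.67884 is exact, so the final rounding differs)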
They are also not necessarily distributive; that is, (a + b) × c may not be the same as a × c + b × c.

In addition to loss of significance, the inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, further phenomena may occur. For instance, the divided difference

Q(h) = (f(a + h) − f(a)) / h,

used to approximate a derivative, can behave erratically as h becomes very small: the subtraction in the numerator cancels catastrophically, and the resulting rounding error is then magnified by the division by a tiny h.

Machine precision is a quantity that characterizes the accuracy of a floating-point system, and is used in backward error analysis of floating-point algorithms. It is also known as unit roundoff or machine epsilon. Usually denoted E_mach, its value depends on the particular rounding being used. With rounding to zero,

E_mach = B^(1−P),

whereas with rounding to nearest,

E_mach = ½ B^(1−P),

where B is the base of the system and P is the precision of the significand (in base B). This is important since it bounds the relative error in representing any non-zero real number x within the normalized range of a floating-point system:

|(fl(x) − x) / x| ≤ E_mach.

Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable.[52] The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity of a given numerical procedure to rounding errors; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input, and is independent of the implementation used to solve the problem.[53]

As a trivial example, consider a simple expression giving the inner product of (length-two) vectors x and y. Writing fl() for correctly rounded floating-point arithmetic, and letting each δₙ satisfy |δₙ| ≤ E_mach by the bound above,

fl(x · y) = fl(fl(x₁ · y₁) + fl(x₂ · y₂))
          = fl((x₁ · y₁)(1 + δ₁) + (x₂ · y₂)(1 + δ₂))
          = ((x₁ · y₁)(1 + δ₁) + (x₂ · y₂)(1 + δ₂))(1 + δ₃)
          = (x₁ · y₁)(1 + δ₁)(1 + δ₃) + (x₂ · y₂)(1 + δ₂)(1 + δ₃),

and so

fl(x · y) = x̂ · ŷ,

where

x̂₁ = x₁(1 + δ₁);  x̂₂ = x₂(1 + δ₂);  ŷ₁ = y₁(1 + δ₃);  ŷ₂ = y₂(1 + δ₃).

The computed result is thus the exact inner product of
two slightly perturbed (on the order of E_mach) input vectors, and so the computation is backward stable. For more realistic examples in numerical linear algebra, see Higham 2002[54] and other references below.

Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half a ULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data are ill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithm numerically unstable for that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known as numerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires,[55] which can remove, or reduce by orders of magnitude,[56] such risk: IEEE 754 quadruple precision and extended precision are designed for this purpose when computing at double precision.[57][nb 11]

For example, consider a direct implementation of the function A(x) = (x − 1) / (exp(x − 1) − 1), which is well-conditioned at 1.0:[nb 12] compute Z = exp(x − 1) in line [1], then return (x − 1) / (Z − 1) in line [2]. This can be shown to be numerically unstable, losing up to half the significant digits carried by the arithmetic when computed near 1.0.[58]

If, however, intermediate computations are all performed in extended precision (e.g., by setting line [1] to C99 long double), then up to full precision in the final double result can be maintained.[nb 13] Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change to line [2] is made, returning log(Z) / (Z − 1) instead, then the algorithm becomes numerically stable and can compute to full double precision.

To maintain the properties of such carefully constructed numerically stable programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.

A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article; the reader is referred to [54], [59], and the other references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease, by orders of magnitude,[59] the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e.
compute in double precision for a final single-precision result, or in double extended or quad precision for up-to-double-precision results[60]); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures:[61] notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow.

As decimal fractions can often not be exactly represented in binary floating point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact.[56][59] An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation.[62] The "decimal" data type of the C# and Python programming languages, and the decimal formats of the IEEE 754-2008 standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal.

Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that (x + y)(x − y) = x² − y² and that sin²θ + cos²θ = 1; however, these facts cannot be relied on when the quantities involved are the result of floating-point computation.

The use of the equality test (if (x==y) ...) requires care when dealing with floating-point numbers. Even simple expressions like 0.6/0.2-3==0 will, on most computers, fail to be true[63] (in IEEE 754 double precision, for example, 0.6/0.2 - 3 is approximately equal to −4.44089209850063 × 10⁻¹⁶). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon.[54] Values derived from the primary data representation, and their comparisons, should be performed in a wider, extended precision to minimize the risk of such inconsistencies due to round-off errors.[59] It is often better to organize the code in such a way that such tests are unnecessary. For example, in computational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive-precision or exact arithmetic methods.[64]

Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as iterative refinement, if they are to work well.[65]

Summation of a vector of floating-point values is a basic algorithm in scientific computing, and so an awareness of when loss of significance can occur is essential.
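The equality-test pitfall quoted above, and a tolerance-based alternative using the standard library's math.isclose, can be shown directly in Python:

import math

print(0.6 / 0.2 - 3 == 0)             # False
print(0.6 / 0.2 - 3)                  # -4.440892098500626e-16
print(math.isclose(0.6 / 0.2, 3.0))   # True (default relative tolerance 1e-09)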
For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like

  3253.671
+    3.141276
  -----------
  3256.812

The low three digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors.[54]

Round-off error can affect the convergence and accuracy of iterative numerical procedures. As an example, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon, starting from t₀ = 1/√3 (so that 6 × 2ⁱ × tᵢ approximates π), are:[citation needed]

first form:  tᵢ₊₁ = (√(tᵢ² + 1) − 1) / tᵢ
second form: tᵢ₊₁ = tᵢ / (√(tᵢ² + 1) + 1)

A computation using IEEE "double" (a significand with 53 bits of precision) arithmetic is sketched below. While the two forms of the recurrence formula are clearly mathematically equivalent,[nb 14] the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss of significant digits. As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.

The aforementioned lack of associativity of floating-point operations in general means that compilers cannot reorder arithmetic expressions as effectively as they can with integer and fixed-point arithmetic, presenting a roadblock in optimizations such as common subexpression elimination and auto-vectorization.[66] The "fast math" option on many compilers (ICC, GCC, Clang, MSVC, ...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to turn on only reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.[67]

In some compilers (GCC and Clang), turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior not only of the generated code, but also of any program using such code as a library.[68]

In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses.[69] Intel Fortran Compiler is a notable outlier.[70]

A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in Icing, a verified compiler.[71]
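A sketch of that computation in Python (IEEE double precision), comparing the two recurrence forms; the iteration counts printed are arbitrary choices:

import math

t_bad = t_good = 1 / math.sqrt(3)   # tan(30 degrees): circumscribed hexagon
for i in range(1, 21):
    t_bad = (math.sqrt(t_bad * t_bad + 1) - 1) / t_bad       # subtracts nearly equal numbers
    t_good = t_good / (math.sqrt(t_good * t_good + 1) + 1)   # algebraically identical, no cancellation
    if i in (5, 10, 15, 20):
        print(i, 6 * 2**i * t_bad, 6 * 2**i * t_good)

# By i = 20 the first column has lost most of its accuracy to cancellation,
# while the second column agrees with math.pi to about 15 digits.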
https://en.wikipedia.org/wiki/Floating-point_arithmetic
In computational complexity theory, asymptotic computational complexity is the use of asymptotic analysis to estimate the computational complexity of algorithms and computational problems, commonly associated with the use of big O notation.

With respect to computational resources, the asymptotic time complexity and asymptotic space complexity of computational algorithms and programs are most commonly estimated. Other asymptotically estimated behaviors include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.

Since the groundbreaking 1965 paper by Juris Hartmanis and Richard E. Stearns[1] and the 1979 book by Michael Garey and David S. Johnson on NP-completeness,[2] the term "computational complexity" (of algorithms) has commonly referred to asymptotic computational complexity. Further, unless specified otherwise, the term "computational complexity" usually refers to the upper bound for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of big O notation, e.g. O(n³). Other types of (asymptotic) computational complexity estimates are lower bounds ("big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using "big Theta"; e.g., Θ(n log n)).

A further tacit assumption is that worst-case analysis of computational complexity is in question unless stated otherwise. An alternative approach is probabilistic analysis of algorithms. In most practical cases, deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.
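The three notations mentioned above can be stated formally. For functions f and g from the natural numbers to the positive reals, a standard formulation (in LaTeX) is:

f(n) = O(g(n))      \iff \exists c > 0\ \exists n_0\ \forall n \ge n_0 :\ f(n) \le c \cdot g(n)
f(n) = \Omega(g(n)) \iff \exists c > 0\ \exists n_0\ \forall n \ge n_0 :\ f(n) \ge c \cdot g(n)
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))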
https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
Attention span is the amount of time spent concentrating on a task before becoming distracted.[1] Distractibility occurs when attention is uncontrollably diverted to another activity or sensation.[2] Attention training is said to be part of education, particularly in the way students are trained to remain focused on a topic of discussion for extended periods, developing listening and analytical skills in the process.[3]

Measuring humans' estimated attention span depends on what the attention is being used for. The terms "transient attention" and "selective sustained attention" are used to separate short-term and focused attention. Transient attention is a short-term response to a stimulus that temporarily attracts or distracts attention; researchers disagree on the exact length of the human transient attention span. Selective sustained attention, also known as focused attention, is the level of attention that produces consistent results on a task over time. Common estimates of the attention span of healthy teenagers and adults range up to about 5 hours. This is possible because people can choose repeatedly to re-focus on the same thing.[4] This ability to renew attention permits people to 'pay attention' to things that last for more than a few minutes, such as lengthy films. Older children are capable of longer periods of attention than younger children.[5]

For time-on-task measurements, the type of activity used in the test affects the results, as people are generally capable of a longer attention span when they are doing something that they find enjoyable or intrinsically motivating.[4] Attention is also increased if the person is able to perform the task fluently, compared to a person who has difficulty performing the task, or to the same person when they are just learning the task. Fatigue, hunger, noise, and emotional stress reduce the time focused on the task.

A research study that consisted of 10,430 males and females ages 10 to 70 observed sustained attention across the lifespan. The study required participants to use a cognitive testing website where data was gathered for seven months. The data collected from the study indicated that attention span does not follow a single linear trajectory; at age 15, attention-span-related abilities were recorded as diverging. Over the course of the study, the collected evidence additionally showed that, in humans, attention span is at its highest level when a person is in their early 40s, and then gradually declines in old age.[6]

Many different tests of attention span have been used across different populations and at different times. Some tests measure short-term, focused attention abilities (which are typically below normal in people with ADHD), and others provide information about how easily distracted the test-taker is (typically a significant problem in people with ADHD). Tests like DeGangi's Test of Attention in Infants (TAI) and the Wechsler Intelligence Scale for Children-IV (WISC-IV) are commonly used to assess attention-related issues in young children when interviews and observations are inadequate.[7] Older tests, like the Continuous Performance Test and the Porteus Maze Test, have been rejected by some experts.[7] These tests are typically criticized[by whom?] as not actually measuring attention, being inappropriate for some populations, or not providing clinically useful information.
Variability in test scores can be produced by small changes in the testing environment.[7]For example, test-takers will usually remain on task for longer periods of time if the examiner is visibly present in the room than if the examiner is absent. In an early study of the influence of temperament on attention span, the mothers of 232 pairs of twins were interviewed periodically about the similarities and differences in behavior displayed by their twins during infancy and early childhood. The results showed that each of the behavioral variables (temper frequency, temper intensity, irritability, crying, and demanding attention) had a significant inverse relationship with attention span. In other words, the twin with longer attention span was better able to remain performing a particular activity without distraction, and was also the less temperamental twin.[8] One study of 2600 children found that early exposure to television (around age two) is associated with later attention problems such as inattention, impulsiveness, disorganization, and distractibility at age seven.[9][10]Thiscorrelationalstudy does not specify whether viewing television increases attention problems in children, or if children who are naturally prone to inattention are disproportionately attracted to the stimulation of television at young ages, or if there is some other factor, such as parenting skills, associated with this finding. Another study examining the relations between children’s attention span-persistence in preschool and later academic achievements found that children’s age four attention span-persistence significantly predicted math and reading achievement at age 21 after controlling for achievement levels at age seven, adopted status, child vocabulary skills, gender, and maternal education level. For instance, children who enrolled in formal schooling without the ability to pay attention, remember instructions, and demonstrate self-control have more difficulty in elementary school and throughout high school.[11] In another study involving 10,000 children (ages eight to 11), fluctuations in attention span were observed during the school day, with higher levels of attention in the afternoon and lower levels in the morning. The study also found that student awareness and productivity increased after a two-day weekend but substantially decreased after summer break.[12] The rise of short-form videos has been on an exponential rise, with platforms such as TikTok, Instagram, and Facebook Reels having the attention of everyday individuals. These platforms have given new information on the way the public consumes media and the effect it has on attention span. A study was done in 2024 that found that students who consistently watch short-term videos struggle with memory-based academic work. The study was done by researchers collecting data using a survey and a digital attention test to study how social media would affect their habits, the way they use social media, and how their grades are affected by it. The survey asked about the daily usage, the student GPA, and their usual concentration struggles. 
The students, averaging around 3 hours of screen time and a 2.8 GPA, had a significantly shorter attention span, with heavy users having slower reaction times and being more prone to making errors in their academic work. Due to the nature of short-form videos, the students' brains became accustomed to the constant stimulation of the videos and quick content switches.[13] In conclusion, this article shows evidence of the damage done by short-form videos and the correlation between short-form video use and undergraduate students' academic performance.

Platforms that offer such content are designed to keep the consumer engaged, with a very accurate algorithm that tailors to the user's content preferences. Studies of such technology report that the different social media layouts (matrix, masonry, and linear) have varying effects. Matrix layouts affect the consumer's attention span by increasing attention but reducing focus duration. In contrast, linear layouts enhance sustained attention but limit scope, and masonry layouts offer a middle ground between the other two.[14] These layouts influence visual attention quality (VAQ), which measures how these designs maintain user focus and engagement compared to fragmented viewing. These experiments illustrate how certain media might affect users' attention span depending on the type of layout users are exposed to. Another study was done through a validated questionnaire. While it did not show a major effect on attention span, it provides a useful tool for extracting information in future research, as it proved useful when questioning patients.[15]

Although research on social media has shown a decrease in attention span, not all forms of media have the same impact on the public. For example, video games do not stray too far from short-form videos. Studies were made to test different video game genres and their impact on the people who play them. Participants were divided into four groups: action games, sports simulator games, RPG games, and those who do not play games. The research found that the groups did not differ much in attention span, but the people who played games and those who did not showed some variation.[16] The studies found that more hours spent playing action games correlated with better visual attention, as players had better coordination when playing such genres, with those playing sports simulators showing similar results.
https://en.wikipedia.org/wiki/Attention_span
Instatisticsandsignal processing, aminimum mean square error(MMSE) estimator is an estimation method which minimizes themean square error(MSE), which is a common measure of estimator quality, of the fitted values of adependent variable. In theBayesiansetting, the term MMSE more specifically refers to estimation with quadraticloss function. In such case, the MMSE estimator is given by the posterior mean of the parameter to be estimated. Since the posterior mean is cumbersome to calculate, the form of the MMSE estimator is usually constrained to be within a certain class of functions. Linear MMSE estimators are a popular choice since they are easy to use, easy to calculate, and very versatile. It has given rise to many popular estimators such as theWiener–Kolmogorov filterandKalman filter. The term MMSE more specifically refers to estimation in aBayesiansetting with quadratic cost function. The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. For instance, we may have prior information about the range that the parameter can assume; or we may have an old estimate of the parameter that we want to modify when a new observation is made available; or the statistics of an actual random signal such as speech. This is in contrast to the non-Bayesian approach likeminimum-variance unbiased estimator(MVUE) where absolutely nothing is assumed to be known about the parameter in advance and which does not account for such situations. In the Bayesian approach, such prior information is captured by the prior probability density function of the parameters; and based directly onBayes' theorem, it allows us to make better posterior estimates as more observations become available. Thus unlike non-Bayesian approach where parameters of interest are assumed to be deterministic, but unknown constants, the Bayesian estimator seeks to estimate a parameter that is itself arandom variable. Furthermore, Bayesian estimation can also deal with situations where the sequence of observations are not necessarily independent. Thus Bayesian estimation provides yet another alternative to the MVUE. This is useful when the MVUE does not exist or cannot be found. Letx{\displaystyle x}be an×1{\displaystyle n\times 1}hidden random vector variable, and lety{\displaystyle y}be am×1{\displaystyle m\times 1}known random vector variable (the measurement or observation), both of them not necessarily of the same dimension. Anestimatorx^(y){\displaystyle {\hat {x}}(y)}ofx{\displaystyle x}is any function of the measurementy{\displaystyle y}. The estimation error vector is given bye=x^−x{\displaystyle e={\hat {x}}-x}and itsmean squared error(MSE) is given by thetraceof errorcovariance matrix where theexpectationE{\displaystyle \operatorname {E} }is taken overx{\displaystyle x}conditioned ony{\displaystyle y}. Whenx{\displaystyle x}is a scalar variable, the MSE expression simplifies toE⁡{(x^−x)2}{\displaystyle \operatorname {E} \left\{({\hat {x}}-x)^{2}\right\}}. Note that MSE can equivalently be defined in other ways, since The MMSE estimator is then defined as the estimator achieving minimal MSE: In many cases, it is not possible to determine the analytical expression of the MMSE estimator. Two basic numerical approaches to obtain the MMSE estimate depends on either finding the conditional expectationE⁡{x∣y}{\displaystyle \operatorname {E} \{x\mid y\}}or finding the minima of MSE. 
Direct numerical evaluation of the conditional expectation is computationally expensive since it often requires multidimensional integration usually done viaMonte Carlo methods. Another computational approach is to directly seek the minima of the MSE using techniques such as thestochastic gradient descent methods; but this method still requires the evaluation of expectation. While these numerical methods have been fruitful, a closed form expression for the MMSE estimator is nevertheless possible if we are willing to make some compromises. One possibility is to abandon the full optimality requirements and seek a technique minimizing the MSE within a particular class of estimators, such as the class of linear estimators. Thus, we postulate that the conditional expectation ofx{\displaystyle x}giveny{\displaystyle y}is a simple linear function ofy{\displaystyle y},E⁡{x∣y}=Wy+b{\displaystyle \operatorname {E} \{x\mid y\}=Wy+b}, where the measurementy{\displaystyle y}is a random vector,W{\displaystyle W}is a matrix andb{\displaystyle b}is a vector. This can be seen as the first order Taylor approximation ofE⁡{x∣y}{\displaystyle \operatorname {E} \{x\mid y\}}. The linear MMSE estimator is the estimator achieving minimum MSE among all estimators of such form. That is, it solves the following optimization problem: One advantage of such linear MMSE estimator is that it is not necessary to explicitly calculate theposterior probabilitydensity function ofx{\displaystyle x}. Such linear estimator only depends on the first two moments ofx{\displaystyle x}andy{\displaystyle y}. So although it may be convenient to assume thatx{\displaystyle x}andy{\displaystyle y}are jointly Gaussian, it is not necessary to make this assumption, so long as the assumed distribution has well defined first and second moments. The form of the linear estimator does not depend on the type of the assumed underlying distribution. The expression for optimalb{\displaystyle b}andW{\displaystyle W}is given by: wherex¯=E⁡{x}{\displaystyle {\bar {x}}=\operatorname {E} \{x\}},y¯=E⁡{y},{\displaystyle {\bar {y}}=\operatorname {E} \{y\},}theCXY{\displaystyle C_{XY}}is cross-covariance matrix betweenx{\displaystyle x}andy{\displaystyle y}, theCY{\displaystyle C_{Y}}is auto-covariance matrix ofy{\displaystyle y}. Thus, the expression for linear MMSE estimator, its mean, and its auto-covariance is given by where theCYX{\displaystyle C_{YX}}is cross-covariance matrix betweeny{\displaystyle y}andx{\displaystyle x}. Lastly, the error covariance and minimum mean square error achievable by such estimator is Let us have the optimal linear MMSE estimator given asx^=Wy+b{\displaystyle {\hat {x}}=Wy+b}, where we are required to find the expression forW{\displaystyle W}andb{\displaystyle b}. It is required that the MMSE estimator be unbiased. This means, Plugging the expression forx^{\displaystyle {\hat {x}}}in above, we get wherex¯=E⁡{x}{\displaystyle {\bar {x}}=\operatorname {E} \{x\}}andy¯=E⁡{y}{\displaystyle {\bar {y}}=\operatorname {E} \{y\}}. Thus we can re-write the estimator as and the expression for estimation error becomes From the orthogonality principle, we can haveE⁡{(x^−x)(y−y¯)T}=0{\displaystyle \operatorname {E} \{({\hat {x}}-x)(y-{\bar {y}})^{T}\}=0}, where we takeg(y)=y−y¯{\displaystyle g(y)=y-{\bar {y}}}. 
Here the left-hand-side term is When equated to zero, we obtain the desired expression forW{\displaystyle W}as TheCXY{\displaystyle C_{XY}}is cross-covariance matrix between X and Y, andCY{\displaystyle C_{Y}}is auto-covariance matrix of Y. SinceCXY=CYXT{\displaystyle C_{XY}=C_{YX}^{T}}, the expression can also be re-written in terms ofCYX{\displaystyle C_{YX}}as Thus the full expression for the linear MMSE estimator is Since the estimatex^{\displaystyle {\hat {x}}}is itself a random variable withE⁡{x^}=x¯{\displaystyle \operatorname {E} \{{\hat {x}}\}={\bar {x}}}, we can also obtain its auto-covariance as Putting the expression forW{\displaystyle W}andWT{\displaystyle W^{T}}, we get Lastly, the covariance of linear MMSE estimation error will then be given by The first term in the third line is zero due to the orthogonality principle. SinceW=CXYCY−1{\displaystyle W=C_{XY}C_{Y}^{-1}}, we can re-writeCe{\displaystyle C_{e}}in terms of covariance matrices as This we can recognize to be the same asCe=CX−CX^.{\displaystyle C_{e}=C_{X}-C_{\hat {X}}.}Thus the minimum mean square error achievable by such a linear estimator is For the special case when bothx{\displaystyle x}andy{\displaystyle y}are scalars, the above relations simplify to whereρ=σXYσXσY{\displaystyle \rho ={\frac {\sigma _{XY}}{\sigma _{X}\sigma _{Y}}}}is thePearson's correlation coefficientbetweenx{\displaystyle x}andy{\displaystyle y}. The above two equations allows us to interpret the correlation coefficient either as normalized slope of linear regression or as square root of the ratio of two variances Whenρ=0{\displaystyle \rho =0}, we havex^=x¯{\displaystyle {\hat {x}}={\bar {x}}}andσe2=σX2{\displaystyle \sigma _{e}^{2}=\sigma _{X}^{2}}. In this case, no new information is gleaned from the measurement which can decrease the uncertainty inx{\displaystyle x}. On the other hand, whenρ=±1{\displaystyle \rho =\pm 1}, we havex^=σXYσY(y−y¯)+x¯{\displaystyle {\hat {x}}={\frac {\sigma _{XY}}{\sigma _{Y}}}(y-{\bar {y}})+{\bar {x}}}andσe2=0{\displaystyle \sigma _{e}^{2}=0}. Herex{\displaystyle x}is completely determined byy{\displaystyle y}, as given by the equation of straight line. Standard method likeGauss eliminationcan be used to solve the matrix equation forW{\displaystyle W}. A more numerically stable method is provided byQR decompositionmethod. Since the matrixCY{\displaystyle C_{Y}}is a symmetric positive definite matrix,W{\displaystyle W}can be solved twice as fast with theCholesky decomposition, while for large sparse systemsconjugate gradient methodis more effective.Levinson recursionis a fast method whenCY{\displaystyle C_{Y}}is also aToeplitz matrix. This can happen wheny{\displaystyle y}is awide sense stationaryprocess. In such stationary cases, these estimators are also referred to asWiener–Kolmogorov filters. Let us further model the underlying process of observation as a linear process:y=Ax+z{\displaystyle y=Ax+z}, whereA{\displaystyle A}is a known matrix andz{\displaystyle z}is random noise vector with the meanE⁡{z}=0{\displaystyle \operatorname {E} \{z\}=0}and cross-covarianceCXZ=0{\displaystyle C_{XZ}=0}. 
Here the required mean and the covariance matrices are

ȳ = A x̄,  C_Y = A C_X A^T + C_Z,  C_XY = C_X A^T.

Thus the expression for the linear MMSE estimator matrix W further modifies to

W = C_X A^T (A C_X A^T + C_Z)^{-1}.

Putting everything into the expression for x̂, we get

x̂ = C_X A^T (A C_X A^T + C_Z)^{-1} (y − A x̄) + x̄.

Lastly, the error covariance is

C_e = C_X − C_X A^T (A C_X A^T + C_Z)^{-1} A C_X.

The significant difference between the estimation problem treated above and those of least squares and the Gauss–Markov estimate is that the number of observations m (i.e., the dimension of y) need not be at least as large as the number of unknowns n (i.e., the dimension of x). The estimate for the linear observation process exists so long as the m-by-m matrix (A C_X A^T + C_Z)^{-1} exists; this is the case for any m if, for instance, C_Z is positive definite. Physically the reason for this property is that since x is now a random variable, it is possible to form a meaningful estimate (namely its mean) even with no measurements. Every new measurement simply provides additional information which may modify our original estimate. Another feature of this estimate is that for m < n, there need be no measurement error. Thus, we may have C_Z = 0, because as long as A C_X A^T is positive definite, the estimate still exists. Lastly, this technique can handle cases where the noise is correlated.

An alternative form of expression can be obtained by using the matrix identity

C_X A^T (A C_X A^T + C_Z)^{-1} = (A^T C_Z^{-1} A + C_X^{-1})^{-1} A^T C_Z^{-1},

which can be established by post-multiplying by (A C_X A^T + C_Z) and pre-multiplying by (A^T C_Z^{-1} A + C_X^{-1}), to obtain

W = (A^T C_Z^{-1} A + C_X^{-1})^{-1} A^T C_Z^{-1}

and

C_e = (A^T C_Z^{-1} A + C_X^{-1})^{-1}.

Since W can now be written in terms of C_e as W = C_e A^T C_Z^{-1}, we get a simplified expression for x̂ as

x̂ = C_e A^T C_Z^{-1} (y − A x̄) + x̄.

In this form the above expression can be easily compared with ridge regression, weighted least squares and the Gauss–Markov estimate. In particular, when C_X^{-1} = 0, corresponding to infinite variance of the a priori information concerning x, the result W = (A^T C_Z^{-1} A)^{-1} A^T C_Z^{-1} is identical to the weighted linear least squares estimate with C_Z^{-1} as the weight matrix. Moreover, if the components of z are uncorrelated and have equal variance such that C_Z = σ²I, where I is an identity matrix, then W = (A^T A)^{-1} A^T is identical to the ordinary least squares estimate. When a priori information is available as C_X^{-1} = λI and the z are uncorrelated and have equal variance, we have W = (A^T A + λI)^{-1} A^T, which is identical to the ridge regression solution.

In many real-time applications, observational data is not available in a single batch. Instead the observations are made in a sequence. One possible approach is to use the sequential observations to update an old estimate as additional data becomes available, leading to finer estimates. One crucial difference between batch estimation and sequential estimation is that sequential estimation requires an additional Markov assumption. In the Bayesian framework, such recursive estimation is easily facilitated using Bayes' rule.
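Before turning to the sequential case, here is a minimal NumPy sketch of the batch estimator just derived (all dimensions, prior moments, and noise levels below are invented for illustration). Following the earlier remark about Cholesky-type methods, it solves a linear system in C_Y rather than forming an explicit inverse:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative linear model: y = A x + z, with C_XZ = 0.
    n, m = 3, 5
    A = rng.standard_normal((m, n))
    x_bar = np.array([1.0, -2.0, 0.5])      # prior mean of x
    C_X = np.diag([4.0, 1.0, 2.25])         # prior covariance of x
    C_Z = 0.1 * np.eye(m)                   # noise covariance

    # Draw one realization of x and its noisy measurement y.
    x = rng.multivariate_normal(x_bar, C_X)
    y = A @ x + rng.multivariate_normal(np.zeros(m), C_Z)

    # W = C_X A^T (A C_X A^T + C_Z)^{-1}; solve C_Y W^T = A C_X instead of inverting.
    C_Y = A @ C_X @ A.T + C_Z
    W = np.linalg.solve(C_Y, A @ C_X).T

    x_hat = x_bar + W @ (y - A @ x_bar)     # linear MMSE estimate
    C_e = C_X - W @ A @ C_X                 # error covariance C_X - W C_YX
    print(x, x_hat, np.diag(C_e))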
Givenk{\displaystyle k}observations,y1,…,yk{\displaystyle y_{1},\ldots ,y_{k}}, Bayes' rule gives us the posterior density ofxk{\displaystyle x_{k}}as Thep(xk|y1,…,yk){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k})}is called the posterior density,p(yk|xk){\displaystyle p(y_{k}|x_{k})}is called the likelihood function, andp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}is the prior density ofk-th time step. Here we have assumed the conditional independence ofyk{\displaystyle y_{k}}from previous observationsy1,…,yk−1{\displaystyle y_{1},\ldots ,y_{k-1}}givenx{\displaystyle x}as This is the Markov assumption. The MMSE estimatex^k{\displaystyle {\hat {x}}_{k}}given thek-th observation is then the mean of the posterior densityp(xk|y1,…,yk){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k})}. With the lack of dynamical information on how the statex{\displaystyle x}changes with time, we will make a further stationarity assumption about the prior: Thus, the prior density fork-th time step is the posterior density of (k-1)-th time step. This structure allows us to formulate a recursive approach to estimation. In the context of linear MMSE estimator, the formula for the estimate will have the same form as before:x^=CXYCY−1(y−y¯)+x¯.{\displaystyle {\hat {x}}=C_{XY}C_{Y}^{-1}(y-{\bar {y}})+{\bar {x}}.}However, the mean and covariance matrices ofX{\displaystyle X}andY{\displaystyle Y}will need to be replaced by those of the prior densityp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}and likelihoodp(yk|xk){\displaystyle p(y_{k}|x_{k})}, respectively. For the prior densityp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}, its mean is given by the previous MMSE estimate, and its covariance matrix is given by the previous error covariance matrix, as per by the properties of MMSE estimators and the stationarity assumption. Similarly, for the linear observation process, the mean of the likelihoodp(yk|xk){\displaystyle p(y_{k}|x_{k})}is given byy¯k=Ax¯k=Ax^k−1{\displaystyle {\bar {y}}_{k}=A{\bar {x}}_{k}=A{\hat {x}}_{k-1}}and the covariance matrix is as before The difference between the predicted value ofYk{\displaystyle Y_{k}}, as given byy¯k=Ax^k−1{\displaystyle {\bar {y}}_{k}=A{\hat {x}}_{k-1}}, and its observed valueyk{\displaystyle y_{k}}gives the prediction errory~k=yk−y¯k{\displaystyle {\tilde {y}}_{k}=y_{k}-{\bar {y}}_{k}}, which is also referred to as innovation or residual. It is more convenient to represent the linear MMSE in terms of the prediction error, whose mean and covariance areE[y~k]=0{\displaystyle \mathrm {E} [{\tilde {y}}_{k}]=0}andCY~k=CYk|Xk{\displaystyle C_{{\tilde {Y}}_{k}}=C_{Y_{k}|X_{k}}}. Hence, in the estimate update formula, we should replacex¯{\displaystyle {\bar {x}}}andCX{\displaystyle C_{X}}byx^k−1{\displaystyle {\hat {x}}_{k-1}}andCek−1{\displaystyle C_{e_{k-1}}}, respectively. Also, we should replacey¯{\displaystyle {\bar {y}}}andCY{\displaystyle C_{Y}}byy¯k−1{\displaystyle {\bar {y}}_{k-1}}andCY~k{\displaystyle C_{{\tilde {Y}}_{k}}}. 
Lastly, we replace C_XY by

C_{X_k Ỹ_k} = C_{e_{k−1}} A^T.

Thus, we have the new estimate as a new observation y_k arrives as

x̂_k = x̂_{k−1} + C_{e_{k−1}} A^T (A C_{e_{k−1}} A^T + C_Z)^{-1} (y_k − A x̂_{k−1})

and the new error covariance as

C_{e_k} = C_{e_{k−1}} − C_{e_{k−1}} A^T (A C_{e_{k−1}} A^T + C_Z)^{-1} A C_{e_{k−1}}.

From the point of view of linear algebra, for sequential estimation, if we have an estimate x̂_1 based on measurements generating space Y_1, then after receiving another set of measurements, we should subtract out from these measurements that part that could be anticipated from the result of the first measurements. In other words, the updating must be based on that part of the new data which is orthogonal to the old data. The repeated use of the above two equations as more observations become available leads to recursive estimation techniques. The expressions can be more compactly written as

W_k = C_{e_{k−1}} A^T (A C_{e_{k−1}} A^T + C_Z)^{-1},
x̂_k = x̂_{k−1} + W_k (y_k − A x̂_{k−1}),
C_{e_k} = (I − W_k A) C_{e_{k−1}}.

The matrix W_k is often referred to as the Kalman gain factor. The alternative formulation of the above algorithm will give

C_{e_k}^{-1} = C_{e_{k−1}}^{-1} + A^T C_Z^{-1} A,
W_k = C_{e_k} A^T C_Z^{-1},
x̂_k = x̂_{k−1} + W_k (y_k − A x̂_{k−1}).

The repetition of these three steps as more data becomes available leads to an iterative estimation algorithm. The generalization of this idea to non-stationary cases gives rise to the Kalman filter. The three update steps outlined above indeed form the update step of the Kalman filter.

As an important special case, an easy-to-use recursive expression can be derived when at each k-th time instant the underlying linear observation process yields a scalar such that y_k = a_k^T x_k + z_k, where a_k is an n-by-1 known column vector whose values can change with time, x_k is an n-by-1 random column vector to be estimated, and z_k is a scalar noise term with variance σ_k². After the (k+1)-th observation, the direct use of the above recursive equations gives the expression for the estimate x̂_{k+1} as

x̂_{k+1} = x̂_k + w_{k+1} (y_{k+1} − a_{k+1}^T x̂_k),

where y_{k+1} is the new scalar observation and the gain factor w_{k+1} is an n-by-1 column vector given by

w_{k+1} = C_{e_k} a_{k+1} / (σ_{k+1}² + a_{k+1}^T C_{e_k} a_{k+1}).

The C_{e_{k+1}} is the n-by-n error covariance matrix given by

C_{e_{k+1}} = (I − w_{k+1} a_{k+1}^T) C_{e_k}.

Here, no matrix inversion is required. Also, the gain factor w_{k+1} depends on our confidence in the new data sample, as measured by the noise variance, versus that in the previous data. The initial values of x̂ and C_e are taken to be the mean and covariance of the a priori probability density function of x.

Alternative approaches: This important special case has also given rise to many other iterative methods (or adaptive filters), such as the least mean squares filter and the recursive least squares filter, that directly solve the original MSE optimization problem using stochastic gradient descent. However, since the estimation error e cannot be directly observed, these methods try to minimize the mean squared prediction error E{ỹ^T ỹ}. For instance, in the case of scalar observations, we have the gradient ∇_{x̂} E{ỹ²} = −2 E{ỹ a}. Thus, the update equation for the least mean square filter is given by

x̂_{k+1} = x̂_k + η_k a_k ỹ_k,

where η_k is the scalar step size and the expectation is approximated by the instantaneous value E{a_k ỹ_k} ≈ a_k ỹ_k. As we can see, these methods bypass the need for covariance matrices.
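A minimal sketch of the scalar recursive update above (all numbers are invented for illustration): as noted, the gain and covariance recursions require no matrix inversion.

    import numpy as np

    rng = np.random.default_rng(1)

    x_true = np.array([0.7, -1.3])   # fixed vector to be estimated
    sigma2 = 0.25                    # variance of each scalar noise term z_k

    x_hat = np.zeros(2)              # prior mean of x
    C_e = 10.0 * np.eye(2)           # prior covariance of x

    for k in range(200):
        a = rng.standard_normal(2)                   # known vector a_k
        y = a @ x_true + rng.normal(0.0, np.sqrt(sigma2))
        Ca = C_e @ a
        w = Ca / (a @ Ca + sigma2)                   # gain: C_e a / (a^T C_e a + s2)
        x_hat = x_hat + w * (y - a @ x_hat)          # update with the innovation
        C_e = C_e - np.outer(w, Ca)                  # C_e <- (I - w a^T) C_e

    print(x_hat)                                     # close to x_true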
In many practical applications, the observation noise is uncorrelated. That is,CZ{\displaystyle C_{Z}}is a diagonal matrix. In such cases, it is advantageous to consider the components ofy{\displaystyle y}as independent scalar measurements, rather than vector measurement. This allows us to reduce computation time by processing them×1{\displaystyle m\times 1}measurement vector asm{\displaystyle m}scalar measurements. The use of scalar update formula avoids matrix inversion in the implementation of the covariance update equations, thus improving the numerical robustness against roundoff errors. The update can be implemented iteratively as: whereℓ=1,2,…,m{\displaystyle \ell =1,2,\ldots ,m}, using the initial valuesCek+1(0)=Cek{\displaystyle C_{e_{k+1}}^{(0)}=C_{e_{k}}}andx^k+1(0)=x^k{\displaystyle {\hat {x}}_{k+1}^{(0)}={\hat {x}}_{k}}. The intermediate variablesCZk+1(ℓ){\displaystyle C_{Z_{k+1}}^{(\ell )}}is theℓ{\displaystyle \ell }-th diagonal element of them×m{\displaystyle m\times m}diagonal matrixCZk+1{\displaystyle C_{Z_{k+1}}}; whileAk+1(ℓ){\displaystyle A_{k+1}^{(\ell )}}is theℓ{\displaystyle \ell }-th row ofm×n{\displaystyle m\times n}matrixAk+1{\displaystyle A_{k+1}}. The final values areCek+1(m)=Cek+1{\displaystyle C_{e_{k+1}}^{(m)}=C_{e_{k+1}}}andx^k+1(m)=x^k+1{\displaystyle {\hat {x}}_{k+1}^{(m)}={\hat {x}}_{k+1}}. We shall take alinear predictionproblem as an example. Let a linear combination of observed scalar random variablesz1,z2{\displaystyle z_{1},z_{2}}andz3{\displaystyle z_{3}}be used to estimate another future scalar random variablez4{\displaystyle z_{4}}such thatz^4=∑i=13wizi{\displaystyle {\hat {z}}_{4}=\sum _{i=1}^{3}w_{i}z_{i}}. If the random variablesz=[z1,z2,z3,z4]T{\displaystyle z=[z_{1},z_{2},z_{3},z_{4}]^{T}}are real Gaussian random variables with zero mean and its covariance matrix given by then our task is to find the coefficientswi{\displaystyle w_{i}}such that it will yield an optimal linear estimatez^4{\displaystyle {\hat {z}}_{4}}. In terms of the terminology developed in the previous sections, for this problem we have the observation vectory=[z1,z2,z3]T{\displaystyle y=[z_{1},z_{2},z_{3}]^{T}}, the estimator matrixW=[w1,w2,w3]{\displaystyle W=[w_{1},w_{2},w_{3}]}as a row vector, and the estimated variablex=z4{\displaystyle x=z_{4}}as a scalar quantity. The autocorrelation matrixCY{\displaystyle C_{Y}}is defined as The cross correlation matrixCYX{\displaystyle C_{YX}}is defined as We now solve the equationCYWT=CYX{\displaystyle C_{Y}W^{T}=C_{YX}}by invertingCY{\displaystyle C_{Y}}and pre-multiplying to get So we havew1=2.57,{\displaystyle w_{1}=2.57,}w2=−0.142,{\displaystyle w_{2}=-0.142,}andw3=.5714{\displaystyle w_{3}=.5714}as the optimal coefficients forz^4{\displaystyle {\hat {z}}_{4}}. Computing the minimum mean square error then gives‖e‖min2=E⁡[z4z4]−WCYX=15−WCYX=.2857{\displaystyle \left\Vert e\right\Vert _{\min }^{2}=\operatorname {E} [z_{4}z_{4}]-WC_{YX}=15-WC_{YX}=.2857}.[2]Note that it is not necessary to obtain an explicit matrix inverse ofCY{\displaystyle C_{Y}}to compute the value ofW{\displaystyle W}. The matrix equation can be solved by well known methods such as Gauss elimination method. A shorter, non-numerical example can be found inorthogonality principle. Consider a vectory{\displaystyle y}formed by takingN{\displaystyle N}observations of a fixed but unknown scalar parameterx{\displaystyle x}disturbed by white Gaussian noise. 
We can describe the process by a linear equationy=1x+z{\displaystyle y=1x+z}, where1=[1,1,…,1]T{\displaystyle 1=[1,1,\ldots ,1]^{T}}. Depending on context it will be clear if1{\displaystyle 1}represents ascalaror a vector. Suppose that we know[−x0,x0]{\displaystyle [-x_{0},x_{0}]}to be the range within which the value ofx{\displaystyle x}is going to fall in. We can model our uncertainty ofx{\displaystyle x}by an aprioruniform distributionover an interval[−x0,x0]{\displaystyle [-x_{0},x_{0}]}, and thusx{\displaystyle x}will have variance ofσX2=x02/3.{\displaystyle \sigma _{X}^{2}=x_{0}^{2}/3.}. Let the noise vectorz{\displaystyle z}be normally distributed asN(0,σZ2I){\displaystyle N(0,\sigma _{Z}^{2}I)}whereI{\displaystyle I}is an identity matrix. Alsox{\displaystyle x}andz{\displaystyle z}are independent andCXZ=0{\displaystyle C_{XZ}=0}. It is easy to see that Thus, the linear MMSE estimator is given by We can simplify the expression by using the alternative form forW{\displaystyle W}as where fory=[y1,y2,…,yN]T{\displaystyle y=[y_{1},y_{2},\ldots ,y_{N}]^{T}}we havey¯=1TyN=∑i=1NyiN.{\displaystyle {\bar {y}}={\frac {1^{T}y}{N}}={\frac {\sum _{i=1}^{N}y_{i}}{N}}.} Similarly, the variance of the estimator is Thus the MMSE of this linear estimator is For very largeN{\displaystyle N}, we see that the MMSE estimator of a scalar with uniform aprior distribution can be approximated by the arithmetic average of all the observed data while the variance will be unaffected by dataσX^2=σX2,{\displaystyle \sigma _{\hat {X}}^{2}=\sigma _{X}^{2},}and the LMMSE of the estimate will tend to zero. However, the estimator is suboptimal since it is constrained to be linear. Had the random variablex{\displaystyle x}also been Gaussian, then the estimator would have been optimal. Notice, that the form of the estimator will remain unchanged, regardless of the apriori distribution ofx{\displaystyle x}, so long as the mean and variance of these distributions are the same. Consider a variation of the above example: Two candidates are standing for an election. Let the fraction of votes that a candidate will receive on an election day bex∈[0,1].{\displaystyle x\in [0,1].}Thus the fraction of votes the other candidate will receive will be1−x.{\displaystyle 1-x.}We shall takex{\displaystyle x}as a random variable with a uniform prior distribution over[0,1]{\displaystyle [0,1]}so that its mean isx¯=1/2{\displaystyle {\bar {x}}=1/2}and variance isσX2=1/12.{\displaystyle \sigma _{X}^{2}=1/12.}A few weeks before the election, two independent public opinion polls were conducted by two different pollsters. The first poll revealed that the candidate is likely to gety1{\displaystyle y_{1}}fraction of votes. Since some error is always present due to finite sampling and the particular polling methodology adopted, the first pollster declares their estimate to have an errorz1{\displaystyle z_{1}}with zero mean and varianceσZ12.{\displaystyle \sigma _{Z_{1}}^{2}.}Similarly, the second pollster declares their estimate to bey2{\displaystyle y_{2}}with an errorz2{\displaystyle z_{2}}with zero mean and varianceσZ22.{\displaystyle \sigma _{Z_{2}}^{2}.}Note that except for the mean and variance of the error, the error distribution is unspecified. How should the two polls be combined to obtain the voting prediction for the given candidate? As with previous example, we have Here, both theE⁡{y1}=E⁡{y2}=x¯=1/2{\displaystyle \operatorname {E} \{y_{1}\}=\operatorname {E} \{y_{2}\}={\bar {x}}=1/2}. 
Thus, we can obtain the LMMSE estimate as the linear combination of y_1 and y_2 as

x̂ = w_1 (y_1 − x̄) + w_2 (y_2 − x̄) + x̄,

where the weights are given by

w_1 = (1/σ_{Z_1}²) / (1/σ_{Z_1}² + 1/σ_{Z_2}² + 1/σ_X²),  w_2 = (1/σ_{Z_2}²) / (1/σ_{Z_1}² + 1/σ_{Z_2}² + 1/σ_X²).

Here, since the denominator term is constant, the poll with lower error is given higher weight in order to predict the election outcome. Lastly, the variance of x̂ is given by

σ_X̂² = σ_X² (1/σ_{Z_1}² + 1/σ_{Z_2}²) / (1/σ_{Z_1}² + 1/σ_{Z_2}² + 1/σ_X²),

which makes σ_X̂² smaller than σ_X². Thus, the LMMSE is given by

LMMSE = σ_X² − σ_X̂² = 1 / (1/σ_{Z_1}² + 1/σ_{Z_2}² + 1/σ_X²).

In general, if we have N pollsters, then x̂ = Σ_{i=1}^N w_i (y_i − x̄) + x̄, where the weight for the i-th pollster is given by w_i = (1/σ_{Z_i}²) / (Σ_{j=1}^N 1/σ_{Z_j}² + 1/σ_X²) and the LMMSE is given by LMMSE = 1 / (Σ_{j=1}^N 1/σ_{Z_j}² + 1/σ_X²).

Suppose that a musician is playing an instrument and that the sound is received by two microphones, each of them located at two different places. Let the attenuation of sound due to distance at each microphone be a_1 and a_2, which are assumed to be known constants. Similarly, let the noise at each microphone be z_1 and z_2, each with zero mean and variances σ_{Z_1}² and σ_{Z_2}² respectively. Let x denote the sound produced by the musician, which is a random variable with zero mean and variance σ_X². How should the recorded music from these two microphones be combined, after being synced with each other?

We can model the sound received by each microphone as

y_1 = a_1 x + z_1,  y_2 = a_2 x + z_2.

Here both E{y_1} = E{y_2} = 0. Thus, we can combine the two sounds as

x̂ = w_1 y_1 + w_2 y_2,

where the i-th weight is given as

w_i = (a_i/σ_{Z_i}²) / (a_1²/σ_{Z_1}² + a_2²/σ_{Z_2}² + 1/σ_X²).
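As a quick numerical sketch of the two-pollster example (the poll results and error variances below are invented), the combination and its LMMSE follow directly from the weight formula above:

    import numpy as np

    x_bar, var_x = 0.5, 1.0 / 12.0          # uniform prior on [0, 1]
    y = np.array([0.57, 0.62])              # poll results y_1, y_2
    var_z = np.array([0.002, 0.005])        # declared error variances

    denom = (1.0 / var_z).sum() + 1.0 / var_x
    w = (1.0 / var_z) / denom               # lower-error poll gets the higher weight
    x_hat = w @ (y - x_bar) + x_bar         # combined voting prediction
    lmmse = 1.0 / denom                     # minimum mean square error
    print(x_hat, lmmse)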
https://en.wikipedia.org/wiki/Minimum_mean_square_error
Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. For example, it is possible that variations in six observed variables mainly reflect the variations in two unobserved (underlying) variables. Factor analysis searches for such joint variations in response to unobserved latent variables. The observed variables are modelled as linear combinations of the potential factors plus "error" terms, hence factor analysis can be thought of as a special case of errors-in-variables models.[1]

Simply put, the factor loading of a variable quantifies the extent to which the variable is related to a given factor.[2]

A common rationale behind factor analytic methods is that the information gained about the interdependencies between observed variables can be used later to reduce the set of variables in a dataset. Factor analysis is commonly used in psychometrics, personality psychology, biology, marketing, product management, operations research, finance, and machine learning. It may help to deal with data sets where there are large numbers of observed variables that are thought to reflect a smaller number of underlying/latent variables. It is one of the most commonly used inter-dependency techniques, and is used when the relevant set of variables shows a systematic inter-dependence and the objective is to find out the latent factors that create a commonality.

The model attempts to explain a set of p observations in each of n individuals with a set of k common factors (f_{i,j}), where there are fewer factors per unit than observations per unit (k < p). Each individual has k of their own common factors, and these are related to the observations via the factor loading matrix (L ∈ R^{p×k}), for a single observation, according to

x_{i,m} − μ_i = ℓ_{i,1} f_{1,m} + ⋯ + ℓ_{i,k} f_{k,m} + ε_{i,m},

where x_{i,m} is the value of the i-th observation of the m-th individual, μ_i is the observation mean for the i-th observation, ℓ_{i,j} is the loading for the i-th observation of the j-th factor, f_{j,m} is the value of the j-th factor of the m-th individual, and ε_{i,m} is the (i,m)-th unobserved stochastic error term with mean zero and finite variance.

In matrix notation

X − M = L F + ε,

where observation matrix X ∈ R^{p×n}, loading matrix L ∈ R^{p×k}, factor matrix F ∈ R^{k×n}, error term matrix ε ∈ R^{p×n} and mean matrix M ∈ R^{p×n}, whereby the (i,m)-th element is simply M_{i,m} = μ_i.

Also we will impose the following assumptions on F:

1. F and ε are independent;
2. E[F] = 0;
3. Cov(F) = I, where I is the identity matrix (to make sure that the factors are uncorrelated).

Suppose Cov(X − M) = Σ. Then

Σ = Cov(X − M) = Cov(LF + ε),

and therefore, from conditions 1 and 2 imposed on F above, E[LF] = L E[F] = 0 and Cov(LF + ε) = Cov(LF) + Cov(ε), giving (using condition 3)

Σ = L Cov(F) L^T + Cov(ε) = L L^T + Cov(ε),

or, setting Ψ := Cov(ε),

Σ = L L^T + Ψ.

For any orthogonal matrix Q, if we set L′ = LQ and F′ = Q^T F, the criteria for being factors and factor loadings still hold. Hence a set of factors and factor loadings is unique only up to an orthogonal transformation.

Suppose a psychologist has the hypothesis that there are two kinds of intelligence, "verbal intelligence" and "mathematical intelligence", neither of which is directly observed.[note 1] Evidence for the hypothesis is sought in the examination scores from each of 10 different academic fields of 1000 students.
If each student is chosen randomly from a large population, then each student's 10 scores are random variables. The psychologist's hypothesis may say that for each of the 10 academic fields, the score averaged over the group of all students who share some common pair of values for verbal and mathematical "intelligences" is some constant times their level of verbal intelligence plus another constant times their level of mathematical intelligence, i.e., it is a linear combination of those two "factors". The numbers for a particular subject, by which the two kinds of intelligence are multiplied to obtain the expected score, are posited by the hypothesis to be the same for all intelligence level pairs, and are called the "factor loadings" for this subject.[clarification needed] For example, the hypothesis may hold that the predicted average student's aptitude in the field of astronomy is

{10 × the student's verbal intelligence} + {6 × the student's mathematical intelligence}.

The numbers 10 and 6 are the factor loadings associated with astronomy. Other academic subjects may have different factor loadings.

Two students assumed to have identical degrees of verbal and mathematical intelligence may have different measured aptitudes in astronomy because individual aptitudes differ from average aptitudes (predicted above) and because of measurement error itself. Such differences make up what is collectively called the "error", a statistical term that means the amount by which an individual, as measured, differs from what is average for or predicted by his or her levels of intelligence (see errors and residuals in statistics).

The observable data that go into factor analysis would be 10 scores of each of the 1000 students, a total of 10,000 numbers. The factor loadings and levels of the two kinds of intelligence of each student must be inferred from the data.

In the following, matrices will be indicated by indexed variables. "Academic subject" indices will be indicated using letters a, b and c, with values running from 1 to p, which is equal to 10 in the above example. "Factor" indices will be indicated using letters p, q and r, with values running from 1 to k, which is equal to 2 in the above example. "Instance" or "sample" indices will be indicated using letters i, j and k, with values running from 1 to N. In the example above, if a sample of N = 1000 students participated in the p = 10 exams, the i-th student's score for the a-th exam is given by x_{ai}. The purpose of factor analysis is to characterize the correlations between the variables x_a, of which the x_{ai} are a particular instance, or set of observations. In order for the variables to be on equal footing, they are normalized into standard scores z:

z_{ai} = (x_{ai} − x̄_a) / σ̂_a,

where the sample mean is

x̄_a = (1/N) Σ_i x_{ai}

and the sample variance is given by

σ̂_a² = (1/N) Σ_i (x_{ai} − x̄_a)².

The factor analysis model for this particular sample is then:

z_{1,i} = ℓ_{1,1} F_{1,i} + ℓ_{1,2} F_{2,i} + ε_{1,i}
⋮
z_{10,i} = ℓ_{10,1} F_{1,i} + ℓ_{10,2} F_{2,i} + ε_{10,i}

or, more succinctly:

z_{ai} = Σ_p ℓ_{ap} F_{pi} + ε_{ai},

where F_{1,i} is the i-th student's "verbal intelligence", F_{2,i} is the i-th student's "mathematical intelligence", and ℓ_{ap} are the factor loadings for the a-th subject, for p = 1, 2. In matrix notation, we have

Z = L F + ε.

Observe that doubling the scale on which "verbal intelligence" (the first component in each column of F) is measured, while simultaneously halving the factor loadings for verbal intelligence, makes no difference to the model.
Thus, no generality is lost by assuming that the standard deviation of the factors for verbal intelligence is1{\displaystyle 1}. Likewise for mathematical intelligence. Moreover, for similar reasons, no generality is lost by assuming the two factors areuncorrelatedwith each other. In other words: whereδpq{\displaystyle \delta _{pq}}is theKronecker delta(0{\displaystyle 0}whenp≠q{\displaystyle p\neq q}and1{\displaystyle 1}whenp=q{\displaystyle p=q}). The errors are assumed to be independent of the factors: Since any rotation of a solution is also a solution, this makes interpreting the factors difficult. See disadvantages below. In this particular example, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence without an outside argument. The values of the loadingsL{\displaystyle L}, the averagesμ{\displaystyle \mu }, and thevariancesof the "errors"ε{\displaystyle \varepsilon }must be estimated given the observed dataX{\displaystyle X}andF{\displaystyle F}(the assumption about the levels of the factors is fixed for a givenF{\displaystyle F}). The "fundamental theorem" may be derived from the above conditions: The term on the left is the(a,b){\displaystyle (a,b)}-term of the correlation matrix (ap×p{\displaystyle p\times p}matrix derived as the product of thep×N{\displaystyle p\times N}matrix of standardized observations with its transpose) of the observed data, and itsp{\displaystyle p}diagonal elements will be1{\displaystyle 1}s. The second term on the right will be a diagonal matrix with terms less than unity. The first term on the right is the "reduced correlation matrix" and will be equal to the correlation matrix except for its diagonal values which will be less than unity. These diagonal elements of the reduced correlation matrix are called "communalities" (which represent the fraction of the variance in the observed variable that is accounted for by the factors): The sample datazai{\displaystyle z_{ai}}will not exactly obey the fundamental equation given above due to sampling errors, inadequacy of the model, etc. The goal of any analysis of the above model is to find the factorsFpi{\displaystyle F_{pi}}and loadingsℓap{\displaystyle \ell _{ap}}which give a "best fit" to the data. In factor analysis, the best fit is defined as the minimum of the mean square error in the off-diagonal residuals of the correlation matrix:[3] This is equivalent to minimizing the off-diagonal components of the error covariance which, in the model equations have expected values of zero. This is to be contrasted with principal component analysis which seeks to minimize the mean square error of all residuals.[3]Before the advent of high-speed computers, considerable effort was devoted to finding approximate solutions to the problem, particularly in estimating the communalities by other means, which then simplifies the problem considerably by yielding a known reduced correlation matrix. This was then used to estimate the factors and the loadings. With the advent of high-speed computers, the minimization problem can be solved iteratively with adequate speed, and the communalities are calculated in the process, rather than being needed beforehand. 
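The decomposition Σ = LLᵀ + Ψ and its rotational non-uniqueness are easy to check numerically. The following sketch (the loadings and error variances are invented for illustration) simulates data from the model and verifies both facts:

    import numpy as np

    rng = np.random.default_rng(2)

    # Invented loadings for p = 4 observed variables and k = 2 factors.
    L = np.array([[0.9, 0.0],
                  [0.8, 0.3],
                  [0.1, 0.7],
                  [0.0, 0.8]])
    Psi = np.diag([0.19, 0.27, 0.50, 0.36])      # diagonal error covariance

    # Model covariance: Sigma = L L^T + Psi (here with unit diagonal,
    # as for standardized variables).
    Sigma = L @ L.T + Psi

    # Simulate x = L f + eps with E[f] = 0, Cov(f) = I, Cov(eps) = Psi,
    # and check that the sample covariance approaches Sigma.
    N = 200_000
    F = rng.standard_normal((2, N))
    eps = rng.multivariate_normal(np.zeros(4), Psi, size=N).T
    X = L @ F + eps
    print(np.max(np.abs(np.cov(X) - Sigma)))     # small for large N

    # Non-uniqueness: any orthogonal Q gives the same Sigma for L' = L Q.
    theta = 0.6
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.max(np.abs((L @ Q) @ (L @ Q).T + Psi - Sigma)))   # ~ 0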
TheMinResalgorithm is particularly suited to this problem, but is hardly the only iterative means of finding a solution. If the solution factors are allowed to be correlated (as in 'oblimin' rotation, for example), then the corresponding mathematical model usesskew coordinatesrather than orthogonal coordinates. The parameters and variables of factor analysis can be given a geometrical interpretation. The data (zai{\displaystyle z_{ai}}), the factors (Fpi{\displaystyle F_{pi}}) and the errors (εai{\displaystyle \varepsilon _{ai}}) can be viewed as vectors in anN{\displaystyle N}-dimensional Euclidean space (sample space), represented asza{\displaystyle \mathbf {z} _{a}},Fp{\displaystyle \mathbf {F} _{p}}andεa{\displaystyle {\boldsymbol {\varepsilon }}_{a}}respectively. Since the data are standardized, the data vectors are of unit length (||za||=1{\displaystyle ||\mathbf {z} _{a}||=1}). The factor vectors define ank{\displaystyle k}-dimensional linear subspace (i.e. a hyperplane) in this space, upon which the data vectors are projected orthogonally. This follows from the model equation and the independence of the factors and the errors:Fp⋅εa=0{\displaystyle \mathbf {F} _{p}\cdot {\boldsymbol {\varepsilon }}_{a}=0}. In the above example, the hyperplane is just a 2-dimensional plane defined by the two factor vectors. The projection of the data vectors onto the hyperplane is given by and the errors are vectors from that projected point to the data point and are perpendicular to the hyperplane. The goal of factor analysis is to find a hyperplane which is a "best fit" to the data in some sense, so it doesn't matter how the factor vectors which define this hyperplane are chosen, as long as they are independent and lie in the hyperplane. We are free to specify them as both orthogonal and normal (Fp⋅Fq=δpq{\displaystyle \mathbf {F} _{p}\cdot \mathbf {F} _{q}=\delta _{pq}}) with no loss of generality. After a suitable set of factors are found, they may also be arbitrarily rotated within the hyperplane, so that any rotation of the factor vectors will define the same hyperplane, and also be a solution. As a result, in the above example, in which the fitting hyperplane is two dimensional, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence, or whether the factors are linear combinations of both, without an outside argument. The data vectorsza{\displaystyle \mathbf {z} _{a}}have unit length. The entries of the correlation matrix for the data are given byrab=za⋅zb{\displaystyle r_{ab}=\mathbf {z} _{a}\cdot \mathbf {z} _{b}}. The correlation matrix can be geometrically interpreted as the cosine of the angle between the two data vectorsza{\displaystyle \mathbf {z} _{a}}andzb{\displaystyle \mathbf {z} _{b}}. The diagonal elements will clearly be1{\displaystyle 1}s and the off diagonal elements will have absolute values less than or equal to unity. The "reduced correlation matrix" is defined as The goal of factor analysis is to choose the fitting hyperplane such that the reduced correlation matrix reproduces the correlation matrix as nearly as possible, except for the diagonal elements of the correlation matrix which are known to have unit value. In other words, the goal is to reproduce as accurately as possible the cross-correlations in the data. 
Specifically, for the fitting hyperplane, the mean square error in the off-diagonal components is to be minimized, and this is accomplished by minimizing it with respect to a set of orthonormal factor vectors. It can be seen that The term on the right is just the covariance of the errors. In the model, the error covariance is stated to be a diagonal matrix and so the above minimization problem will in fact yield a "best fit" to the model: It will yield a sample estimate of the error covariance which has its off-diagonal components minimized in the mean square sense. It can be seen that since thez^a{\displaystyle {\hat {z}}_{a}}are orthogonal projections of the data vectors, their length will be less than or equal to the length of the projected data vector, which is unity. The square of these lengths are just the diagonal elements of the reduced correlation matrix. These diagonal elements of the reduced correlation matrix are known as "communalities": Large values of the communalities will indicate that the fitting hyperplane is rather accurately reproducing the correlation matrix. The mean values of the factors must also be constrained to be zero, from which it follows that the mean values of the errors will also be zero. Exploratory factor analysis (EFA) is used to identify complex interrelationships among items and group items that are part of unified concepts.[4]The researcher makes noa prioriassumptions about relationships among factors.[4] Confirmatory factor analysis (CFA) is a more complex approach that tests the hypothesis that the items are associated with specific factors.[4]CFA usesstructural equation modelingto test a measurement model whereby loading on the factors allows for evaluation of relationships between observed variables and unobserved variables.[4]Structural equation modeling approaches can accommodate measurement error and are less restrictive thanleast-squares estimation.[4]Hypothesized models are tested against actual data, and the analysis would demonstrate loadings of observed variables on the latent variables (factors), as well as the correlation between the latent variables.[4] Principal component analysis(PCA) is a widely used method for factor extraction, which is the first phase of EFA.[4]Factor weights are computed to extract the maximum possible variance, with successive factoring continuing until there is no further meaningful variance left.[4]The factor model must then be rotated for analysis.[4] Canonical factor analysis, also called Rao's canonical factoring, is a different method of computing the same model as PCA, which uses the principal axis method. Canonical factor analysis seeks factors that have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data. Common factor analysis, also calledprincipal factor analysis(PFA) or principal axis factoring (PAF), seeks the fewest factors which can account for the common variance (correlation) of a set of variables. Image factoring is based on thecorrelation matrixof predicted variables rather than actual variables, where each variable is predicted from the others usingmultiple regression. Alpha factoring is based on maximizing the reliability of factors, assuming variables are randomly sampled from a universe of variables. All other methods assume cases to be sampled and variables fixed. 
Factor regression model is a combinatorial model of factor model and regression model; or alternatively, it can be viewed as the hybrid factor model,[5] whose factors are partially known.

Researchers wish to avoid such subjective or arbitrary criteria for factor retention as "it made sense to me". A number of objective methods have been developed to solve this problem, allowing users to determine an appropriate range of solutions to investigate.[7] However, these different methods often disagree with one another as to the number of factors that ought to be retained. For instance, parallel analysis may suggest 5 factors while Velicer's MAP suggests 6, so the researcher may request both 5- and 6-factor solutions and discuss each in terms of their relation to external data and theory.

Horn's parallel analysis (PA):[8] A Monte-Carlo based simulation method that compares the observed eigenvalues with those obtained from uncorrelated normal variables. A factor or component is retained if the associated eigenvalue is bigger than the 95th percentile of the distribution of eigenvalues derived from the random data. PA is among the more commonly recommended rules for determining the number of components to retain,[7][9] but many programs fail to include this option (a notable exception being R).[10] However, Formann provided both theoretical and empirical evidence that its application might not be appropriate in many cases, since its performance is considerably influenced by sample size, item discrimination, and type of correlation coefficient.[11]

Velicer's (1976) MAP test,[12] as described by Courtney (2013),[13] "involves a complete principal components analysis followed by the examination of a series of matrices of partial correlations" (p. 397, though this quote does not occur in Velicer (1976) and the cited page number is outside the pages of the citation). The squared correlation for Step "0" (see Figure 4) is the average squared off-diagonal correlation for the unpartialed correlation matrix. On Step 1, the first principal component and its associated items are partialed out. Thereafter, the average squared off-diagonal correlation for the subsequent correlation matrix is then computed for Step 1. On Step 2, the first two principal components are partialed out and the resultant average squared off-diagonal correlation is again computed. The computations are carried out for k − 1 steps (k representing the total number of variables in the matrix). Thereafter, all of the average squared correlations for each step are lined up, and the step number in the analyses that resulted in the lowest average squared partial correlation determines the number of components or factors to retain.[12] By this method, components are maintained as long as the variance in the correlation matrix represents systematic variance, as opposed to residual or error variance.
Although methodologically akin to principal components analysis, the MAP technique has been shown to perform quite well in determining the number of factors to retain in multiple simulation studies.[7][14][15][16] This procedure is made available through SPSS's user interface,[13] as well as the psych package for the R programming language.[17][18]

Kaiser criterion: The Kaiser rule is to drop all components with eigenvalues under 1.0 – this being the eigenvalue equal to the information accounted for by an average single item.[19] The Kaiser criterion is the default in SPSS and most statistical software, but it is not recommended as the sole cut-off criterion for estimating the number of factors, as it tends to over-extract factors.[20] A variation of this method has been created in which a researcher calculates confidence intervals for each eigenvalue and retains only factors whose entire confidence interval is greater than 1.0.[14][21]

Scree plot:[22] The Cattell scree test plots the components on the X-axis and the corresponding eigenvalues on the Y-axis. As one moves to the right, toward later components, the eigenvalues drop. When the drop ceases and the curve makes an elbow toward a less steep decline, Cattell's scree test says to drop all further components after the one starting at the elbow. This rule is sometimes criticised for being amenable to researcher-controlled "fudging": since picking the "elbow" can be subjective (the curve may have multiple elbows or be smooth), the researcher may be tempted to set the cut-off at the number of factors desired by their research agenda.[citation needed]

Variance explained criteria: Some researchers simply use the rule of keeping enough factors to account for 90% (sometimes 80%) of the variation. Where the researcher's goal emphasizes parsimony (explaining variance with as few factors as possible), the criterion could be as low as 50%.

By placing a prior distribution over the number of latent factors and then applying Bayes' theorem, Bayesian models can return a probability distribution over the number of latent factors. This has been modeled using the Indian buffet process,[23] but can be modeled more simply by placing any discrete prior (e.g. a negative binomial distribution) on the number of components.

The output of PCA maximizes the variance accounted for by the first factor first, then the second factor, and so on. A disadvantage of this procedure is that most items load on the early factors, while very few items load on the later factors. This makes interpreting the factors by reading through a list of questions and loadings difficult, as every question is strongly correlated with the first few components, while very few questions are strongly correlated with the last few components.

Rotation serves to make the output easier to interpret. By choosing a different basis for the same principal components – that is, choosing different factors to express the same correlation structure – it is possible to create variables that are more easily interpretable. Rotations can be orthogonal or oblique; oblique rotations allow the factors to correlate.[24] This increased flexibility means that more rotations are possible, some of which may be better at achieving a specified goal. However, this can also make the factors more difficult to interpret, as some information is "double-counted" and included multiple times in different components; some factors may even appear to be near-duplicates of each other.
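As a concrete example of an orthogonal rotation, the widely used varimax criterion can be implemented with a short SVD-based iteration. This is a sketch assuming NumPy; gamma = 1 gives varimax, and the tolerance and iteration limit are arbitrary.

import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Varimax rotation of a loading matrix L (orthogonal rotation, sketch)."""
    p, k = L.shape
    Rot = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ Rot
        # Gradient of the varimax criterion, rotated back into the original basis.
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - (gamma / p) * Lr @ np.diag(np.sum(Lr**2, axis=0))))
        Rot = u @ vt                       # nearest orthogonal matrix
        var_new = np.sum(s)
        if var_new - var_old < tol * var_new:
            break
        var_old = var_new
    return L @ Rot

Because the rotation matrix is orthogonal, the communalities (row sums of squared loadings) are unchanged; only the distribution of loading magnitude across factors is simplified.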
Two broad classes of orthogonal rotations exist: those that look for sparse rows (where each row is a case, i.e. subject), and those that look for sparse columns (where each column is a variable). It can be difficult to interpret a factor structure when each variable loads on multiple factors.

Small changes in the data can sometimes tip a balance in the factor rotation criterion, so that a completely different factor rotation is produced. This can make it difficult to compare the results of different experiments. This problem is illustrated by a comparison of different studies of worldwide cultural differences. Each study has used different measures of cultural variables and produced a differently rotated factor analysis result. The authors of each study believed that they had discovered something new, and invented new names for the factors they found. A later comparison of the studies found that the results were rather similar when the unrotated results were compared. The common practice of factor rotation has obscured the similarity between the results of the different studies.[25]

Higher-order factor analysis is a statistical method consisting of repeating the sequence factor analysis – oblique rotation – factor analysis of the rotated factors. Its merit is to enable the researcher to see the hierarchical structure of the studied phenomena. To interpret the results, one proceeds either by post-multiplying the primary factor pattern matrix by the higher-order factor pattern matrices (Gorsuch, 1983), perhaps applying a Varimax rotation to the result (Thompson, 1990), or by using a Schmid–Leiman solution (SLS; Schmid & Leiman, 1957; also known as the Schmid–Leiman transformation), which attributes the variation from the primary factors to the second-order factors.

Factor analysis is related to principal component analysis (PCA), but the two are not identical.[26] There has been significant controversy in the field over the differences between the two techniques. PCA can be considered a more basic version of exploratory factor analysis (EFA) that was developed in the early days, prior to the advent of high-speed computers. Both PCA and factor analysis aim to reduce the dimensionality of a set of data, but the approaches they take to do so are different. Factor analysis is clearly designed with the objective of identifying certain unobservable factors from the observed variables, whereas PCA does not directly address this objective; at best, PCA provides an approximation to the required factors.[27] From the point of view of exploratory analysis, the eigenvalues of PCA are inflated component loadings, i.e., contaminated with error variance.[28][29][30][31][32][33]

Whilst EFA and PCA are treated as synonymous techniques in some fields of statistics, this has been criticised.[34][35] Factor analysis "deals with the assumption of an underlying causal structure: [it] assumes that the covariation in the observed variables is due to the presence of one or more latent variables (factors) that exert causal influence on these observed variables".[36] In contrast, PCA neither assumes nor depends on such an underlying causal relationship. Researchers have argued that the distinctions between the two techniques may mean that there are objective benefits for preferring one over the other based on the analytic goal. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.
Factor analysis has been used successfully where adequate understanding of the system permits good initial model formulations. PCA employs a mathematical transformation of the original data with no assumptions about the form of the covariance matrix. The objective of PCA is to determine linear combinations of the original variables and select a few that can be used to summarize the data set without losing much information.[37]

Fabrigar et al. (1999)[34] address a number of reasons used to suggest that PCA is not equivalent to factor analysis: factor analysis takes into account the random error that is inherent in measurement, whereas PCA fails to do so. This point is exemplified by Brown (2009),[38] who indicated, in respect to the correlation matrices involved in the calculations: "In PCA, 1.00s are put in the diagonal meaning that all of the variance in the matrix is to be accounted for (including variance unique to each variable, variance common among variables, and error variance). That would, therefore, by definition, include all of the variance in the variables. In contrast, in EFA, the communalities are put in the diagonal meaning that only the variance shared with other variables is to be accounted for (excluding variance unique to each variable and error variance). That would, therefore, by definition, include only variance that is common among the variables." For this reason, Brown (2009) recommends using factor analysis when theoretical ideas about relationships between variables exist, whereas PCA should be used if the goal of the researcher is to explore patterns in their data. The differences between PCA and factor analysis (FA) are further illustrated by Suhr (2009).[35]

Charles Spearman was the first psychologist to discuss common factor analysis,[39] doing so in his 1904 paper.[40] The paper provided few details about his methods and was concerned with single-factor models.[41] He discovered that schoolchildren's scores on a wide variety of seemingly unrelated subjects were positively correlated, which led him to postulate that a single general mental ability, or g, underlies and shapes human cognitive performance.

The initial development of common factor analysis with multiple factors was given by Louis Thurstone in two papers in the early 1930s,[42][43] summarized in his 1935 book, The Vectors of Mind.[44] Thurstone introduced several important factor analysis concepts, including communality, uniqueness, and rotation.[45] He advocated for "simple structure" and developed methods of rotation that could be used as a way to achieve such structure.[39]

In Q methodology, William Stephenson, a student of Spearman, distinguished between R factor analysis, oriented toward the study of inter-individual differences, and Q factor analysis, oriented toward subjective intra-individual differences.[46][47]

Raymond Cattell was a strong advocate of factor analysis and psychometrics and used Thurstone's multi-factor theory to explain intelligence. Cattell also developed the scree test and similarity coefficients.

Factor analysis is used to identify "factors" that explain a variety of results on different tests. For example, intelligence research found that people who get a high score on a test of verbal ability are also good on other tests that require verbal abilities.
Researchers explained this by using factor analysis to isolate one factor, often called verbal intelligence, which represents the degree to which someone is able to solve problems involving verbal skills.[citation needed]

Factor analysis in psychology is most often associated with intelligence research. However, it has also been used to find factors in a broad range of domains such as personality, attitudes, and beliefs. It is linked to psychometrics, as it can assess the validity of an instrument by finding whether the instrument indeed measures the postulated factors.[citation needed]

Factor analysis is a frequently used technique in cross-cultural research. It serves the purpose of extracting cultural dimensions. The best known cultural dimensions models are those elaborated by Geert Hofstede, Ronald Inglehart, Christian Welzel, Shalom Schwartz and Michael Minkov. A popular visualization is Inglehart and Welzel's cultural map of the world.[25]

In an early 1965 study, political systems around the world were examined via factor analysis to construct related theoretical models and research, compare political systems, and create typological categories.[50] For these purposes, seven basic political dimensions were identified in this study, which are related to a wide variety of political behaviour: these dimensions are Access, Differentiation, Consensus, Sectionalism, Legitimation, Interest, and Leadership Theory and Research. Other political scientists have explored the measurement of internal political efficacy using four new questions added to the 1988 National Election Study. Factor analysis was used there to find that these items measure a single concept distinct from external efficacy and political trust, and that these four questions provided the best measure of internal political efficacy up to that point in time.[51]

The basic steps begin with data collection, which is usually done by marketing research professionals. Survey questions ask the respondent to rate a product sample or descriptions of product concepts on a range of attributes. Anywhere from five to twenty attributes are chosen; they could include things like ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products are coded and input into a statistical program such as R, SPSS, SAS, Stata, STATISTICA, JMP, or SYSTAT.

The analysis isolates the underlying factors that explain the data using a matrix of associations.[52] Factor analysis is an interdependence technique: the complete set of interdependent relationships is examined, with no specification of dependent variables, independent variables, or causality. Factor analysis assumes that all the rating data on different attributes can be reduced down to a few important dimensions. This reduction is possible because some attributes may be related to each other. The rating given to any one attribute is partially the result of the influence of other attributes. The statistical algorithm deconstructs the rating (called a raw score) into its various components and reconstructs the partial scores into underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a factor loading.
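As a toy illustration of this workflow, the sketch below fits a two-factor model to a ratings matrix and computes factor loadings as correlations between raw attribute scores and factor scores. It assumes scikit-learn; the data here are random placeholders, so the resulting "factors" carry no substantive meaning.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.normal(size=(200, 6))        # 200 respondents x 6 attributes (toy data)
fa = FactorAnalysis(n_components=2).fit(ratings)
scores = fa.transform(ratings)             # factor scores, one row per respondent
# Loading of attribute j on factor k, estimated as the correlation between
# the raw attribute scores and the factor scores.
loadings = np.corrcoef(ratings.T, scores.T)[:6, 6:]
print(loadings.round(2))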
Factor analysis has also been widely used in physical sciences such as geochemistry, hydrochemistry,[53] astrophysics and cosmology, as well as biological sciences such as ecology, molecular biology, neuroscience and biochemistry.

In groundwater quality management, it is important to relate the spatial distribution of different chemical parameters to different possible sources, which have different chemical signatures. For example, a sulfide mine is likely to be associated with high levels of acidity, dissolved sulfates and transition metals. These signatures can be identified as factors through R-mode factor analysis, and the location of possible sources can be suggested by contouring the factor scores.[54] In geochemistry, different factors can correspond to different mineral associations, and thus to mineralisation.[55]

Factor analysis can be used for summarizing high-density oligonucleotide DNA microarray data at the probe level for Affymetrix GeneChips. In this case, the latent variable corresponds to the RNA concentration in a sample.[56]

Factor analysis has been implemented in several statistical analysis programs since the 1980s.
https://en.wikipedia.org/wiki/Factor_analysis
Digital image processing is the use of a digital computer to process digital images through an algorithm.[1][2] As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. The generation and development of digital image processing have been shaped mainly by three factors: first, the development of computers;[3] second, the development of mathematics (especially the creation and improvement of discrete mathematics theory);[4] and third, the increased demand for a wide range of applications in environment, agriculture, military, industry and medical science.[5]

Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s, at Bell Laboratories, the Jet Propulsion Laboratory, Massachusetts Institute of Technology, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.[6] The purpose of early image processing was to improve the quality of the image for human viewers: the input is a low-quality image, and the output is an image with improved quality. Common image processing tasks include image enhancement, restoration, encoding, and compression.

The first successful application was at the American Jet Propulsion Laboratory (JPL). There, image processing techniques such as geometric correction, gradation transformation and noise removal were applied to the thousands of lunar photos sent back by the space probe Ranger 7 in 1964, taking into account the position of the Sun and the environment of the Moon, and the Moon's surface was successfully mapped by computer. Later, more complex image processing was performed on the nearly 100,000 photos sent back by the spacecraft, so that the topographic map, color map and panoramic mosaic of the Moon were obtained, which achieved extraordinary results and laid a solid foundation for human landing on the Moon.[7]

The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. This led to images being processed in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computation-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing became the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.

The basis for modern image sensors is metal–oxide–semiconductor (MOS) technology,[8] invented at Bell Labs between 1955 and 1960.[9][10][11][12][13][14] This led to the development of digital semiconductor image sensors, including the charge-coupled device (CCD) and later the CMOS sensor.[8]

The charge-coupled device was invented by Willard S. Boyle and George E.
Smith at Bell Labs in 1969.[15] While researching MOS technology, they realized that an electric charge was the analogue of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next.[8] The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.[16]

The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels.[17][18] The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985.[19] The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993.[20] By 2007, sales of CMOS sensors had surpassed CCD sensors.[21]

MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 μm NMOS integrated circuit sensor chip.[22][23] Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.[24][25]

An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972.[26] DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992.[27] JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet.[28] Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos,[29] with several billion JPEG images produced every day as of 2015.[30]

Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communication of electronic image data are prohibitive without the use of compression.[31][32] JPEG 2000 image compression is used by the DICOM standard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data.[33]

Electronic signal processing was revolutionized by the wide adoption of MOS technology in the 1970s.[34] MOS integrated circuit technology was the basis for the first single-chip microprocessors and microcontrollers in the early 1970s,[35] and then the first single-chip digital signal processor (DSP) chips in the late 1970s.[36][37] DSP chips have since been widely used in digital image processing.[36]

The discrete cosine transform (DCT) image compression algorithm has been widely implemented in DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely used for encoding, decoding, video coding, audio coding, multiplexing, control signals, signaling, analog-to-digital conversion, formatting luminance and color differences, and color formats such as YUV444 and YUV411.
DCTs are also used for encoding operations such as motion estimation, motion compensation, inter-frame prediction, quantization, perceptual weighting, entropy encoding, variable encoding, and motion vectors, and for decoding operations such as the inverse operation between different color formats (YIQ, YUV and RGB) for display purposes. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.[38]

Digital image processing allows the use of much more complex algorithms, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analogue means.

Some techniques which are used in digital image processing include digital filtering: digital filters are used to blur and sharpen digital images. Filtering can be performed in the spatial domain (by convolution with a mask) or in the frequency (Fourier) domain. The following example displays the log-magnitude spectrum of a checkerboard image:[40]

image = checkerboard
F = Fourier transform of image
show image: log(1 + |F|)

Images are typically padded before being transformed to the Fourier space; highpass-filtered images illustrate the consequences of different padding techniques: the highpass filter shows extra edges when zero padded compared to the repeated edge padding.

Affine transformations enable basic image transformations including scale, rotate, translate, mirror and shear, as is shown in the accompanying examples.[40] To apply an affine matrix to an image, the image is converted to a matrix in which each entry corresponds to the pixel intensity at that location. Then each pixel's location can be represented as a vector indicating the coordinates of that pixel in the image, [x, y], where x and y are the row and column of a pixel in the image matrix. This allows the coordinate to be multiplied by an affine-transformation matrix, which gives the position that the pixel value will be copied to in the output image.

However, to allow transformations that require translation, 3-dimensional homogeneous coordinates are needed. The third dimension is usually set to a non-zero constant, usually 1, so that the new coordinate is [x, y, 1]. This allows the coordinate vector to be multiplied by a 3×3 matrix, enabling translation shifts; the third dimension, i.e. the constant 1, is what makes translation possible.

Because matrix multiplication is associative, multiple affine transformations can be combined into a single affine transformation by multiplying the matrix of each individual transformation in the order that the transformations are done. This results in a single matrix that, when applied to a point vector, gives the same result as all the individual transformations performed on the vector [x, y, 1] in sequence. Thus a sequence of affine transformation matrices can be reduced to a single affine transformation matrix. For example, 2-dimensional coordinates only permit rotation about the origin (0, 0). But 3-dimensional homogeneous coordinates can be used to first translate any point to (0, 0), then perform the rotation, and lastly translate the origin (0, 0) back to the original point (the opposite of the first translation).
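A minimal NumPy sketch of this translate–rotate–translate composition follows; the centre point and angle are arbitrary illustrative values.

import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# Rotation about an arbitrary point (cx, cy): translate it to the origin,
# rotate, then translate back. Matrices compose right-to-left.
cx, cy, theta = 10.0, 5.0, np.pi / 4
M = translate(cx, cy) @ rotate(theta) @ translate(-cx, -cy)  # single combined matrix
p = np.array([12.0, 5.0, 1.0])     # homogeneous coordinates [x, y, 1]
print(M @ p)                        # same result as applying the three steps in sequence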
These three affine transformations can be combined into a single matrix – thus allowing rotation around any point in the image.[41]

Mathematical morphology (MM) is a nonlinear image processing framework that analyzes shapes within images by probing local pixel neighborhoods using a small, predefined function called a structuring element. In the context of grayscale images, MM is especially useful for denoising through dilation and erosion – primitive operators that can be combined to build more complex filters.

Suppose we have a grayscale image f and a structuring element B; reading the values off the worked example below, f = {\displaystyle {\begin{bmatrix}45&50&65\\40&60&55\\25&15&5\end{bmatrix}}} and B = {\displaystyle {\begin{bmatrix}1&2&1\\2&1&1\\1&0&3\end{bmatrix}}}. Here, S = {−1, 0, 1}² defines the neighborhood of relative coordinates (m, n) over which local operations are computed, and the values of B(m, n) bias the image during dilation and erosion.

Dilation: {\displaystyle (f\oplus B)(i,j)=\max _{(m,n)\in {\mathcal {S}}}{\Bigl \{}f(i+m,j+n)+B(m,n){\Bigr \}}.}

For example, at the centre pixel (i, j) = (1, 1), where each term is f(1+m, 1+n) + B(m, n):

(f ⊕ B)(1, 1) = max(45+1, 50+2, 65+1, 40+2, 60+1, 55+1, 25+1, 15+0, 5+3) = 66.

Erosion: {\displaystyle (f\ominus B)(i,j)=\min _{(m,n)\in {\mathcal {S}}}{\Bigl \{}f(i+m,j+n)-B(m,n){\Bigr \}}.}

Likewise, at the centre pixel:

(f ⊖ B)(1, 1) = min(45−1, 50−2, 65−1, 40−2, 60−1, 55−1, 25−1, 15−0, 5−3) = 2.

After applying dilation to f: {\displaystyle {\begin{bmatrix}45&50&65\\40&66&55\\25&15&5\end{bmatrix}}}

After applying erosion to f: {\displaystyle {\begin{bmatrix}45&50&65\\40&2&55\\25&15&5\end{bmatrix}}}

MM operations, such as opening and closing, are composite processes that utilize both dilation and erosion to modify the structure of an image. These operations are particularly useful for tasks such as noise removal, shape smoothing, and object separation. For example, applying opening to an image f with a structuring element B would first reduce small details (through erosion) and then restore the main shapes (through dilation). This ensures that unwanted noise is removed without significantly altering the size or shape of larger objects. By contrast, applying closing to the same image f would fill in small gaps within objects, such as connecting breaks in thin lines or closing small holes, while ensuring that the surrounding areas are not significantly affected. Both opening and closing can be visualized as ways of refining the structure of an image: opening simplifies and removes small, unnecessary details, while closing consolidates and connects objects to form more cohesive structures.
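The worked example above can be reproduced in a few lines of Python. This is a sketch: only the centre pixel is computed, so no border handling is needed, and the index convention f(x, y) maps to array element f[y, x].

import numpy as np

f = np.array([[45, 50, 65],
              [40, 60, 55],
              [25, 15,  5]], dtype=float)   # grayscale image from the example
B = np.array([[1, 2, 1],
              [2, 1, 1],
              [1, 0, 3]], dtype=float)      # structuring element values B(m, n)

def dilate_at(f, B, i, j):
    # (f ⊕ B)(i, j) = max over the 3x3 neighbourhood of f(i+m, j+n) + B(m, n)
    return max(f[j + n, i + m] + B[n + 1, m + 1]
               for m in (-1, 0, 1) for n in (-1, 0, 1))

def erode_at(f, B, i, j):
    # (f ⊖ B)(i, j) = min over the 3x3 neighbourhood of f(i+m, j+n) - B(m, n)
    return min(f[j + n, i + m] - B[n + 1, m + 1]
               for m in (-1, 0, 1) for n in (-1, 0, 1))

print(dilate_at(f, B, 1, 1))   # 66.0, matching the worked dilation example
print(erode_at(f, B, 1, 1))    # 2.0, matching the worked erosion example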
Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from their image sensor into a color-corrected image in a standard image file format. Additional post-processing techniques increase edge sharpness or color saturation to create more natural-looking images.

Westworld (1973) was the first feature film to use digital image processing to pixellate photography to simulate an android's point of view.[42] Image processing is also vastly used to produce the chroma key effect that replaces the background of actors with natural or artistic scenery.

Face detection can be implemented with mathematical morphology, the discrete cosine transform (DCT), and horizontal projection. The feature-based method of face detection uses skin tone, edge detection, face shape, and the features of a face (such as eyes and mouth) to achieve face detection. The skin tone, face shape, and all the unique elements that only the human face has can be described as features.

Image quality can be influenced by camera vibration, over-exposure, a too-centralized gray level distribution, noise, and so on. For example, a noise problem can be solved by a smoothing method, while a gray level distribution problem can be improved by histogram equalization.

Smoothing method: in drawing, if some color is unsatisfactory, one can take some of the color around it and average them; this is an easy way to think of the smoothing method. Smoothing can be implemented with a mask and convolution. Take the small image and mask below for instance.

The image is {\displaystyle {\begin{bmatrix}2&5&6&5\\3&1&4&6\\1&28&30&2\\7&3&2&2\end{bmatrix}}} and the mask is {\displaystyle {\begin{bmatrix}1/9&1/9&1/9\\1/9&1/9&1/9\\1/9&1/9&1/9\end{bmatrix}}}.

After convolution and smoothing, the image is {\displaystyle {\begin{bmatrix}2&5&6&5\\3&9&10&6\\1&9&9&2\\7&3&2&2\end{bmatrix}}}.

Observe image[1, 1], image[1, 2], image[2, 1], and image[2, 2]: the original pixel values are 1, 4, 28, 30, and after the smoothing mask they become 9, 10, 9, 9 respectively. Each new pixel is the rounded mean of its 3×3 neighbourhood:

new image[1, 1] = round(1/9 × (2+5+6+3+1+4+1+28+30)) = round(80/9) = 9
new image[1, 2] = round(1/9 × (5+6+5+1+4+6+28+30+2)) = round(87/9) = 10
new image[2, 1] = round(1/9 × (3+1+4+1+28+30+7+3+2)) = round(79/9) = 9
new image[2, 2] = round(1/9 × (1+4+6+28+30+2+3+2+2)) = round(78/9) = 9
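A short NumPy sketch reproducing the interior values of this example (rounded 3×3 means; illustrative only, with no treatment of the border pixels):

import numpy as np

image = np.array([[2,  5,  6, 5],
                  [3,  1,  4, 6],
                  [1, 28, 30, 2],
                  [7,  3,  2, 2]])
mask = np.full((3, 3), 1 / 9)               # 3x3 averaging mask

out = image.copy()
for r in range(1, 3):                        # interior pixels only, as in the example
    for c in range(1, 3):
        out[r, c] = int(np.round(np.sum(image[r-1:r+2, c-1:c+2] * mask)))
print(out)                                   # matches the smoothed matrix above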
Gray level histogram method: given the gray level histogram of an image, changing the histogram to a uniform distribution is what we call histogram equalization. In discrete time, the area under the gray level histogram is {\displaystyle \sum _{i=0}^{k}H(p_{i})} while the area under the uniform distribution is {\displaystyle \sum _{i=0}^{k}G(q_{i})}. It is clear that equalization does not change the total area, so {\displaystyle \sum _{i=0}^{k}H(p_{i})=\sum _{i=0}^{k}G(q_{i})}.

For the uniform distribution, the probability of q_i is {\displaystyle {\tfrac {N^{2}}{q_{k}-q_{0}}}} for 0 < i < k. In continuous time, the equation is {\displaystyle \int _{q_{0}}^{q}{\tfrac {N^{2}}{q_{k}-q_{0}}}\,ds=\int _{p_{0}}^{p}H(s)\,ds}. Moreover, based on the definition of a function, the gray level histogram method amounts to finding a function f that satisfies f(p) = q.

As an example, in MATLAB, salt-and-pepper noise with density 0.01 can be added to the original image in order to create a noisy image.
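A standard histogram equalization routine implementing this mapping can be sketched as follows (assuming NumPy and 8-bit gray levels; the test image is a synthetic placeholder):

import numpy as np

def equalize(gray, levels=256):
    """Remap gray levels so the cumulative histogram becomes approximately uniform."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                          # normalized cumulative distribution
    lut = np.round((levels - 1) * cdf).astype(np.uint8)  # the function f with f(p) = q
    return lut[gray]

img = np.random.default_rng(0).integers(60, 120, size=(64, 64), dtype=np.uint8)
eq = equalize(img)                               # output spreads over the full 0..255 range
print(img.min(), img.max(), eq.min(), eq.max())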
https://en.wikipedia.org/wiki/Image_processing
SECU, the Stora Enso Cargo Unit, is a type of intermodal container (shipping container) built to transport bulk cargo such as paper by rail and ship. It was invented and is used by Stora Enso (a forest and paper company). The ports used are mainly in producer countries such as Finland (Kotka, Oulu) and Sweden (Gothenburg), and in consumer countries Belgium (Zeebrugge), the UK (Tilbury, Immingham) and Germany (Lübeck).

A SECU looks like a larger standard 40-foot ISO container, measuring 13.8 × 3.6 × 4.375 m (45.28 × 11.81 × 14.35 ft)[1] and carrying a net weight of 80 metric tons (79 long tons; 88 short tons) of cargo. By contrast, a 40-foot container is 12.2 × 2.7 × 2.4 m (40.0 × 8.9 × 7.9 ft) and can carry 26.5 metric tons (26.1 long tons; 29.2 short tons) of cargo. The benefit is that the larger capacity reduces the number of containers needed, and therefore their handling cost. The drawback is that special care is needed to handle them: a SECU is too big and heavy to be transported by road (ISO containers are designed to fit roads), so SECUs are transported only by rail and ship. A special vehicle or crane is used to load and unload them,[2] and special railcars are also needed. They can be transported on truck ferries, but do not fit normal container ships.

The Stora Enso Cargo Unit has fixed legs, so that the inner floor of an unloaded container sits at a height of 874 mm (34.4 in); the basic container (without legs) has a square outer cross-section of 3,600 mm (11 ft 10 in).[1][3]
https://en.wikipedia.org/wiki/SECU_(container)
The null coalescing operator is a binary operator that is part of the syntax for a basic conditional expression in several programming languages, such as (in alphabetical order): C#[1] since version 2.0,[2] Dart[3] since version 1.12.0,[4] PHP since version 7.0.0,[5] Perl since version 5.10 as logical defined-or,[6] PowerShell since 7.0.0,[7] and Swift[8] as the nil-coalescing operator. It is most commonly written as x ?? y, but this varies across programming languages. While its behavior differs between implementations, the null coalescing operator generally returns the result of its left-most operand if it exists and is not null, and otherwise returns the right-most operand. This behavior allows a default value to be defined for cases where a more specific value is not available.

Like the binary Elvis operator, usually written as x ?: y, the null coalescing operator is a short-circuiting operator and thus does not evaluate the second operand if its value is not used, which is significant if its evaluation has side-effects.

In Bourne shell (and derivatives), the expansion ${parameter:-word} serves this purpose: "If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted."[9]

In C#, the null coalescing operator is ??. It is most often used to simplify expressions. For example, if one wishes to implement some C# code to give a page a default title if none is present, one may write

string pageTitle = suppliedTitle ?? "Default Title";

instead of the more verbose ternary form

string pageTitle = (suppliedTitle == null) ? "Default Title" : suppliedTitle;

or an explicit if/else statement. All three forms result in the same value being stored in the variable named pageTitle, but suppliedTitle is referenced only once when using the ?? operator, and twice in the other two versions. The operator can also be used multiple times in the same expression, e.g. number = a ?? b ?? c (names illustrative): once a non-null value is assigned to number, or the final operand is reached (which may or may not be null), the expression is completed.

If a variable should be changed to another value when its value evaluates to null, the ??= null coalescing assignment operator introduced in C# 8.0 can be used: a ??= b; is a more concise version of a = a ?? b;. In combination with the null-conditional operator ?. or the null-conditional element access operator ?[], the null coalescing operator can be used to provide a default value if an object or an object's member is null; for example, page?.Title ?? "Default Title" returns the default title if either the page object is null or page is not null but its Title property is.

As of ColdFusion 11[10] and Railo 4.1,[11] CFML supports the null coalescing operator as a variation of the ternary operator, ?:. It is functionally and syntactically equivalent to its C# counterpart, above.

Missing values in Apache FreeMarker will normally cause exceptions. However, both missing and null values can be handled, with an optional default value, or the output can be left blank.[12]

JavaScript's nearest operator is ??, the "nullish coalescing operator", which was added to the standard in ECMAScript's 11th edition.[13] In earlier versions, it could be used via a Babel plugin, and in TypeScript. It evaluates its left-hand operand and, if the result value is not "nullish" (null or undefined), takes that value as its result; otherwise, it evaluates the right-hand operand and takes the resulting value as its result. For example, in a = b ?? 3, a is assigned the value of b if b is not null or undefined, and 3 otherwise.

Before the nullish coalescing operator, programmers would use the logical OR operator (||). But where ?? looks specifically for null or undefined, the || operator looks for any falsy value: null, undefined, "", 0, NaN, and of course, false.
For example, in a = b || 3, a is assigned the value of b if the value of b is truthy, and 3 otherwise.

Kotlin uses the ?: operator.[14] This is an unusual choice of symbol, given that ?: is typically used for the Elvis operator, not null coalescing, but it was inspired by Groovy, where null is considered false.

In Objective-C, the nil coalescing operator is ?:. It can be used to provide a default for nil references: a ?: b is the same as writing a ? a : b.

In Perl (starting with version 5.10), the operator is //, and the equivalent Perl code is possibly_null_value // value_if_null. The possibly_null_value is evaluated as null or not-null (in Perl terminology, undefined or defined). On the basis of the evaluation, the expression returns either value_if_null when possibly_null_value is null, or possibly_null_value otherwise. In the absence of side-effects this is similar to the way ternary operators (?: statements) work in languages that support them; the above is equivalent to defined(possibly_null_value) ? possibly_null_value : value_if_null. This operator's most common usage is to minimize the amount of code used for a simple null check.

Perl additionally has a //= assignment operator, where $a //= $b; is largely equivalent to $a = $a // $b;. This operator differs from Perl's older || and ||= operators in that it considers definedness, not truth; thus they behave differently on values that are false but defined, such as 0 or "" (a zero-length string).

PHP 7.0 introduced[15] a null-coalescing operator with the ?? syntax. This checks strictly for NULL or a non-existent variable/array index/property; in this respect, it acts similarly to PHP's isset() pseudo-function. Version 7.4 of PHP introduced the null coalescing assignment operator with the ??= syntax.[16]

Since PowerShell 7, the ?? null coalescing operator provides this functionality.[7]

Since R version 4.4.0 the %||% operator is included in base R (previously it was a feature of some packages like rlang).[17]

While there is no null in Rust, tagged unions are used for the same purpose, for example Result<T, E> or Option<T>. Any type implementing the Try trait can be unwrapped. unwrap_or() serves a similar purpose to the null coalescing operator in other languages; alternatively, unwrap_or_else() can be used to compute the default value from a function.

In Oracle's PL/SQL, the NVL() function provides the same outcome. In SQL Server/Transact-SQL there is the ISNULL function that follows the same prototype pattern. Attention should be taken not to confuse ISNULL with IS NULL – the latter serves to evaluate whether some contents are defined to be NULL or not. The ANSI SQL-92 standard includes the COALESCE function, implemented in Oracle,[18] SQL Server,[19] PostgreSQL,[20] SQLite[21] and MySQL.[22] The COALESCE function returns the first argument that is not null; if all terms are null, it returns null. The difference between ISNULL and COALESCE is that the type returned by ISNULL is the type of the leftmost value, while COALESCE returns the type of the first non-null value.

In Swift, the nil coalescing operator is ??. It is used to provide a default when unwrapping an optional type. For example, if one wishes to implement some Swift code to give a page a default title if none is present, one may write let pageTitle = suppliedTitle ?? "Default Title" instead of the more verbose explicit unwrapping forms.
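Python, notably, has no dedicated null coalescing operator. A minimal helper mimicking SQL's COALESCE can be written as below; note that, unlike the short-circuiting operators above, all arguments are evaluated eagerly before the call, which matters if evaluating them has side-effects.

def coalesce(*values):
    """Return the first value that is not None (cf. SQL COALESCE); None if all are."""
    for v in values:
        if v is not None:
            return v
    return None

supplied_title = None
page_title = coalesce(supplied_title, "Default Title")   # "Default Title"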
https://en.wikipedia.org/wiki/Null_coalescing_operator
In probability and statistics, a compound probability distribution (also known as a mixture distribution or contagious distribution) is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables. If the parameter is a scale parameter, the resulting mixture is also called a scale mixture. The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution").

A compound probability distribution is the probability distribution that results from assuming that a random variable X is distributed according to some parametrized distribution F with an unknown parameter θ that is again distributed according to some other distribution G. The resulting distribution H is said to be the distribution that results from compounding F with G. The parameter's distribution G is also called the mixing distribution or latent distribution. Technically, the unconditional distribution H results from marginalizing over G, i.e., from integrating out the unknown parameter(s) θ. Its probability density function is given by (reconstructing the marginalization stated in the text):

{\displaystyle p_{H}(x)=\int p_{F}(x|\theta )\,p_{G}(\theta )\operatorname {d} \!\theta }

The same formula applies analogously if some or all of the variables are vectors. From the above formula, one can see that a compound distribution essentially is a special case of a marginal distribution: the joint distribution of x and θ is given by p(x, θ) = p(x|θ) p(θ), and the compound results as its marginal distribution: {\displaystyle p(x)=\int p(x,\theta )\operatorname {d} \!\theta }. If the domain of θ is discrete, then the distribution is again a special case of a mixture distribution.

The compound distribution H will depend on the specific expression of each distribution, as well as on which parameter of F is distributed according to the distribution G, and the parameters of H will include any parameters of G that are not marginalized, or integrated, out. The support of H is the same as that of F, and if the latter is a two-parameter distribution parameterized with the mean and variance, some general properties exist.
The compound distribution's first two moments are given by the law of total expectation and the law of total variance:

{\displaystyle \operatorname {E} _{H}[X]=\operatorname {E} _{G}{\bigl [}\operatorname {E} _{F}[X|\theta ]{\bigr ]}}

{\displaystyle \operatorname {Var} _{H}(X)=\operatorname {E} _{G}{\bigl [}\operatorname {Var} _{F}(X|\theta ){\bigr ]}+\operatorname {Var} _{G}{\bigl (}\operatorname {E} _{F}[X|\theta ]{\bigr )}}

If the mean of F is distributed as G, which in turn has mean μ and variance σ², the expressions above imply E_H[X] = E_G[θ] = μ and {\displaystyle \operatorname {Var} _{H}(X)=\operatorname {Var} _{F}(X|\theta )+\operatorname {Var} _{G}(\theta )=\tau ^{2}+\sigma ^{2}}, where τ² is the variance of F.

Let F and G be probability distributions parameterized with mean and variance as

{\displaystyle {\begin{aligned}x&\sim {\mathcal {F}}(\theta ,\tau ^{2})\\\theta &\sim {\mathcal {G}}(\mu ,\sigma ^{2})\end{aligned}}}

Then, denoting the probability density functions as f(x|θ) = p_F(x|θ) and g(θ) = p_G(θ) respectively, and letting h(x) be the probability density of H, we have

{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X]=\int _{F}xh(x)\,dx&=\int _{F}x\int _{G}f(x|\theta )g(\theta )\,d\theta \,dx\\&=\int _{G}\int _{F}xf(x|\theta )\,dx\ g(\theta )\,d\theta \\&=\int _{G}\operatorname {E} _{F}[X|\theta ]g(\theta )\,d\theta \end{aligned}}}

and we have from the parameterization of F and G that

{\displaystyle {\begin{aligned}\operatorname {E} _{F}[X|\theta ]&=\int _{F}xf(x|\theta )\,dx=\theta \\\operatorname {E} _{G}[\theta ]&=\int _{G}\theta g(\theta )\,d\theta =\mu \end{aligned}}}

and therefore the mean of the compound distribution is E_H[X] = μ, as per the expression for its first moment above.

The variance of H is given by E_H[X²] − (E_H[X])², and

{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X^{2}]=\int _{F}x^{2}h(x)\,dx&=\int _{F}x^{2}\int _{G}f(x|\theta )g(\theta )\,d\theta \,dx\\&=\int _{G}g(\theta )\int _{F}x^{2}f(x|\theta )\,dx\,d\theta \\&=\int _{G}g(\theta )(\tau ^{2}+\theta ^{2})\,d\theta \\&=\tau ^{2}\int _{G}g(\theta )\,d\theta +\int _{G}g(\theta )\theta ^{2}\,d\theta \\&=\tau ^{2}+(\sigma ^{2}+\mu ^{2}),\end{aligned}}}

given the facts that {\displaystyle \int _{F}x^{2}f(x\mid \theta )\,dx=\operatorname {E} _{F}[X^{2}\mid \theta ]=\operatorname {Var} _{F}(X\mid \theta )+(\operatorname {E} _{F}[X\mid \theta ])^{2}} and {\displaystyle \int _{G}\theta ^{2}g(\theta )\,d\theta =\operatorname {E} _{G}[\theta ^{2}]=\operatorname {Var} _{G}(\theta )+(\operatorname {E} _{G}[\theta ])^{2}}.
Finally we get

{\displaystyle {\begin{aligned}\operatorname {Var} _{H}(X)&=\operatorname {E} _{H}[X^{2}]-(\operatorname {E} _{H}[X])^{2}\\&=\tau ^{2}+\sigma ^{2}.\end{aligned}}}

Distributions of common test statistics result as compound distributions under their null hypothesis, for example in Student's t-test (where the test statistic results as the ratio of a normal and a chi-squared random variable), or in the F-test (where the test statistic is the ratio of two chi-squared random variables).

Compound distributions are useful for modeling outcomes exhibiting overdispersion, i.e., a greater amount of variability than would be expected under a certain model. For example, count data are commonly modeled using the Poisson distribution, whose variance is equal to its mean. The distribution may be generalized by allowing for variability in its rate parameter, implemented via a gamma distribution, which results in a marginal negative binomial distribution. This distribution is similar in its shape to the Poisson distribution, but it allows for larger variances. Similarly, a binomial distribution may be generalized to allow for additional variability by compounding it with a beta distribution for its success probability parameter, which results in a beta-binomial distribution.

Besides ubiquitous marginal distributions that may be seen as special cases of compound distributions, in Bayesian inference compound distributions arise when, in the notation above, F represents the distribution of future observations and G is the posterior distribution of the parameters of F, given the information in a set of observed data. This gives a posterior predictive distribution. Correspondingly, for the prior predictive distribution, F is the distribution of a new data point while G is the prior distribution of the parameters.

Convolution of probability distributions (to derive the probability distribution of sums of random variables) may also be seen as a special case of compounding; here the sum's distribution essentially results from considering one summand as a random location parameter for the other summand.[1]

Compound distributions derived from exponential family distributions often have a closed form. If analytical integration is not possible, numerical methods may be necessary. Compound distributions may relatively easily be investigated using Monte Carlo methods, i.e., by generating random samples. It is often easy to generate random numbers from the distributions p(θ) as well as p(x|θ) and then utilize these to perform collapsed Gibbs sampling to generate samples from p(x). A compound distribution may usually also be approximated to a sufficient degree by a mixture distribution using a finite number of mixture components, allowing one to derive approximate density, distribution function, etc.[1]

Parameter estimation (maximum-likelihood or maximum-a-posteriori estimation) within a compound distribution model may sometimes be simplified by utilizing the EM algorithm.[2]

The notion of "compound distribution" as used e.g. in the definition of a compound Poisson distribution or compound Poisson process is different from the definition found in this article. The meaning in this article corresponds to what is used in e.g. Bayesian hierarchical modeling. The special case for compound probability distributions where the parametrized distribution F is the Poisson distribution is also called a mixed Poisson distribution.
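The gamma–Poisson example above is easy to check by such sampling. This is a sketch assuming NumPy; the shape and scale values are arbitrary. With gamma shape k = 3 and scale 2, the mixing distribution has mean μ = 6 and variance σ² = 12, so the compound counts should show mean ≈ 6 and variance ≈ 6 + 12 = 18, matching the negative binomial marginal.

import numpy as np

rng = np.random.default_rng(0)
k, theta = 3.0, 2.0                        # gamma shape/scale for the Poisson rate
lam = rng.gamma(k, theta, size=100_000)    # draw theta ~ G (the mixing distribution)
x = rng.poisson(lam)                       # draw x | theta ~ F (the conditional)
print(x.mean(), x.var())                   # ~6 and ~18: variance > mean (overdispersion)

# The marginal is negative binomial with n = k and p = 1/(1 + theta):
nb = rng.negative_binomial(k, 1 / (1 + theta), size=100_000)
print(nb.mean(), nb.var())                 # ~6 and ~18 as well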
https://en.wikipedia.org/wiki/Compound_distribution
In mathematics, Stirling's approximation (or Stirling's formula) is an asymptotic approximation for factorials. It is a good approximation, leading to accurate results even for small values of n. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre.[1][2][3]

One way of stating the approximation involves the logarithm of the factorial: {\displaystyle \ln(n!)=n\ln n-n+O(\ln n),} where the big O notation means that, for all sufficiently large values of n, the difference between ln(n!) and n ln n − n will be at most proportional to the logarithm of n. In computer science applications such as the worst-case lower bound for comparison sorting, it is convenient to instead use the binary logarithm, giving the equivalent form {\displaystyle \log _{2}(n!)=n\log _{2}n-n\log _{2}e+O(\log _{2}n).} The error term in either base can be expressed more precisely as {\displaystyle {\tfrac {1}{2}}\log _{2}(2\pi n)+O({\tfrac {1}{n}})}, corresponding to an approximate formula for the factorial itself, {\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.} Here the sign ∼ means that the two quantities are asymptotic, that is, their ratio tends to 1 as n tends to infinity.

Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum {\displaystyle \ln(n!)=\sum _{j=1}^{n}\ln j} with an integral: {\displaystyle \sum _{j=1}^{n}\ln j\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1.}

The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating n!, one considers its natural logarithm, as this is a slowly varying function: {\displaystyle \ln(n!)=\ln 1+\ln 2+\cdots +\ln n.}

The right-hand side of this equation minus {\displaystyle {\tfrac {1}{2}}(\ln 1+\ln n)={\tfrac {1}{2}}\ln n} is the approximation by the trapezoid rule of the integral {\displaystyle \ln(n!)-{\tfrac {1}{2}}\ln n\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1,} and the error in this approximation is given by the Euler–Maclaurin formula:

{\displaystyle {\begin{aligned}\ln(n!)-{\tfrac {1}{2}}\ln n&={\tfrac {1}{2}}\ln 1+\ln 2+\ln 3+\cdots +\ln(n-1)+{\tfrac {1}{2}}\ln n\\&=n\ln n-n+1+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}\left({\frac {1}{n^{k-1}}}-1\right)+R_{m,n},\end{aligned}}}

where B_k is a Bernoulli number, and R_{m,n} is the remainder term in the Euler–Maclaurin formula. Take limits to find that

{\displaystyle \lim _{n\to \infty }\left(\ln(n!)-n\ln n+n-{\tfrac {1}{2}}\ln n\right)=1-\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}+\lim _{n\to \infty }R_{m,n}.}

Denote this limit as y.
Because the remainder R_{m,n} in the Euler–Maclaurin formula satisfies {\displaystyle R_{m,n}=\lim _{n\to \infty }R_{m,n}+O\left({\frac {1}{n^{m}}}\right),} where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form: {\displaystyle \ln(n!)=n\ln \left({\frac {n}{e}}\right)+{\tfrac {1}{2}}\ln n+y+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)n^{k-1}}}+O\left({\frac {1}{n^{m}}}\right).}

Taking the exponential of both sides and choosing any positive integer m, one obtains a formula involving an unknown quantity e^y. For m = 1, the formula is {\displaystyle n!=e^{y}{\sqrt {n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).}

The quantity e^y can be found by taking the limit on both sides as n tends to infinity and using Wallis' product, which shows that e^y = √(2π). Therefore, one obtains Stirling's formula: {\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).}

An alternative formula for n! using the gamma function is {\displaystyle n!=\int _{0}^{\infty }x^{n}e^{-x}\,{\rm {d}}x} (as can be seen by repeated integration by parts). Rewriting and changing variables x = ny, one obtains {\displaystyle n!=\int _{0}^{\infty }e^{n\ln x-x}\,{\rm {d}}x=e^{n\ln n}n\int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y.} Applying Laplace's method one has {\displaystyle \int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y\sim {\sqrt {\frac {2\pi }{n}}}e^{-n},} which recovers Stirling's formula: {\displaystyle n!\sim e^{n\ln n}n{\sqrt {\frac {2\pi }{n}}}e^{-n}={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.}

In fact, further corrections can also be obtained using Laplace's method. From the previous result, we know that Γ(x) ~ x^x e^{−x}, so we "peel off" this dominant term, then perform two changes of variables, to obtain: {\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{x(1+t-e^{t})}dt} To verify this: {\displaystyle \int _{\mathbb {R} }e^{x(1+t-e^{t})}dt{\overset {t\mapsto \ln t}{=}}e^{x}\int _{0}^{\infty }t^{x-1}e^{-xt}dt{\overset {t\mapsto t/x}{=}}x^{-x}e^{x}\int _{0}^{\infty }e^{-t}t^{x-1}dt=x^{-x}e^{x}\Gamma (x)}.

Now the function t ↦ 1 + t − e^t is unimodal, with maximum value zero. Locally around zero, it looks like −t²/2, which is why we are able to perform Laplace's method. In order to extend Laplace's method to higher orders, we perform another change of variables by 1 + t − e^t = −τ²/2. This equation cannot be solved in closed form, but it can be solved by series expansion, which gives us t = τ − τ²/6 + τ³/36 + a₄τ⁴ + O(τ⁵).
Now plugging back into the equation, we obtain {\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{-x\tau ^{2}/2}(1-\tau /3+\tau ^{2}/12+4a_{4}\tau ^{3}+O(\tau ^{4}))d\tau ={\sqrt {2\pi }}(x^{-1/2}+x^{-3/2}/12)+O(x^{-5/2})}; notice that a₄ need not be computed explicitly, since it is cancelled out by the integral. Higher orders can be achieved by computing more terms in t = τ + ⋯, which can be obtained programmatically.[note 1]

Thus we get Stirling's formula to two orders: {\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+O\left({\frac {1}{n^{2}}}\right)\right).}

A complex-analysis version of this method[4] is to consider 1/n! as a Taylor coefficient of the exponential function {\displaystyle e^{z}=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}}, computed by Cauchy's integral formula as {\displaystyle {\frac {1}{n!}}={\frac {1}{2\pi i}}\oint \limits _{|z|=r}{\frac {e^{z}}{z^{n+1}}}\,\mathrm {d} z.} This line integral can then be approximated using the saddle-point method with an appropriate choice of contour radius r = r_n. The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term.

An alternative derivation uses the fact that the Poisson distribution converges to a normal distribution by the central limit theorem.[5] Since the Poisson distribution with parameter λ converges to a normal distribution with mean λ and variance λ, their density functions will be approximately the same: {\displaystyle {\frac {\exp(-\mu )\mu ^{x}}{x!}}\approx {\frac {1}{\sqrt {2\pi \mu }}}\exp \left(-{\frac {1}{2}}\left({\frac {x-\mu }{\sqrt {\mu }}}\right)^{2}\right)}

Evaluating this expression at the mean, where the approximation is particularly accurate, simplifies it to: {\displaystyle {\frac {\exp(-\mu )\mu ^{\mu }}{\mu !}}\approx {\frac {1}{\sqrt {2\pi \mu }}}} Taking logs then results in: {\displaystyle -\mu +\mu \ln \mu -\ln \mu !\approx -{\frac {1}{2}}\ln 2\pi \mu } which can easily be rearranged to give: {\displaystyle \ln \mu !\approx \mu \ln \mu -\mu +{\frac {1}{2}}\ln 2\pi \mu } Evaluating at μ = n gives the usual, more precise form of Stirling's approximation.

Stirling's formula is in fact the first approximation to the following series (now called the Stirling series):[6] {\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).}

An explicit formula for the coefficients in this series was given by G. Nemes.[7] Further terms are listed in the On-Line Encyclopedia of Integer Sequences as A001163 and A001164. The first graph in this section shows the relative error vs. n for 1 through all 5 terms listed above.
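The behaviour of the truncated series can be reproduced numerically. This sketch uses Python's math module; the coefficients are those listed in the series above, and n = 10 is an arbitrary test value.

import math

def stirling_series(n, terms):
    """Evaluate Stirling's series for n! with the given number of terms."""
    coeffs = [1, 1/12, 1/288, -139/51840, -571/2488320]  # from the series above
    s = sum(c / n**k for k, c in enumerate(coeffs[:terms]))
    return math.sqrt(2 * math.pi * n) * (n / math.e)**n * s

for t in range(1, 6):
    approx = stirling_series(10, t)
    print(t, approx / math.factorial(10) - 1)   # relative error shrinks term by term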
218) give the asymptotic formula for the coefficients: A2j+1∼(−1)j2(2j)!/(2π)2(j+1){\displaystyle A_{2j+1}\sim (-1)^{j}2(2j)!/(2\pi )^{2(j+1)}} which shows that the coefficients grow superexponentially, and that by the ratio test the radius of convergence is zero. As n → ∞, the error in the truncated series is asymptotically equal to the first omitted term. This is an example of an asymptotic expansion. It is not a convergent series; for any particular value of n{\displaystyle n} there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, let S(n, t) be the Stirling series to t{\displaystyle t} terms evaluated at n{\displaystyle n}. The graphs show |ln⁡(S(n,t)n!)|,{\displaystyle \left|\ln \left({\frac {S(n,t)}{n!}}\right)\right|,} which, when small, is essentially the relative error.

Writing Stirling's series in the form ln⁡(n!)∼nln⁡n−n+12ln⁡(2πn)+112n−1360n3+11260n5−11680n7+⋯,{\displaystyle \ln(n!)\sim n\ln n-n+{\tfrac {1}{2}}\ln(2\pi n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots ,} it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term.[citation needed] Other bounds, due to Robbins,[9] valid for all positive integers n{\displaystyle n}, are 2πn(ne)ne112n+1<n!<2πn(ne)ne112n.{\displaystyle {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n+1}}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.} This upper bound corresponds to stopping the above series for ln⁡(n!){\displaystyle \ln(n!)} after the 1n{\displaystyle {\frac {1}{n}}} term. The lower bound is weaker than that obtained by stopping the series after the 1n3{\displaystyle {\frac {1}{n^{3}}}} term. A looser version of this bound is that n!ennn+12∈(2π,e]{\displaystyle {\frac {n!e^{n}}{n^{n+{\frac {1}{2}}}}}\in ({\sqrt {2\pi }},e]} for all n≥1{\displaystyle n\geq 1}.

For all positive integers, n!=Γ(n+1),{\displaystyle n!=\Gamma (n+1),} where Γ denotes the gamma function. However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. If Re(z) > 0, then ln⁡Γ(z)=zln⁡z−z+12ln⁡2πz+∫0∞2arctan⁡(tz)e2πt−1dt.{\displaystyle \ln \Gamma (z)=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{z}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t.} Repeated integration by parts gives ln⁡Γ(z)∼zln⁡z−z+12ln⁡2πz+∑n=1N−1B2n2n(2n−1)z2n−1=zln⁡z−z+12ln⁡2πz+112z−1360z3+11260z5+…,{\displaystyle {\begin{aligned}\ln \Gamma (z)\sim z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\sum _{n=1}^{N-1}{\frac {B_{2n}}{2n(2n-1)z^{2n-1}}}\\=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+{\frac {1}{12z}}-{\frac {1}{360z^{3}}}+{\frac {1}{1260z^{5}}}+\dots ,\end{aligned}}} where Bn{\displaystyle B_{n}} is the n{\displaystyle n}th Bernoulli number (note that the limit of the sum as N→∞{\displaystyle N\to \infty } is not convergent, so this formula is just an asymptotic expansion). The formula is valid for z{\displaystyle z} large enough in absolute value, when |arg(z)| < π − ε, where ε is positive, with an error term of O(z−2N+1).
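The truncation behaviour of the series for ln(n!) and the Robbins bounds quoted above are easy to check numerically. Below is a small Python sketch (function names are our own) that evaluates the logarithmic form of the series and tests Robbins' inequality for one value of n:

import math

def stirling_log_series(n, terms):
    # ln(n!) ~ n ln n - n + (1/2) ln(2 pi n) + 1/(12n) - 1/(360 n^3) + ...
    s = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
    coeffs = [1/12, -1/360, 1/1260, -1/1680]  # coefficients of n^-1, n^-3, n^-5, n^-7
    for k in range(terms):
        s += coeffs[k] / n ** (2 * k + 1)
    return s

n = 10
exact = math.lgamma(n + 1)  # ln(10!)
for t in range(5):
    print(t, abs(stirling_log_series(n, t) - exact))  # error shrinks with each added term

# Robbins' bounds on n! itself:
base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
print(base * math.exp(1 / (12 * n + 1)) < math.factorial(n) < base * math.exp(1 / (12 * n)))  # True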
The corresponding approximation may now be written: Γ(z)=2πz(ze)z(1+O(1z)),{\displaystyle \Gamma (z)={\sqrt {\frac {2\pi }{z}}}\,{\left({\frac {z}{e}}\right)}^{z}\left(1+O\left({\frac {1}{z}}\right)\right),} where the expansion is identical to that of Stirling's series above for n!{\displaystyle n!}, except that n{\displaystyle n} is replaced with z − 1.[10] A further application of this asymptotic expansion is for complex argument z with constant Re(z). See for example the Stirling formula applied in Im(z) = t of the Riemann–Siegel theta function on the straight line 1/4 + it.

Thomas Bayes showed, in a letter to John Canton published by the Royal Society in 1763, that Stirling's formula did not give a convergent series.[11] Obtaining a convergent version of Stirling's formula entails evaluating Binet's formula: ∫0∞2arctan⁡(tx)e2πt−1dt=ln⁡Γ(x)−xln⁡x+x−12ln⁡2πx.{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\ln \Gamma (x)-x\ln x+x-{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}.} One way to do this is by means of a convergent series of inverted rising factorials. If zn¯=z(z+1)⋯(z+n−1),{\displaystyle z^{\bar {n}}=z(z+1)\cdots (z+n-1),} then ∫0∞2arctan⁡(tx)e2πt−1dt=∑n=1∞cn(x+1)n¯,{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\sum _{n=1}^{\infty }{\frac {c_{n}}{(x+1)^{\bar {n}}}},} where cn=1n∫01xn¯(x−12)dx=12n∑k=1nk|s(n,k)|(k+1)(k+2),{\displaystyle c_{n}={\frac {1}{n}}\int _{0}^{1}x^{\bar {n}}\left(x-{\tfrac {1}{2}}\right)\,{\rm {d}}x={\frac {1}{2n}}\sum _{k=1}^{n}{\frac {k|s(n,k)|}{(k+1)(k+2)}},} where s(n, k) denotes the Stirling numbers of the first kind. From this one obtains a version of Stirling's series ln⁡Γ(x)=xln⁡x−x+12ln⁡2πx+112(x+1)+112(x+1)(x+2)+59360(x+1)(x+2)(x+3)+2960(x+1)(x+2)(x+3)(x+4)+⋯,{\displaystyle {\begin{aligned}\ln \Gamma (x)&=x\ln x-x+{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}+{\frac {1}{12(x+1)}}+{\frac {1}{12(x+1)(x+2)}}+\\&\quad +{\frac {59}{360(x+1)(x+2)(x+3)}}+{\frac {29}{60(x+1)(x+2)(x+3)(x+4)}}+\cdots ,\end{aligned}}} which converges when Re(x) > 0. Stirling's formula may also be given in convergent form as[12] Γ(x)=2πxx−12e−x+μ(x){\displaystyle \Gamma (x)={\sqrt {2\pi }}x^{x-{\frac {1}{2}}}e^{-x+\mu (x)}} where μ(x)=∑n=0∞((x+n+12)ln⁡(1+1x+n)−1).{\displaystyle \mu \left(x\right)=\sum _{n=0}^{\infty }\left(\left(x+n+{\frac {1}{2}}\right)\ln \left(1+{\frac {1}{x+n}}\right)-1\right).}

The approximation Γ(z)≈2πz(zezsinh⁡1z+1810z6)z{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {z}{e}}{\sqrt {z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}}}\right)^{z}} and its equivalent form 2ln⁡Γ(z)≈ln⁡(2π)−ln⁡z+z(2ln⁡z+ln⁡(zsinh⁡1z+1810z6)−2){\displaystyle 2\ln \Gamma (z)\approx \ln(2\pi )-\ln z+z\left(2\ln z+\ln \left(z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}\right)-2\right)} can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H.
Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.[13] Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:[14]Γ(z)≈2πz(1e(z+112z−110z))z,{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {1}{e}}\left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)\right)^{z},}or equivalently,ln⁡Γ(z)≈12(ln⁡(2π)−ln⁡z)+z(ln⁡(z+112z−110z)−1).{\displaystyle \ln \Gamma (z)\approx {\tfrac {1}{2}}\left(\ln(2\pi )-\ln z\right)+z\left(\ln \left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)-1\right).} An alternative approximation for the gamma function stated bySrinivasa RamanujaninRamanujan's lost notebook[15]isΓ(1+x)≈π(xe)x(8x3+4x2+x+130)16{\displaystyle \Gamma (1+x)\approx {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{\frac {1}{6}}}forx≥ 0. The equivalent approximation forlnn!has an asymptotic error of⁠1/1400n3⁠and is given byln⁡n!≈nln⁡n−n+16ln⁡(8n3+4n2+n+130)+12ln⁡π.{\displaystyle \ln n!\approx n\ln n-n+{\tfrac {1}{6}}\ln(8n^{3}+4n^{2}+n+{\tfrac {1}{30}})+{\tfrac {1}{2}}\ln \pi .} The approximation may be made precise by giving paired upper and lower bounds; one such inequality is[16][17][18][19]π(xe)x(8x3+4x2+x+1100)1/6<Γ(1+x)<π(xe)x(8x3+4x2+x+130)1/6.{\displaystyle {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{100}}\right)^{1/6}<\Gamma (1+x)<{\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{1/6}.} The formula was first discovered byAbraham de Moivre[2]in the formn!∼[constant]⋅nn+12e−n.{\displaystyle n!\sim [{\rm {constant}}]\cdot n^{n+{\frac {1}{2}}}e^{-n}.} De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely2π{\displaystyle {\sqrt {2\pi }}}.[3]
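For comparison, the Windschitl, Nemes and Ramanujan approximations quoted above are straightforward to evaluate numerically. The sketch below (function names and the test value z = 9 are our own choices) prints their relative errors against Python's math.gamma:

import math

def windschitl(z):
    # Gamma(z) ~ sqrt(2*pi/z) * (z/e * sqrt(z*sinh(1/z) + 1/(810 z^6)))^z
    return math.sqrt(2 * math.pi / z) * (z / math.e * math.sqrt(z * math.sinh(1 / z) + 1 / (810 * z ** 6))) ** z

def nemes(z):
    # Gamma(z) ~ sqrt(2*pi/z) * ((z + 1/(12z - 1/(10z))) / e)^z
    return math.sqrt(2 * math.pi / z) * ((z + 1 / (12 * z - 1 / (10 * z))) / math.e) ** z

def ramanujan_factorial(x):
    # Gamma(1+x) ~ sqrt(pi) * (x/e)^x * (8x^3 + 4x^2 + x + 1/30)^(1/6)
    return math.sqrt(math.pi) * (x / math.e) ** x * (8 * x ** 3 + 4 * x ** 2 + x + 1 / 30) ** (1 / 6)

z = 9.0
exact = math.gamma(z)
print('windschitl', abs(windschitl(z) - exact) / exact)
print('nemes     ', abs(nemes(z) - exact) / exact)
print('ramanujan ', abs(ramanujan_factorial(z - 1) - exact) / exact)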
https://en.wikipedia.org/wiki/Stirling%27s_approximation
Parameters: location μ∈(−∞,∞){\displaystyle \mu \in (-\infty ,\infty )\,} (real), scale σ∈(0,∞){\displaystyle \sigma \in (0,\infty )\,} (real); support x⩾μ(ξ⩾0){\displaystyle x\geqslant \mu \,\;(\xi \geqslant 0)}; density 1σ(1+ξz)−(1/ξ+1){\displaystyle {\frac {1}{\sigma }}(1+\xi z)^{-(1/\xi +1)}}.

In statistics, the generalized Pareto distribution (GPD) is a family of continuous probability distributions. It is often used to model the tails of another distribution. It is specified by three parameters: location μ{\displaystyle \mu }, scale σ{\displaystyle \sigma }, and shape ξ{\displaystyle \xi }.[2][3] Sometimes it is specified by only scale and shape[4] and sometimes only by its shape parameter. Some references give the shape parameter as κ=−ξ{\displaystyle \kappa =-\xi \,}.[5]

The standard cumulative distribution function (cdf) of the GPD is defined by[6] F(z)=1−(1+ξz)−1/ξ{\displaystyle F(z)=1-(1+\xi z)^{-1/\xi }} for ξ≠0{\displaystyle \xi \neq 0}, and F(z)=1−e−z{\displaystyle F(z)=1-e^{-z}} for ξ=0{\displaystyle \xi =0}, where the support is z≥0{\displaystyle z\geq 0} for ξ≥0{\displaystyle \xi \geq 0} and 0≤z≤−1/ξ{\displaystyle 0\leq z\leq -1/\xi } for ξ<0{\displaystyle \xi <0}. The corresponding probability density function (pdf) is f(z)=(1+ξz)−(1/ξ+1){\displaystyle f(z)=(1+\xi z)^{-(1/\xi +1)}}. The related location-scale family of distributions is obtained by replacing the argument z by x−μσ{\displaystyle {\frac {x-\mu }{\sigma }}} and adjusting the support accordingly.

The cumulative distribution function of X∼GPD(μ,σ,ξ){\displaystyle X\sim GPD(\mu ,\sigma ,\xi )} (μ∈R{\displaystyle \mu \in \mathbb {R} }, σ>0{\displaystyle \sigma >0}, and ξ∈R{\displaystyle \xi \in \mathbb {R} }) is F(x)=1−(1+ξ(x−μ)/σ)−1/ξ{\displaystyle F(x)=1-(1+\xi (x-\mu )/\sigma )^{-1/\xi }} for ξ≠0{\displaystyle \xi \neq 0}, and F(x)=1−exp⁡(−(x−μ)/σ){\displaystyle F(x)=1-\exp(-(x-\mu )/\sigma )} for ξ=0{\displaystyle \xi =0}, where the support of X{\displaystyle X} is x⩾μ{\displaystyle x\geqslant \mu } when ξ⩾0{\displaystyle \xi \geqslant 0\,}, and μ⩽x⩽μ−σ/ξ{\displaystyle \mu \leqslant x\leqslant \mu -\sigma /\xi } when ξ<0{\displaystyle \xi <0}. The probability density function (pdf) of X∼GPD(μ,σ,ξ){\displaystyle X\sim GPD(\mu ,\sigma ,\xi )} is f(x)=1σ(1+ξ(x−μ)/σ)−(1/ξ+1){\displaystyle f(x)={\frac {1}{\sigma }}(1+\xi (x-\mu )/\sigma )^{-(1/\xi +1)}}, again for x⩾μ{\displaystyle x\geqslant \mu } when ξ⩾0{\displaystyle \xi \geqslant 0}, and μ⩽x⩽μ−σ/ξ{\displaystyle \mu \leqslant x\leqslant \mu -\sigma /\xi } when ξ<0{\displaystyle \xi <0}. The pdf is a solution of the following differential equation:[citation needed] f′(x)(σ+ξ(x−μ))+(1+ξ)f(x)=0,f(μ)=1/σ.{\displaystyle f'(x)\,(\sigma +\xi (x-\mu ))+(1+\xi )\,f(x)=0,\quad f(\mu )=1/\sigma .}

If U{\displaystyle U} is uniformly distributed on (0, 1], then X=μ+σ(U−ξ−1)/ξ∼GPD(μ,σ,ξ≠0){\displaystyle X=\mu +\sigma (U^{-\xi }-1)/\xi \sim GPD(\mu ,\sigma ,\xi \neq 0)} and X=μ−σln⁡U∼GPD(μ,σ,ξ=0).{\displaystyle X=\mu -\sigma \ln U\sim GPD(\mu ,\sigma ,\xi =0).} Both formulas are obtained by inversion of the cdf. In the MATLAB Statistics Toolbox, the "gprnd" command can be used to generate generalized Pareto random numbers.

A GPD random variable can also be expressed as an exponential random variable with a Gamma distributed rate parameter: if X∣λ∼Exponential⁡(λ){\displaystyle X\mid \lambda \sim \operatorname {\mathsf {Exponential}} (\lambda )} and λ∼Gamma⁡(α,β){\displaystyle \lambda \sim \operatorname {\mathsf {Gamma}} (\alpha ,\beta )} (shape–rate parametrization), then X∼GPD⁡(μ=0,σ=β/α,ξ=1/α){\displaystyle X\sim \operatorname {\mathsf {GPD}} (\mu =0,\ \sigma =\beta /\alpha ,\ \xi =1/\alpha )}. Notice, however, that since the parameters of the Gamma distribution must be greater than zero, we obtain the additional restriction that ξ{\displaystyle \ \xi \ } must be positive. In addition to this mixture (or compound) expression, the generalized Pareto distribution can also be expressed as a simple ratio. Concretely, for Y∼Exponential⁡(1){\displaystyle \ Y\sim \operatorname {\mathsf {Exponential}} (\ 1\ )\ } and Z∼Gamma⁡(1/ξ,1),{\displaystyle \ Z\sim \operatorname {\mathsf {Gamma}} (1/\xi ,\ 1)\ ,} we have μ+σYξZ∼GPD⁡(μ,σ,ξ).{\displaystyle \ \mu +{\frac {\ \sigma \ Y\ }{\ \xi \ Z\ }}\sim \operatorname {\mathsf {GPD}} (\mu ,\ \sigma ,\ \xi )~.} This is a consequence of the mixture after setting β=α{\displaystyle \ \beta =\alpha \ } and taking into account that the rate parameters of the exponential and gamma distribution are simply inverse multiplicative constants.
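As a sketch of the two constructions just described (inverse-cdf sampling and the exponential–gamma mixture), the following Python fragment generates GPD variates both ways; the function names, the choice ξ = 0.25, and the spot-check of the mean are our own additions:

import math
import random

def gpd_inverse_cdf(u, mu=0.0, sigma=1.0, xi=0.25):
    # Invert the cdf: for U uniform on (0, 1],
    # X = mu + sigma*(U**(-xi) - 1)/xi  (xi != 0),  X = mu - sigma*log(U)  (xi = 0)
    if xi == 0.0:
        return mu - sigma * math.log(u)
    return mu + sigma * (u ** (-xi) - 1.0) / xi

def gpd_gamma_mixture(mu=0.0, sigma=1.0, xi=0.25):
    # Simple-ratio form quoted above: for Y ~ Exponential(1), Z ~ Gamma(1/xi, 1),
    # mu + sigma*Y/(xi*Z) ~ GPD(mu, sigma, xi); requires xi > 0.
    y = random.expovariate(1.0)
    z = random.gammavariate(1.0 / xi, 1.0)
    return mu + sigma * y / (xi * z)

random.seed(0)
xs = [gpd_inverse_cdf(1.0 - random.random()) for _ in range(100_000)]
ys = [gpd_gamma_mixture() for _ in range(100_000)]
# For mu = 0 and xi < 1 the mean is sigma/(1 - xi), i.e. 4/3 here:
print(sum(xs) / len(xs), sum(ys) / len(ys))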
If X∼GPD(μ=0,σ,ξ){\displaystyle X\sim GPD(\mu =0,\sigma ,\xi )}, then Y=log⁡(X){\displaystyle Y=\log(X)} is distributed according to the exponentiated generalized Pareto distribution, denoted by Y∼exGPD(σ,ξ){\displaystyle Y\sim exGPD(\sigma ,\xi )}. The probability density function (pdf) of Y∼exGPD(σ,ξ){\displaystyle Y\sim exGPD(\sigma ,\xi )} (σ>0{\displaystyle \sigma >0}) is supported on −∞<y<∞{\displaystyle -\infty <y<\infty } for ξ≥0{\displaystyle \xi \geq 0}, and on −∞<y≤log⁡(−σ/ξ){\displaystyle -\infty <y\leq \log(-\sigma /\xi )} for ξ<0{\displaystyle \xi <0}. For all ξ{\displaystyle \xi }, log⁡σ{\displaystyle \log \sigma } acts as the location parameter.

The exGPD has finite moments of all orders for all σ>0{\displaystyle \sigma >0} and −∞<ξ<∞{\displaystyle -\infty <\xi <\infty }. Its moment-generating function can be written in terms of B(a,b){\displaystyle B(a,b)} and Γ(a){\displaystyle \Gamma (a)}, the beta function and gamma function, respectively. The expected value of Y∼exGPD(σ,ξ){\displaystyle Y\sim exGPD(\sigma ,\xi )} depends on both the scale σ{\displaystyle \sigma } and shape ξ{\displaystyle \xi } parameters, with ξ{\displaystyle \xi } entering through the digamma function; note that for a fixed value of ξ∈(−∞,∞){\displaystyle \xi \in (-\infty ,\infty )}, log⁡σ{\displaystyle \log \ \sigma } plays the role of the location parameter under the exponentiated generalized Pareto distribution. The variance of Y∼exGPD(σ,ξ){\displaystyle Y\sim exGPD(\sigma ,\xi )} depends on the shape parameter ξ{\displaystyle \xi } only through the polygamma function of order 1 (also called the trigamma function); note that ψ′(1)=π2/6≈1.644934{\displaystyle \psi '(1)=\pi ^{2}/6\approx 1.644934}.

Note that the roles of the scale parameter σ{\displaystyle \sigma } and the shape parameter ξ{\displaystyle \xi } under Y∼exGPD(σ,ξ){\displaystyle Y\sim exGPD(\sigma ,\xi )} are separately interpretable, which may lead to a more robust and efficient estimation of ξ{\displaystyle \xi } than using X∼GPD(σ,ξ){\displaystyle X\sim GPD(\sigma ,\xi )}[2]. The roles of the two parameters are associated with each other under X∼GPD(μ=0,σ,ξ){\displaystyle X\sim GPD(\mu =0,\sigma ,\xi )} (at least up to the second central moment); see the formula for the variance Var(X){\displaystyle Var(X)}, in which both parameters participate.

Assume that X1:n=(X1,⋯,Xn){\displaystyle X_{1:n}=(X_{1},\cdots ,X_{n})} are n{\displaystyle n} observations (not necessarily i.i.d.) from an unknown heavy-tailed distribution F{\displaystyle F} such that its tail distribution is regularly varying with tail-index 1/ξ{\displaystyle 1/\xi } (hence, the corresponding shape parameter is ξ{\displaystyle \xi }); to be specific, 1−F(x)=x−1/ξL(x){\displaystyle 1-F(x)=x^{-1/\xi }L(x)} for some slowly varying function L{\displaystyle L}. It is of particular interest in extreme value theory to estimate the shape parameter ξ{\displaystyle \xi }, especially when ξ{\displaystyle \xi } is positive (the so-called heavy-tailed case).
Let Fu{\displaystyle F_{u}} be their conditional excess distribution function. The Pickands–Balkema–de Haan theorem (Pickands, 1975; Balkema and de Haan, 1974) states that for a large class of underlying distribution functions F{\displaystyle F}, and large u{\displaystyle u}, Fu{\displaystyle F_{u}} is well approximated by the generalized Pareto distribution (GPD), which motivated Peaks Over Threshold (POT) methods to estimate ξ{\displaystyle \xi }: the GPD plays the key role in the POT approach. A renowned estimator using the POT methodology is the Hill estimator. Its technical formulation is as follows. For 1≤i≤n{\displaystyle 1\leq i\leq n}, write X(i){\displaystyle X_{(i)}} for the i{\displaystyle i}-th largest value of X1,⋯,Xn{\displaystyle X_{1},\cdots ,X_{n}}. Then, with this notation, the Hill estimator (see page 190 of Embrechts et al.[3]) based on the k{\displaystyle k} upper order statistics is defined via the average of the log-ratios of the k{\displaystyle k} largest observations; one common form is written out in the sketch below. In practice, the Hill estimator is used as follows. First, calculate the estimator ξ^kHill{\displaystyle {\widehat {\xi }}_{k}^{\text{Hill}}} at each integer k∈{2,⋯,n}{\displaystyle k\in \{2,\cdots ,n\}}, and then plot the ordered pairs {(k,ξ^kHill)}k=2n{\displaystyle \{(k,{\widehat {\xi }}_{k}^{\text{Hill}})\}_{k=2}^{n}}. Then, select from the set of Hill estimators {ξ^kHill}k=2n{\displaystyle \{{\widehat {\xi }}_{k}^{\text{Hill}}\}_{k=2}^{n}} those which are roughly constant with respect to k{\displaystyle k}: these stable values are regarded as reasonable estimates for the shape parameter ξ{\displaystyle \xi }. If X1,⋯,Xn{\displaystyle X_{1},\cdots ,X_{n}} are i.i.d., then the Hill estimator is a consistent estimator for the shape parameter ξ{\displaystyle \xi }[4]. Note that the Hill estimator ξ^kHill{\displaystyle {\widehat {\xi }}_{k}^{\text{Hill}}} makes use of the log-transformation of the observations X1:n=(X1,⋯,Xn){\displaystyle X_{1:n}=(X_{1},\cdots ,X_{n})}. (The Pickands estimator ξ^kPickand{\displaystyle {\widehat {\xi }}_{k}^{\text{Pickand}}} also employs the log-transformation, but in a slightly different way[5].)
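Since the exact display of the estimator was not reproduced above, the following Python sketch implements one common form of the Hill estimator (the average of log X(i) − log X(k+1) over the k largest observations; conventions vary slightly between references) and applies it to a Pareto sample with known ξ = 0.5:

import math
import random

def hill_estimator(data, k):
    # One common form: (1/k) * sum_{i=1..k} log(X_(i) / X_(k+1)),
    # where X_(1) >= X_(2) >= ... are the descending order statistics.
    xs = sorted(data, reverse=True)
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k

random.seed(1)
# paretovariate(2.0) has tail index 1/xi = 2, i.e. true shape xi = 0.5
sample = [random.paretovariate(2.0) for _ in range(10_000)]
for k in (50, 200, 1000):
    print(k, hill_estimator(sample, k))  # estimates should hover near 0.5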
https://en.wikipedia.org/wiki/Generalized_Pareto_distribution
Theglobal brainis a neuroscience-inspired and futurological vision of the planetaryinformation and communications technologynetworkthat interconnects allhumansand their technological artifacts.[1]As this network stores ever moreinformation, takes over ever more functions of coordination and communication from traditional organizations, and becomes increasinglyintelligent, it increasingly plays the role of abrainfor the planetEarth. In thephilosophy of mind, global brain finds an analog inAverroes'stheory of the unity of the intellect. Proponents of the global brain hypothesis claim that theInternetincreasingly ties its users together into a single information processing system that functions as part of the collectivenervous systemof the planet. The intelligence of this network iscollectiveordistributed: it is not centralized or localized in any particular individual, organization or computer system. Therefore, no one can command or control it. Rather, itself-organizesoremergesfrom thedynamic networksofinteractionsbetween its components. This is a property typical ofcomplex adaptive systems.[2] TheWorld Wide Webin particular resembles the organization of a brain with itsweb pages(playing a role similar toneurons) connected byhyperlinks(playing a role similar tosynapses), together forming anassociativenetwork along which information propagates.[3]This analogy becomes stronger with the rise ofsocial media, such asFacebook, where links between personal pages represent relationships in asocial networkalong which information propagates from person to person.[4]Such propagation is similar to thespreading activationthatneural networksin the brain use to process information in a parallel, distributed manner. Although some of the underlying ideas were already expressed byNikola Teslain the late 19th century and were written about by many others before him, the term "global brain" was coined in 1982 by Peter Russell in his bookThe Global Brain.[5]How the Internet might be developed to achieve this was set out in 1986.[6]The first peer-reviewed article on the subject was published byGottfried Mayer-Kressin 1995,[7]while the firstalgorithmsthat could turn the world-wide web into a collectively intelligent network were proposed byFrancis HeylighenandJohan Bollenin 1996.[3][8] Reviewing the strands of intellectual history that contributed to the global brain hypothesis,Francis Heylighendistinguishes four perspectives:organicism,encyclopedism,emergentismandevolutionary cybernetics. He asserts that these developed in relative independence but now are converging in his own scientific re-formulation.[9] In the 19th century, the sociologistHerbert Spencersaw society as asocial organismand reflected about its need for a nervous system. EntomologistWilliam Wheelerdeveloped the concept of the ant colony as a spatially extended organism, and in the 1930s he coined the termsuperorganismto describe such an entity.[10]This concept was later adopted by thinkers such asJoël de Rosnayin the bookLe Cerveau Planétaire(1986) andGregory Stockin the bookMetaman(1993) to describe planetary society as a superorganism. The mental aspects of such an organic system at the planetary level were perhaps first broadly elaborated by palaeontologist and Jesuit priestPierre Teilhard de Chardin. In 1945, he described a coming "planetisation" of humanity, which he saw as the next phase of accelerating human "socialisation". 
Teilhard described both socialization and planetization as irreversible, irresistible processes ofmacrobiological developmentculminating in the emergence of anoosphere, or global mind (see Emergentism below).[11] The more recentliving systems theorydescribes both organisms and social systems in terms of the "critical subsystems" ("organs") they need to contain in order to survive, such as an internal transport system, a resource reserve, and a decision-making system. This theory has inspired several thinkers, including Peter Russell and Francis Heylighen to define the global brain as the network of information processing subsystems for the planetary social system. In the perspective of encyclopedism, the emphasis is on developing a universal knowledge network. The first systematic attempt to create such an integrated system of the world's knowledge was the 18th centuryEncyclopédieofDenis DiderotandJean le Rond d'Alembert. However, by the end of the 19th century, the amount of knowledge had become too large to be published in a single synthetic volume. To tackle this problem,Paul Otletfounded the science of documentation, now calledinformation science. In the 1930s he envisaged aWorld Wide Web-like system of associations between documents and telecommunication links that would make all the world's knowledge available immediately to anybody.H. G. Wellsproposed a similar vision of a collaboratively developed world encyclopedia that would be constantly updated by a global university-like institution. He called this aWorld Brain,[12]as it would function as a continuously updated memory for the planet, although the image of humanity acting informally as a more organic global brain is a recurring motif in many of his other works.[13] Tim Berners-Lee, the inventor of theWorld Wide Web, too, was inspired by the free-associative possibilities of the brain for his invention. The brain can link different kinds of information without any apparent link otherwise; Berners-Lee thought that computers could become much more powerful if they could imitate this functioning, i.e. make links between any arbitrary piece of information.[14]The most powerful implementation of encyclopedism to date isWikipedia, which integrates the associative powers of the world-wide-web with the collective intelligence of its millions of contributors, approaching the ideal of a global memory.[9]TheSemantic web, also first proposed by Berners-Lee, is a system of protocols to make the pieces of knowledge and their links readable by machines, so that they could be used to make automaticinferences, thus providing this brain-like network with some capacity for autonomous "thinking" or reflection. This approach focuses on the emergent aspects of the evolution and development ofcomplexity, including the spiritual, psychological, and moral-ethical aspects of the global brain, and is at present the most speculative approach. The global brain is here seen as a natural and emergent process of planetary evolutionary development. Here againPierre Teilhard de Chardinattempted a synthesis of science, social values, and religion in hisThe Phenomenon of Man, which argues that thetelos(drive, purpose) of universal evolutionary process is the development of greater levels of both complexity and consciousness. 
Teilhard proposed that if life persists then planetization, as a biological process producing a global brain, would necessarily also produce a global mind, a new level of planetary consciousness and a technologically supported network of thoughts which he called thenoosphere. Teilhard's proposed technological layer for the noosphere can be interpreted as an early anticipation of the Internet and the Web.[15] Systems theoristsandcyberneticianscommonly describe the emergence of a higher order system in evolutionary development as a "metasystem transition" (a concept introduced byValentin Turchin) or a "major evolutionary transition".[16]Such a metasystem consists of a group of subsystems that work together in a coordinated, goal-directed manner. It is as such much more powerful and intelligent than its constituent systems.Francis Heylighenhas argued that the global brain is an emerging metasystem with respect to the level of individual human intelligence, and investigated the specific evolutionary mechanisms that promote this transition.[17] In this scenario, the Internet fulfils the role of the network of "nerves" that interconnect the subsystems and thus coordinates their activity. The cybernetic approach makes it possible to develop mathematical models and simulations of the processes ofself-organizationthrough which such coordination andcollective intelligenceemerges. In 1994Kevin Kelly, in his popular bookOut of Control, posited the emergence of a "hive mind" from a discussion of cybernetics and evolutionary biology.[18] In 1996,Francis HeylighenandBen Goertzelfounded the Global Brain group, a discussion forum grouping most of the researchers that had been working on the subject of the global brain to further investigate this phenomenon. The group organized the first international conference on the topic in 2001 at theVrije Universiteit Brussel. After a period of relative neglect, the Global Brain idea has recently seen a resurgence in interest, in part due to talks given on the topic byTim O'Reilly, the Internet forecaster who popularized the termWeb 2.0,[19]andYuri Milner, the social media investor.[20]In January 2012, the Global Brain Institute (GBI) was founded at theVrije Universiteit Brusselto develop a mathematical theory of the "brainlike" propagation of information across the Internet. In the same year,Thomas W. Maloneand collaborators from theMIT Center for Collective Intelligencehave started to explore how the global brain could be "programmed" to work more effectively,[21]using mechanisms ofcollective intelligence. The complexity scientistDirk Helbingand his NervousNet group have recently started developing a "Planetary Nervous System", which includes a "Global Participatory Platform", as part of the large-scaleFuturICTproject, thus preparing some of the groundwork for a Global Brain.[22] In July 2017,Elon Muskfounded the companyNeuralink, which aims to create abrain-computer interface (BCI)with significantly greaterinformation bandwidththan traditionalhuman interface devices. Musk predicts thatartificial intelligence systemswill rapidly outpace human abilities in most domains and views them as an existential threat. He believes an advanced BCI would enable human cognition to remain relevant for longer. 
The firm raised $27m from 12 investors in 2017.[23] A common criticism of the idea that humanity would become directed by a global brain is that this would reduce individual diversity and freedom,[24] and lead to mass surveillance.[25] This criticism is inspired by totalitarian forms of government, as exemplified by George Orwell's character of "Big Brother". It is also inspired by the analogy between collective intelligence or swarm intelligence and insect societies, such as beehives and ant colonies, in which individuals are essentially interchangeable. In a more extreme view, the global brain has been compared with the Borg,[26] a race of collectively thinking cyborgs conceived by the Star Trek science fiction franchise. Global brain theorists reply that the emergence of distributed intelligence would lead to the exact opposite of this vision:[27][28] James Surowiecki, in his book The Wisdom of Crowds, argued that effective collective intelligence requires diversity of opinion, decentralization and individual independence. For more references, see the GBI bibliography.
https://en.wikipedia.org/wiki/Global_brain
Anasymmetric multiprocessing(AMPorASMP) system is amultiprocessorcomputer system where not all of the multiple interconnected central processing units (CPUs) are treated equally. For example, a system might allow (either at the hardware oroperating systemlevel) only one CPU to execute operating system code or might allow only one CPU to perform I/O operations. Other AMP systems might allow any CPU to execute operating system code and perform I/O operations, so that they were symmetric with regard to processor roles, but attached some or all peripherals to particular CPUs, so that they were asymmetric with respect to the peripheral attachment. Asymmetric multiprocessing was the only method for handling multiple CPUs beforesymmetric multiprocessing(SMP) was available. It has also been used to provide less expensive options[1]on systems where SMP was available. For the room-size computers of the 1960s and 1970s, a cost-effective way to increase compute power was to add a second CPU. Since these computers were already close to the fastest available (near the peak of the price:performance ratio), two standard-speed CPUs were much less expensive than a CPU that ran twice as fast. Also, adding a second CPU was less expensive than a second complete computer, which would need its own peripherals, thus requiring much more floor space and an increased operations staff. Notable early AMP offerings by computer manufacturers were theBurroughs B5000, theDECsystem-1055, and theIBM System/360model 65MP. There were also dual-CPU machines built at universities.[2] The problem with adding a second CPU to a computer system was that the operating system had been developed for single-CPU systems, and extending it to handle multiple CPUs efficiently and reliably took a long time. To fill this gap, operating systems intended for single CPUs were initially extended to provide minimal support for a second CPU. In this minimal support, the operating system ran on the “boot” processor, with the other only allowed to run user programs. In the case of the Burroughs B5000, the second processor's hardware was not capable of running "control state" code.[3] Other systems allowed the operating system to run on all processors, but either attached all the peripherals to one processor or attached particular peripherals to particular processors. An option on the Burroughs B5000 was “Processor B”. This second processor, unlike “Processor A” had no connection to the peripherals, though the two processors shared main memory, and Processor B could not run in Control State.[3]The operating system ran only on Processor A. When there was a user job to be executed, it might be run on Processor B, but when that job tried to access the operating system the processor halted and signaled Processor A. The requested operating system service was then run on Processor A. On the B5500, either Processor A or Processor B could be designated as Processor 1 by a switch on the engineer's panel, with the other processor being Processor 2; both processors shared main memory and had hardware access to the I/O processors hence the peripherals, but only Processor 1 could respond to peripheral interrupts.[4]When a job on Processor 2 required an operating system service it would be rescheduled on Processor 1, which was responsible for both initiating I/O processor activity and responding to interrupts indicating completion. 
In practice, this meant that while user jobs could run on either Processor 1 or Processor 2 and could access intrinsic library routines that didn't require kernel support, the operating system would schedule them on the latter whenever possible.[5] Control Data Corporation offered two configurations of itsCDC 6000 seriesthat featured twocentral processors. The CDC 6500[6]was a CDC 6400 with two central processors. The CDC 6700 was a CDC 6600 with the CDC 6400 central processor added to it. These systems were organized quite differently from the other multiprocessors in this article. The operating system ran on theperipheral processors, while the user's application ran on the CPUs. Thus, the terms ASMP and SMP do not properly apply to these multiprocessors. Digital Equipment Corporation(DEC) offered a dual-processor version of itsDECsystem-1050which used two KA10 processors; all peripherals were attached to one processor, the primary processor, and the primary processor ran the operating system code.[7]This offering was extended to the KL-10 and KS-10 processors in the PDP-10 line; in those systems, the boot CPU is designated the "policy CPU", which runs the command interpreter, swaps jobs in and out of memory, and performs a few other functions; other operating system functions, and I/O, can be performed by any of the processors, and if the policy processor fails, another processor takes over as the policy processor.[8] Digital Equipment Corporationdeveloped, but never released, a multiprocessorPDP-11, the PDP-11/74,[9]running a multiprocessor version ofRSX-11M.[10]In that system, either processor could run operating system code, and could perform I/O, but not all peripherals were accessible to all processors; most peripherals were attached to one or the other of the CPUs, so that a processor to which a peripheral wasn't attached would, when it needed to perform an I/O operation on that peripheral, request the processor to which the peripheral was attached to perform the operation.[10] DEC's first multi-processorVAXsystem, the VAX-11/782, was an asymmetric dual-processor system; only the first processor had access to the I/O devices.[11] Two options were available for theIBM System/370 Model 168for attaching a second processor.[12]One was the IBM 3062Attached Processing Unit, in which the second processor had no access to the channels, and was therefore similar to the B5000's Processor B or the second processor on a VAX-11/782. The other option offered a complete second CPU, and was thus more like the System/360 model 65MP.
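As a purely illustrative toy model (our own sketch, not any vendor's actual software) of the asymmetry described above, the following Python fragment mimics the B5500-style policy in which user jobs may run on either processor, but any job requesting an operating-system service is re-queued onto Processor 1:

from collections import deque

processor1_queue = deque()  # only Processor 1 may run OS code and answer peripheral interrupts

def run_on_processor2(job_name, needs_os_service):
    # Hypothetical dispatch rule modelling the asymmetry described above.
    if needs_os_service:
        processor1_queue.append(job_name)
        print(f"{job_name}: handed back to Processor 1 for the OS service")
    else:
        print(f"{job_name}: ran to completion on Processor 2")

run_on_processor2("compute-bound job", needs_os_service=False)
run_on_processor2("I/O-bound job", needs_os_service=True)
print("pending on Processor 1:", list(processor1_queue))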
https://en.wikipedia.org/wiki/Asymmetric_multiprocessing
In algebraic geometry, a derived scheme is a homotopy-theoretic generalization of a scheme in which classical commutative rings are replaced with derived versions such as differential graded algebras, commutative simplicial rings, or commutative ring spectra. From the functor-of-points point of view, a derived scheme is a sheaf X on the category of simplicial commutative rings which admits an open affine covering {Spec(Ai)→X}{\displaystyle \{Spec(A_{i})\to X\}}. From the locally ringed space point of view, a derived scheme is a pair (X,O){\displaystyle (X,{\mathcal {O}})} consisting of a topological space X and a sheaf O{\displaystyle {\mathcal {O}}} either of simplicial commutative rings or of commutative ring spectra[1] on X such that (1) the pair (X,π0O){\displaystyle (X,\pi _{0}{\mathcal {O}})} is a scheme and (2) πkO{\displaystyle \pi _{k}{\mathcal {O}}} is a quasi-coherent π0O{\displaystyle \pi _{0}{\mathcal {O}}}-module. A derived stack is a stacky generalization of a derived scheme.

Over a field of characteristic zero, the theory is closely related to that of a differential graded scheme.[2] By definition, a differential graded scheme is obtained by gluing affine differential graded schemes with respect to the étale topology.[3] It was introduced by Maxim Kontsevich[4] "as the first approach to derived algebraic geometry"[5] and was developed further by Mikhail Kapranov and Ionut Ciocan-Fontanine. Just as affine algebraic geometry is equivalent (in the categorical sense) to the theory of commutative rings (commonly called commutative algebra), affine derived algebraic geometry over characteristic zero is equivalent to the theory of commutative differential graded rings.

One of the main examples of derived schemes comes from the derived intersection of subschemes of a scheme, giving the Koszul complex. For example, let f1,…,fk∈C[x1,…,xn]=R{\displaystyle f_{1},\ldots ,f_{k}\in \mathbb {C} [x_{1},\ldots ,x_{n}]=R}; then we can get a derived scheme whose underlying space is the étale spectrum.[citation needed] Since we can construct a resolution, the derived ring R/(f1)⊗RL⋯⊗RLR/(fk){\displaystyle R/(f_{1})\otimes _{R}^{\mathbf {L} }\cdots \otimes _{R}^{\mathbf {L} }R/(f_{k})}, a derived tensor product, is the Koszul complex KR(f1,…,fk){\displaystyle K_{R}(f_{1},\ldots ,f_{k})}. The truncation of this derived scheme to amplitude [−1,0]{\displaystyle [-1,0]} provides a classical model motivating derived algebraic geometry. Notice that if we have a projective scheme where deg⁡(fi)=di{\displaystyle \deg(f_{i})=d_{i}}, we can construct the derived scheme (Pn,E∙,(f1,…,fk)){\displaystyle (\mathbb {P} ^{n},{\mathcal {E}}^{\bullet },(f_{1},\ldots ,f_{k}))} with amplitude [−1,0]{\displaystyle [-1,0]}.

Let (A∙,d){\displaystyle (A_{\bullet },d)} be a fixed differential graded algebra defined over a field of characteristic 0{\displaystyle 0}. Then an A∙{\displaystyle A_{\bullet }}-differential graded algebra (R∙,dR){\displaystyle (R_{\bullet },d_{R})} is called semi-free if the following conditions hold: It turns out that every A∙{\displaystyle A_{\bullet }} differential graded algebra admits a surjective quasi-isomorphism from a semi-free (A∙,d){\displaystyle (A_{\bullet },d)} differential graded algebra, called a semi-free resolution. These are unique up to homotopy equivalence in a suitable model category.
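To make the Koszul complex above concrete, here is a minimal worked case written out in LaTeX (our own illustration, using one standard sign convention, for two elements f, g of R):

% Koszul complex K_R(f,g) computing the derived tensor product
% R/(f) \otimes_R^{\mathbf{L}} R/(g) of the two hypersurfaces:
\[
  K_R(f,g)\colon\quad
  0 \longrightarrow R
    \xrightarrow{\;(-g,\ f)\;} R^{\oplus 2}
    \xrightarrow{\;(f,\ g)\;} R
    \longrightarrow 0,
\]
% concentrated in degrees [-2,0]. Its H^0 is R/(f,g), while the lower
% cohomology measures the failure of (f,g) to be a regular sequence;
% when f,g form a regular sequence, the complex is a resolution of R/(f,g).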
The (relative) cotangent complex of an (A∙,d){\displaystyle (A_{\bullet },d)}-differential graded algebra (B∙,dB){\displaystyle (B_{\bullet },d_{B})} can be constructed using a semi-free resolution (R∙,dR)→(B∙,dB){\displaystyle (R_{\bullet },d_{R})\to (B_{\bullet },d_{B})}. Many examples can be constructed by taking the algebra B{\displaystyle B} representing a variety over a field of characteristic 0, finding a presentation of B{\displaystyle B} as a quotient of a polynomial algebra and taking the Koszul complex associated to this presentation. The Koszul complex acts as a semi-free resolution of the differential graded algebra (B∙,0){\displaystyle (B_{\bullet },0)}, where B∙{\displaystyle B_{\bullet }} is the graded algebra with the non-trivial graded piece in degree 0.

The cotangent complex of a hypersurface X=V(f)⊂ACn{\displaystyle X=\mathbb {V} (f)\subset \mathbb {A} _{\mathbb {C} }^{n}} can easily be computed: since we have the dga KR(f){\displaystyle K_{R}(f)} representing the derived enhancement of X{\displaystyle X}, we can compute the cotangent complex in terms of the map Φ(gds)=g⋅df{\displaystyle \Phi (gds)=g\cdot df}, where d{\displaystyle d} is the usual universal derivation. If we take a complete intersection, then the Koszul complex is quasi-isomorphic to the quotient R/(f1,…,fk){\displaystyle R/(f_{1},\ldots ,f_{k})} placed in degree 0. This implies we can construct the cotangent complex of the derived ring R∙{\displaystyle R^{\bullet }} as the tensor product of the cotangent complexes above, one for each fi{\displaystyle f_{i}}. Note that the cotangent complex in the context of derived geometry differs from the cotangent complex of classical schemes. Namely, if there were a singularity in the hypersurface defined by f{\displaystyle f}, then the classical cotangent complex would have infinite amplitude. These observations provide motivation for the hidden smoothness philosophy of derived geometry, since we are now working with a complex of finite length.

Given a polynomial function f:An→Am,{\displaystyle f:\mathbb {A} ^{n}\to \mathbb {A} ^{m},} consider the (homotopy) pullback diagram in which the bottom arrow is the inclusion of a point at the origin. Then the tangent complex of the derived scheme Z{\displaystyle Z} at a point x∈Z{\displaystyle x\in Z} is given by a morphism whose two-term complex is of amplitude [−1,0]{\displaystyle [-1,0]}. Notice that the tangent space can be recovered using H0{\displaystyle H^{0}}, while H−1{\displaystyle H^{-1}} measures how far away x∈Z{\displaystyle x\in Z} is from being a smooth point.

Given a stack [X/G]{\displaystyle [X/G]}, there is a nice description for the tangent complex. If the morphism is not injective, H−1{\displaystyle H^{-1}} again measures how singular the space is. In addition, the Euler characteristic of this complex yields the correct (virtual) dimension of the quotient stack. In particular, if we look at the moduli stack of principal G{\displaystyle G}-bundles, then the tangent complex is just g[+1]{\displaystyle {\mathfrak {g}}[+1]}.

Derived schemes can be used for analyzing topological properties of affine varieties. For example, consider a smooth affine variety M⊂An{\displaystyle M\subset \mathbb {A} ^{n}}. If we take a regular function f:M→C{\displaystyle f:M\to \mathbb {C} } and consider the section df{\displaystyle df} of ΩM{\displaystyle \Omega _{M}}, then we can take the derived pullback diagram in which 0{\displaystyle 0} is the zero section, constructing a derived critical locus of the regular function f{\displaystyle f}. Consider the affine variety X=Spec⁡(C[x,y]){\displaystyle X=\operatorname {Spec} (\mathbb {C} [x,y])} and the regular function given by f(x,y)=x2+y3{\displaystyle f(x,y)=x^{2}+y^{3}}.
Then df=2xdx+3y2dy{\displaystyle df=2x\,dx+3y^{2}\,dy}, where we treat the last two coordinates of C[x,y,dx,dy]{\displaystyle \mathbb {C} [x,y,dx,dy]} as dx,dy{\displaystyle dx,dy}. The derived critical locus is then the derived scheme obtained from this pullback. Note that since the left term in the derived intersection is a complete intersection, we can compute a complex representing the derived ring as the Koszul complex Kdx,dy∙(C[x,y,dx,dy]){\displaystyle K_{dx,dy}^{\bullet }(\mathbb {C} [x,y,dx,dy])}. Consider a smooth function f:M→C{\displaystyle f:M\to \mathbb {C} } where M{\displaystyle M} is smooth. The derived enhancement of Crit⁡(f){\displaystyle \operatorname {Crit} (f)}, the derived critical locus, is given by the differential graded scheme (M,A∙,Q){\displaystyle (M,{\mathcal {A}}^{\bullet },Q)}, where the underlying graded ring is given by the polyvector fields and the differential Q{\displaystyle Q} is defined by contraction by df{\displaystyle df}. In the example above, this yields the complex representing the derived enhancement of Crit⁡(f){\displaystyle \operatorname {Crit} (f)}.
https://en.wikipedia.org/wiki/Derived_scheme
TheChinese room argumentholds that a computer executing aprogramcannot have amind,understanding, orconsciousness,[a]regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopherJohn Searleentitled "Minds, Brains, and Programs" and published in the journalBehavioral and Brain Sciences.[1]Before Searle, similar arguments had been presented by figures includingGottfried Wilhelm Leibniz(1714),Anatoly Dneprov(1961), Lawrence Davis (1974) andNed Block(1978). Searle's version has been widely discussed in the years since.[2]The centerpiece of Searle's argument is athought experimentknown as theChinese room.[3] In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book's instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses. According to Searle, the person is just followingsyntacticrules withoutsemanticcomprehension, and neither the human nor the room as a whole understands Chinese. He contends that when computers execute programs, they are similarly just applying syntactic rules without any real understanding or thinking.[4] The argument is directed against the philosophical positions offunctionalismandcomputationalism,[5]which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls thestrong AI hypothesis:[b]"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[c] Although its proponents originally presented the argument in reaction to statements ofartificial intelligence(AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of intelligent behavior a machine can display.[6]The argument applies only to digital computers running programs and does not apply to machines in general.[4]While widely discussed, the argument has been subject to significant criticism and remains controversial amongphilosophers of mindand AI researchers.[7][8] Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine acceptsChinese charactersas input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.[4] The questions at issue are these: does the machine actuallyunderstandthe conversation, or is it justsimulatingthe ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just actingas ifit had a mind?[4] Now suppose that Searle is in a room with an English version of the program, along with sufficient pencils, paper, erasers and filing cabinets. Chinese characters are slipped in under the door, he follows the program step-by-step, which eventually instructs him to slide other Chinese characters back out under the door. 
If the computer had passed theTuring testthis way, it follows that Searle would do so as well, simply by running the program by hand.[4] Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, Searle would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.[4] Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word. Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind.[4] Gottfried Leibnizmade a similar argument in 1714 againstmechanism(the idea that everything that makes up a human being could, in principle, be explained in mechanical terms. In other words, that a person, including their mind, is merely a very complex machine). Leibniz used the thought experiment of expanding the brain until it was the size of a mill.[9]Leibniz found it difficult to imagine that a "mind" capable of "perception" could be constructed using only mechanical processes.[d] Peter Winchmade the same point in his bookThe Idea of a Social Science and its Relation to Philosophy(1958), where he provides an argument to show that "a man who understands Chinese is not a man who has a firm grasp of the statistical probabilities for the occurrence of the various words in the Chinese language" (p. 108). Soviet cyberneticistAnatoly Dneprovmade an essentially identical argument in 1961, in the form of the short story "The Game". In it, a stadium of people act as switches and memory cells implementing a program to translate a sentence of Portuguese, a language that none of them know.[10]The game was organized by a "Professor Zarubin" to answer the question "Can mathematical machines think?" Speaking through Zarubin, Dneprov writes "the only way to prove that machines can think is to turn yourself into a machine and examine your thinking process" and he concludes, as Searle does, "We've proven that even the most perfect simulation of machine thinking is not the thinking process itself." In 1974,Lawrence H. Davisimagined duplicating the brain using telephone lines and offices staffed by people, and in 1978Ned Blockenvisioned the entire population of China involved in such a brain simulation. This thought experiment is called theChina brain, also the "Chinese Nation" or the "Chinese Gym".[11] Searle's version appeared in his 1980 paper "Minds, Brains, and Programs", published inBehavioral and Brain Sciences.[1]It eventually became the journal's "most influential target article",[2]generating an enormous number of commentaries and responses in the ensuing decades, and Searle has continued to defend and refine the argument in many papers, popular articles and books. David Cole writes that "the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years".[12] Most of the discussion consists of attempts to refute it. 
"The overwhelming majority", notesBehavioral and Brain ScienceseditorStevan Harnad,[e]"still think that the Chinese Room Argument is dead wrong".[13]The sheer volume of the literature that has grown up around it inspiredPat Hayesto comment that the field ofcognitive scienceought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".[14] Searle's argument has become "something of a classic in cognitive science", according to Harnad.[13]Varol Akmanagrees, and has described the original paper as "an exemplar of philosophical clarity and purity".[15] Although the Chinese Room argument was originally presented in reaction to the statements ofartificial intelligenceresearchers, philosophers have come to consider it as an important part of thephilosophy of mind. It is a challenge tofunctionalismand thecomputational theory of mind,[f]and is related to such questions as themind–body problem, theproblem of other minds, thesymbol groundingproblem, and thehard problem of consciousness.[a] Searle identified a philosophical position he calls "strong AI": The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[c] The definition depends on the distinction between simulating a mind and actually having one. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."[22] The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founderHerbert A. Simondeclared that "there are now in the world machines that think, that learn and create".[23]Simon, together withAllen NewellandCliff Shaw, after having completed the first program that could doformal reasoning(theLogic Theorist), claimed that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind."[24]John Haugelandwrote that "AI wants only the genuine article:machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root,computers ourselves."[25] Searle also ascribes the following claims to advocates of strong AI: In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computerfunctionalism" (a term he attributes toDaniel Dennett).[5][30]Functionalism is a position in modernphilosophy of mindthat holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accuratelyrepresentfunctional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism. Stevan Harnadargues that Searle's depictions of strong AI can be reformulated as "recognizable tenets ofcomputationalism, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting."[31]Computationalism[i]is the position in the philosophy of mind which argues that the mind can be accurately described as aninformation-processingsystem. Each of the following, according to Harnad, is a "tenet" of computationalism:[34] Recent philosophical discussions have revisited the implications of computationalism for artificial intelligence. 
Goldstein and Levinstein explore whetherlarge language models(LLMs) likeChatGPTcan possess minds, focusing on their ability to exhibit folk psychology, including beliefs, desires, and intentions. The authors argue that LLMs satisfy several philosophical theories of mental representation, such as informational, causal, and structural theories, by demonstrating robust internal representations of the world. However, they highlight that the evidence for LLMs having action dispositions necessary for belief-desire psychology remains inconclusive. Additionally, they refute common skeptical challenges, such as the "stochastic parrots" argument and concerns over memorization, asserting that LLMs exhibit structured internal representations that align with these philosophical criteria.[35] David Chalmerssuggests that while current LLMs lack features like recurrent processing and unified agency, advancements in AI could address these limitations within the next decade, potentially enabling systems to achieve consciousness. This perspective challenges Searle's original claim that purely "syntactic" processing cannot yield understanding or consciousness, arguing instead that such systems could have authentic mental states.[36] Searle holds a philosophical position he calls "biological naturalism": that consciousness[a]and understanding require specific biological machinery that is found in brains. He writes "brains cause minds"[37]and that "actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains".[37]Searle argues that this machinery (known inneuroscienceas the "neural correlates of consciousness") must have some causal powers that permit the human experience of consciousness.[38]Searle's belief in the existence of these powers has been criticized. Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines".[4]Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using specific machinery. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur. Biological naturalism implies that one cannot determine if the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to bothbehaviorismandfunctionalism(including "computer functionalism" or "strong AI").[39]Biological naturalism is similar toidentity theory(the position that mental states are "identical to" or "composed of" neurological events); however, Searle has specific technical objections to identity theory.[40][j]Searle's biological naturalism and strong AI are both opposed toCartesian dualism,[39]the classical idea that the brain and mind are made of different "substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter".[26] Searle's original presentation emphasized understanding—that is,mental stateswithintentionality—and did not directly address other closely related ideas such as "consciousness". 
However, in more recent presentations, Searle has included consciousness as the real target of the argument.[5] As he puts it: "Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases."[41]

David Chalmers writes, "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.[42] Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.[43] Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese. In Searle's words, "the computer has nothing more than I have in the case where I understand nothing".[44]

Patrick Hew used the Chinese Room argument to deduce requirements for military command and control systems if they are to preserve a commander's moral agency. He drew an analogy between a commander in their command center and the person in the Chinese Room, and analyzed it under a reading of Aristotle's notions of "compulsory" and "ignorance". Information could be "down converted" from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate "up conversion" into meaning. Hew cited examples from the USS Vincennes incident.[45]

The Chinese room argument is primarily an argument in the philosophy of mind, and computer scientists and artificial intelligence researchers alike consider it irrelevant to their fields.[6] However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test. Searle's arguments are not usually considered an issue for AI research. The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is "merely" a simulation. AI researchers Stuart J. Russell and Peter Norvig wrote in 2021: "We are interested in programs that behave intelligently. Individual aspects of consciousness—awareness, self-awareness, attention—can be programmed and can be part of an intelligent machine. The additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[6] Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior.
The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do.

Searle's "strong AI hypothesis" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists,[46][21] who use the term to describe machine intelligence that rivals or exceeds human intelligence—that is, artificial general intelligence, human-level AI or superintelligence. Kurzweil is referring primarily to the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that even a superintelligent machine would not necessarily have a mind and consciousness.

The Chinese room implements a version of the Turing test.[48] Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.

Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues that he was addressing. He wrote:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[48]

To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.

Computers manipulate physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic. Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax, without any knowledge of the symbols' semantics (that is, their meaning).

Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."[49][50] The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.
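To make the idea of purely syntactic symbol manipulation concrete, here is a minimal illustrative sketch in Python. The rule book and symbol names are invented placeholders, not Searle's or Newell and Simon's actual formalism; the point is only that the procedure matches and copies shapes, and nothing in it represents meaning.

    # A toy "rule book" pairing input symbol strings with output symbol strings.
    # The symbols are placeholder shapes; the program never consults a meaning.
    RULE_BOOK = {
        ("SQUIGGLE", "SQUOGGLE"): ("BLOTCH",),
        ("BLOTCH",): ("SQUIGGLE", "SQUIGGLE"),
    }

    def follow_rules(input_symbols):
        """Return the output dictated by the rule book, by shape-matching alone."""
        return RULE_BOOK.get(tuple(input_symbols), ("DEFAULT-MARK",))

    print(follow_rules(["SQUIGGLE", "SQUOGGLE"]))  # ('BLOTCH',)

Whether any amount of such shape-matching could ever amount to understanding is precisely what the argument disputes.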
Twenty-first century AI programs (such as "deep learning") do mathematical operations on huge matrices of unidentified numbers and bear little resemblance to the symbolic processing used by AI programs at the time Searle wrote his critique in 1980. Nils Nilsson describes systems like these as "dynamic" rather than "symbolic". Nilsson notes that these are essentially digitized representations of dynamic systems—the individual numbers do not have a specific semantics, but are instead samples or data points from a dynamic signal, and it is the signal being approximated which would have semantics. Nilsson argues it is not reasonable to consider these signals as "symbol processing" in the same sense as the physical symbol systems hypothesis.[51]

The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a machine that follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser). A machine with this design is known in theoretical computer science as "Turing complete", because it has the necessary machinery to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Turing writes, "all digital computers are in a sense equivalent."[52] The widely accepted Church–Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine.

The Turing completeness of the Chinese room implies that it can do whatever any other digital computer can do (albeit much, much more slowly). Thus, if the Chinese room does not or cannot contain a Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are "no refutation (but rather an affirmation)"[53] of the Chinese room argument, because these arguments actually imply that no digital computers can have a mind.[28]

There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all the abilities of a digital computer, such as being able to determine the current time.[54]

Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990.[55][k] The Chinese room thought experiment is intended to prove point A3.[l]

He begins with three axioms:

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

Searle posits that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.

This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct?[f] He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:

(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially"[56] that:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.

And from this he derives the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.

Refutations of Searle's argument take many different forms (see below). Computationalists and functionalists reject A3, arguing that "syntax" (as Searle describes it) can have "semantics" if the syntax has the right functional structure.
Eliminative materialists reject A2, arguing that minds don't actually have "semantics"—that thoughts and other mental phenomena are inherently meaningless but nevertheless function as if they had meaning.

Replies to Searle's argument may be classified according to what they claim to show:[m] Some of the arguments (robot and brain simulation, for example) fall into multiple categories.

These replies attempt to answer the question: since the man in the room does not speak Chinese, where is the mind that does? These replies address the key ontological issues of mind versus body and simulation vs. reality. All of the replies that identify the mind in the room are versions of "the system reply".

The basic version of the system reply argues that it is the "whole system" that understands Chinese.[61][n] While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part", Searle explains.[29]

Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper"[29] without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology".[29] In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing "system", and does not require anything resembling the actual biology of the brain.

Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head? Then the whole system consists of just one object: the man himself. Searle argues that if the man does not understand Chinese then the system does not understand Chinese either, because now "the system" and "the man" both describe exactly the same object.[29]

Critics of Searle's response argue that the program has allowed the man to have two minds in one head. If we assume a "mind" is a form of information processing, then the theory of computation can account for two computations occurring at once, namely (1) the computation for universal programmability (which is the function instantiated by the person and note-taking materials independently from any particular program contents) and (2) the computation of the Turing machine that is described by the program (which is instantiated by everything including the specific program).[63] The theory of computation thus formally explains the open possibility that the second computation in the Chinese Room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing machine rather than on the person's.[64] However, from Searle's perspective, this argument is circular. The question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption.

More sophisticated versions of the systems reply try to identify more precisely what "the system" is, and they differ in exactly how they describe it.
According to these replies, the "mind that speaks Chinese" could be such things as: the "software", a "program", a "running program", a simulation of the "neural correlates of consciousness", the "functional system", a "simulated mind", an "emergent property", or "a virtual mind".

Marvin Minsky suggested a version of the system reply known as the "virtual mind reply".[o] The term "virtual" is used in computer science to describe an object that appears to exist "in" a computer (or computer network) only because software makes it appear to exist. The objects "inside" computers (including files, folders, and so on) are all "virtual", except for the computer's electronic components. Similarly, Minsky argues, a computer may contain a "mind" that is virtual in the same sense as virtual machines, virtual communities and virtual reality.

To clarify the distinction between the simple systems reply given above and the virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds", thus the "system" cannot be the "mind".[68]

Searle responds that such a mind is at best a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."[69] Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that it isn't really a calculator, because the physical attributes of the device do not matter."[70] The question is: is the human mind like the pocket calculator, essentially composed of information, where a perfect simulation of the thing just is the thing? Or is the mind like the rainstorm, a thing in the world that is more than just its simulation, and not realizable in full by a computer simulation? For decades, this question of simulation has led AI researchers and philosophers to consider whether the term "synthetic intelligence" is more appropriate than the common description of such intelligences as "artificial".

These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.[p]

These replies, by themselves, do not provide any evidence that strong AI is true, however. They do not show that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing test. Searle argues that, if we are to consider strong AI remotely plausible, the Chinese Room is an example that requires explanation, and it is difficult or impossible to explain how consciousness might "emerge" from the room or how the system would have consciousness. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese"[29] and thus is dodging the question or hopelessly circular.

As far as the person in the room is concerned, the symbols are just meaningless "squiggles".
But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.

Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and the things they represent.[72][q] Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[74][r]

Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."[76]

Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols Searle manipulates are already meaningful, they are just not meaningful to him.[77][s]

Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, like a book, has no understanding of its own.[t]

Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.[75][u]

Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.[80]

To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."[81][v]

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle; what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)
Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker.[83][w] This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.

Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains."[26] Moreover, he argues:

[I]magine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.[85]

A variation asks: what if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.[86][x] It is also obvious that this system would be functionally equivalent to a brain, so if consciousness is a function, this system would be conscious.

In another variation, the brain replacement scenario, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced the neurons one at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.[88][y][z] (See Ship of Theseus for a similar thought experiment.)

These arguments (and the robot or commonsense-knowledge replies) identify some special technology that would help create conscious understanding in a machine. They may be interpreted in two ways: either they claim (1) this technology is required for consciousness, the Chinese room does not or cannot implement this technology, and therefore the Chinese room cannot pass the Turing test or (even if it did) it would not have conscious understanding. Or they may be claiming that (2) it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it.
In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned.[ac] The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle's room cannot pass the Turing test, then there is no other digital technology that could pass the Turing test. If Searle's room could pass the Turing test, but still does not have a mind, then the Turing test is not sufficient to determine if the room has a "mind". Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument.

The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."[27] If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle.

Other critics hold that the room as Searle described it does, in fact, have a mind; however, they argue that it is difficult to see—Searle's description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section).

In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's Blockhead argument[94] suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". At least in principle, any program can be rewritten (or "refactored") into this form, even a brain simulation.[ad] (A minimal sketch of such a table appears below.) In the Blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims. On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be overly specific.

Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and do not speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program you like, but I still understand nothing."[95]

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese-speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions, they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires. Several critics believe that Searle's argument relies entirely on intuitions.
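To make Block's lookup-table scenario concrete, here is a minimal sketch in Python. The states and sentences are invented for illustration; a table covering a real conversation would, as noted above, be astronomically large.

    # Every rule has the form "if the user writes S, reply with P and goto X".
    # The entire "mental state" of the system is the single number X.
    TABLE = {
        # (state X, input S): (reply P, next state X')
        (0, "hello"): ("hi there", 1),
        (1, "how are you?"): ("fine, thanks", 2),
        (1, "bye"): ("goodbye", 0),
    }

    def blockhead(state, user_input):
        """Look up the reply and next state; no other machinery exists."""
        return TABLE.get((state, user_input), ("...", state))

    state = 0
    for line in ["hello", "how are you?"]:
        reply, state = blockhead(state, line)
        print(reply)   # "hi there", then "fine, thanks"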
Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."[96]Daniel Dennettdescribes the Chinese room argument as a misleading "intuition pump"[97]and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the obvious conclusion from it."[97] Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", asDaniel Dennettexplains.[79] Many of these critiques emphasize speed and complexity of the human brain,[ae]which processes information at 100 billion operations per second (by some estimates).[99]Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions.[100]This brings the clarity of Searle's intuition into doubt. An especially vivid version of the speed and complexity reply is fromPaulandPatricia Churchland. They propose this analogous thought experiment: "Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according toMaxwell's theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!"[87]Churchland's point is that the problem is that he would have to wave the magnet up and down something like 450 trillion times per second in order to see anything.[101] Stevan Harnadis critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make aphase transitioninto the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"[102][af] Searle argues that his critics are also relying on intuitions, however his opponents' intuitions have no empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person must be "under the grip of an ideology".[29]The system reply only makes sense (to Searle) if one assumes that any "system" can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness. Several replies argue that Searle's argument is irrelevant because his assumptions about the mind and consciousness are faulty. 
Searle believes that human beings directly experience their consciousness, intentionality and the nature of the mind every day, and that this experience of consciousness is not open to question. He writes that we must "presuppose the reality and knowability of the mental."[105] The replies below question whether Searle is justified in using his own experience of consciousness to determine that it is more than mechanical symbol processing. In particular, the other minds reply argues that we cannot use our experience of consciousness to answer questions about other minds (even the mind of a computer), the epiphenomena replies question whether we can make any argument at all about something like consciousness which cannot, by definition, be detected by any experiment, and the eliminative materialist reply argues that Searle's own personal consciousness does not "exist" in the sense that Searle thinks it does.

The "Other Minds Reply" points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.[106][ag]

Nils Nilsson writes "If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I'm willing to credit him with real thought."[108]

Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and made the other minds reply.[109] He noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks."[110] The Turing test simply extends this "polite convention" to machines. He did not intend to solve the problem of other minds (for machines or people) and he did not think we needed to.[ah]

If we accept Searle's description of intentionality, consciousness, and the mind, we are forced to accept that consciousness is epiphenomenal: that it "casts no shadow", i.e. is undetectable in the outside world. Searle's "causal properties" cannot be detected by anyone outside the mind, otherwise the Chinese Room could not pass the Turing test—the people outside would be able to tell there was not a Chinese speaker in the room by detecting their causal properties. Since they cannot detect causal properties, they cannot detect the existence of the mental. Thus, Searle's "causal properties" and consciousness itself are undetectable, and anything that cannot be detected either does not exist or does not matter.

Mike Alder calls this the "Newton's Flaming Laser Sword Reply". He argues that the entire argument is frivolous, because it is non-verificationist: not only is the distinction between simulating a mind and having a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed to distinguish between the two.[112]

Daniel Dennett provides this illustration: suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being.
This is a philosophical zombie, as formulated in the philosophy of mind. This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. So, if Searle is right, it is most likely that human beings (as we see them today) are actually "zombies", who nevertheless insist they are conscious. It is impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not.[113]

Several philosophers argue that consciousness, as Searle describes it, does not exist. Daniel Dennett describes consciousness as a "user illusion".[114]

This position is sometimes referred to as eliminative materialism: the view that consciousness is not a concept that can "enjoy reduction" to a strictly mechanical description, but rather is a concept that will be simply eliminated once the way the material brain works is fully understood, in just the same way as the concept of a demon has already been eliminated from science rather than enjoying reduction to a strictly mechanical description. Other mental properties, such as original intentionality (also called "meaning", "content", and "semantic character"), are also commonly regarded as special properties related to beliefs and other propositional attitudes. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, do not exist. If eliminative materialism is the correct scientific account of human cognition, then the assumption of the Chinese room argument that "minds have mental contents (semantics)" must be rejected.[115]

Searle disagrees with this analysis and argues that "the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers."[76] He takes it as obvious that we can detect the presence of consciousness and dismisses these replies as being off the point.

Margaret Boden argued in her paper "Escaping from the Chinese Room" that even if the person in the room does not understand Chinese, it does not mean there is no understanding in the room. The person in the room at least understands the rule book used to provide output responses. She then points out that the same applies to machine languages: a natural language sentence is understood by the programming language code that instantiates it, which in turn is understood by the lower-level compiler code, and so on. This implies that the distinction between syntax and semantics is not fixed, as Searle presupposes, but relative: the semantics of natural language is realized in the syntax of the programming language, and the semantics of the programming language is in turn realized in the syntax of compiler code.
On this view, Searle's problem is his assumption of a binary notion of understanding (a system either understands or it does not) rather than a graded one, in which each lower level in the hierarchy understands less than the one above it.[116]

Searle's conclusion that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains"[26] has sometimes been described as a form of "carbon chauvinism".[117] Steven Pinker suggested that a response to that conclusion would be to make a counter thought experiment to the Chinese Room, where the incredulity goes the other way.[118] He gives as an example the short story They're Made Out of Meat, which depicts an alien race of electronic beings who, upon finding Earth, express disbelief that the meat brains of humans can experience consciousness and thought.[119]

However, Searle himself denied being a "carbon chauvinist".[120] He said "I have not tried to show that only biological based systems like our brains can think. [...] I regard this issue as up for grabs".[121] He said that even silicon machines could theoretically have human-like consciousness and thought, if the actual physical–chemical properties of silicon could be used in a way that can produce consciousness and thought, but "until we know how the brain does it we are not in a position to try to do it artificially".[122]
https://en.wikipedia.org/wiki/Chinese_room
Online participation describes the interaction between users and online communities on the web. Online communities often rely on members to provide content to the website or to contribute in some way. Examples include wikis, blogs, online multiplayer games, and other types of social platforms. Online participation is currently a heavily researched field. It provides insight into fields such as web design, online marketing, crowdsourcing, and many areas of psychology. Some subcategories that fall under online participation are: commitment to online communities, coordination and interaction, and member recruitment.

Many online communities (e.g. blogs, chat rooms, electronic mailing lists, Internet forums, imageboards, and wikis) are not only knowledge-sharing resources but also fads. Studies have shown that committed members of online communities have reasons to remain active. As long as members feel the need to contribute, there is a mutual dependence between the community and the member.

Although many researchers have identified several motivational factors behind online contribution, these theories can all be categorized under intrinsic and extrinsic motivations. Intrinsic motivation refers to an action that is driven by personal interest and internal emotions in the task itself, while extrinsic motivation refers to an action that is influenced by external factors, often for a certain outcome, reward or recognition. The two types of motivation contradict each other but often go hand in hand in cases where continual contribution is observed.

Several motivational factors lead people to continue their participation in these online communities and remain loyal. Peter Kollock researched motivations for contributing to online communities. Kollock (1999, p. 227) outlines three motivations that do not rely on altruistic behavior on the part of the contributor: anticipated reciprocity, increased recognition, and a sense of efficacy. Another motivation, which Marc Smith mentions in his 1992 thesis Voices from the WELL: The Logic of the Virtual Commons, is "communion"—a "sense of community", as it is referred to in social psychology. Simply put, such a community is made by people, for people.

A person is motivated to contribute valuable information to the group in the expectation that one will receive useful help and information in return. Indeed, there is evidence that active participants in online communities get more responses, faster, to questions than unknown participants.[5] The higher the expectation of reciprocity, the greater the chance of there being high knowledge contribution intent in an online community. Reciprocity represents a sense of fairness whereby individuals usually reciprocate the positive feedback they receive from others, so that they can in return get more useful knowledge from others in the future.

Research has shown that self-esteem needs of recognition from others lead to expectations of reciprocity.[6] Self-esteem plays such an important role in the need for reciprocity because contributing to online communities can be an ego booster for many types of users. The more positive feedback contributors get from other members of their community, the closer they may feel to being considered an expert in the knowledge they are sharing.
Because of this, contributing to online communities can lead to a sense of self-value and respect, based on the level of positive feedback reciprocated from the community.

A study of participation in eBay's reputation system demonstrated that the expectation of reciprocal behavior from partners increases participation from self-interested eBay buyers and sellers. Standard economic theory predicts that people are not inclined to contribute voluntarily to the provision of such public goods but, rather, tend to free-ride on the contributions of others.[7] Nevertheless, empirical results from eBay show that buyers submit ratings for more than 50% of transactions.[8][9] The main takeaways were that experienced users tend to rate more frequently, and that the motivation for leaving comments stems not from pure altruism targeted at the specific transaction partner, but from self-interest, reciprocity, and the "warm glow" feeling of contribution.

Some theories support altruism as being a key motivator in online participation and reciprocity. Although evidence from sociology, economics, political science, and social psychology shows that altruism is part of human nature, recent research reveals that the pure altruism model lacks predictive power in many situations. Several authors have proposed combining a "joy-of-giving" (sometimes also referred to as "warm glow") motive with altruism to create a model of impure altruism.[10][11] Different from altruism, reciprocity represents a pattern of behavior where people respond to friendly or hostile actions with similar actions, even if no material gains are expected.[12]

Voluntary participation in online feedback mechanisms seems to be largely motivated by self-interest. Because their reputation is on the line, the eBay study showed that some partners using eBay's feedback mechanism had selfish motivations to rate others. For example, data showed that some eBay users exhibited reciprocity towards partners who rated them first. This caused some to rate partners in the hope of increasing the probability of eliciting a reciprocal response.[13]

Recognition is important to online contributors: in general, individuals want recognition for their contributions. Some have called this egoboo. Kollock outlines the importance of reputation online: "Rheingold (1993), in his discussion of the WELL (an early online community), lists the desire for prestige as one of the key motivations of individuals' contributions to the group. To the extent this is the concern of an individual, contributions will likely be increased to the degree that the contribution is visible to the community as a whole and to the extent there is some recognition of the person's contributions. ... the powerful effects of seemingly trivial markers of recognition (e.g. being designated as an 'official helper') has been commented on in a number of online communities..."

One of the key ingredients of encouraging a reputation is to allow contributors to be known rather than anonymous. The following example, from Meyers's (1989) study of the computer underground, illustrates the power of reputation. When involved in illegal activities, computer hackers must protect their personal identities with pseudonyms.
If hackers use the same nicknames repeatedly, this can help the authorities to trace them. Nevertheless, hackers are reluctant to change their pseudonyms regularly, because the status and fame associated with a particular nickname would be lost.

Profiles and reputation are clearly evident in online communities today. Amazon.com is a case in point: all contributors are allowed to create profiles about themselves, and as their contributions are rated by the community, their reputation increases. Myspace.com encourages elaborate profiles, where members can share all kinds of information about themselves, including what music they like, their heroes, and so on. Displaying photos and information about individual members and their recent activities on social networking websites can promote bonds-based commitment. Because social interaction is the primary basis for building and maintaining social bonds, we can gain appreciation for other users once we interact with them.[14] This appreciation turns into increased recognition for the contributors, which in turn gives them an incentive to contribute more. In addition, many communities give incentives for contributing. For example, many forums award members points for posting, which members can spend in a virtual store.

eBay is an example of an online marketplace where reputation is very important, because it is used to measure the trustworthiness of someone you may potentially do business with. This type of community is known as a reputation system: a type of collaborative filtering algorithm that attempts to collect, distribute, and aggregate ratings of all users' past behavior within an online community, in an effort to strike a balance between the democratic principles of open publishing and maintaining standards of quality.[15] These systems, like eBay's, promote the idea of trust that relates to expectations of reciprocity, which can help increase the sense of reputation for each member. With eBay, you have the opportunity to rate your experience with someone and they, likewise, can rate you. This affects the reputation score. The participants may therefore be encouraged to manage their online identity in order to make a good impression on the other members of the community.

Other successful online communities have reputation systems that do not provide any concrete incentive. For example, Reddit is an online social content-aggregation community which serves as a "front page of the Internet" and allows its users to submit content (e.g. text, photos, links, news articles, blog posts, music or videos) under sometimes ambiguous usernames. It features a reputation system by which users can rate the quality of submissions and comments. The total vote count of a user's submissions has no practical value; however, when users feel that their content is generally appreciated by the rest of the Reddit community (or its sub-communities, called "subreddits"), they may be motivated to contribute more.

Individuals may contribute valuable information because the act results in a sense of efficacy, that is, a sense that they are capable of achieving their desired outcome and have some effect on their environment. There is a well-developed research literature showing how important a person's sense of efficacy is (e.g. Bandura 1995).
Studies have shown that increasing a user's sense of efficacy boosts their intrinsic motivation and therefore makes them more likely to stay in an online community. According to Wang and Fesenmaier's research, efficacy is the biggest factor affecting active contribution online. Of the many sub-factors, it was discovered that "satisfying other members' needs" is the biggest driver of efficacy in a member, followed by "being helpful to others" (Wang and Fesenmaier).[16] Features such as task progress bars, and attempts to reduce the difficulty of completing a general task, can enhance the feeling of self-worth in the community. "Creating immersive experiences with clear goals, feedback and challenge that exercise peoples' skills to the limits but still leave them in control causes the experiences to be intrinsically interesting. Positive but constructive and sincere feedbacks also produce similar effects and increase motivation to complete more tasks. A competitive setting—which may or may not have been intended to be competitive—can also increase a person's self-esteem if quality performance is assumed" (Kraut 2012).[17]

People, in general, are social beings and are motivated by receiving direct responses to their contributions. Most online communities enable this by allowing people to reply to others' contributions (e.g. many blogs allow comments from readers, one can reply to forum posts, etc.). Granted, there is some overlap between improving one's reputation and gaining a sense of community, and it seems safe to say that there are also some overlapping areas between all four motivators.

While some people are active contributors to online discussion, others join virtual communities without actively participating, a behavior referred to as lurking (Preece 2009). There are several reasons why people choose not to participate online. For instance, users may get the information they wanted without actively participating, think they are being helpful by not posting, want to learn more about the community before becoming an active member, be unable to use the software provided, or dislike the dynamics they observe within the group (Preece, Nonnecke & Andrews 2004). When online communities have lurking members, the amount of participation within the group decreases and the sense of community for these lurking members also diminishes. Online participation increases the sense of community for all members, as well as giving them a motivation to continue participating.

Other problems regarding a sense of community arise when the online community attempts to attract and retain newcomers. These problems include the difficulty of recruiting newcomers, keeping them committed early on, and controlling possible inappropriate behavior. If an online community is able to solve these problems with its newcomers, then it can increase the sense of community within the entire group. A sense of community is also heightened in online communities when each person has a willingness to participate, due to intrinsic and extrinsic motivations. Findings also show that newcomers may be unaware that an online social networking website even has a community. As these users build their own profiles and get used to the culture of the group over time, they eventually self-identify with the community and develop a sense of belonging.

Another motivation for participation may also come from self-expression, through what is being shared in or created for online communities.
Self-discovery may be another motivation,[18] as many online communities allow for feedback on personal beliefs, artistic creations, ideas and the like, which may provide grounds for developing new perspectives on the self. Depending on the platform, shared content can be seen by millions around the world, which gives participants a certain influence that can itself serve as a motivation for participation. Additionally, high participation may grant a user special rights within a community (such as modship), which can be built into the technical platform, or granted by the community (e.g. via voting) or by certain users.

Online participation may also be motivated by an instrumental purpose, such as providing specific information.[18] The entertainment of playing or otherwise interacting with other users may be a major motivation for participants in certain communities.[18]

Users of social networks have various reasons that motivate them to join particular networks. In general, "communication technologies open up new pathways between individuals who would not otherwise connect".[19] The ability to have synchronous communication arrived with the development of online social networks. Facebook is one example of an online social network that people choose to openly participate in. Although there are a number of different social networking platforms available, there exists a large community of people who choose to actively engage on Facebook. Although Facebook is commonly known as a method of communication, there are a variety of reasons why users prefer Facebook over other platforms as their social networking platform. For some users, the interactivity between themselves and other users is a matter of fidelity.[20]

For many, it is important to maintain a sense of community. Through participation in online social networks, it becomes easier for users to find and communicate with people within their community. Facebook often recommends friends based on the geography of the user.[21] This allows users to quickly connect with people in their area whom they may not see often, and stay in contact with them.

For students, Facebook is an effective network to participate in when building and maintaining social capital.[22] By adding family, friends, acquaintances, and colleagues who use the network, students can expand their social capital. The online connections they make may prove beneficial later on. Due to the competitive nature of the job market, "[i]t is particularly important for university students to build social capital with the industry".[22] Since Facebook has a large number of active users, it is easier for students to find out about job opportunities through their friends online.

Facebook's interface allows users to share content, such as status updates, photos and links, and to keep in contact with people they may not be able to see on a day-to-day basis. The messenger application allows friends to have conversations hidden from their other friends. Users can also create groups and events through Facebook in order to share information with specific people on the network. "Facebook encourages users to engage in self-promoting".[23] Facebook allows users to engage in self-promotion in a positive way; it allows friends to like and/or comment on posts and statuses. Facebook users are also able to "follow" people they may not be friends with, such as public figures, companies, or celebrities.
This allows users to keep up to date with things that interest them, like music, sports, and promotions from their favorite companies, and share them with their Facebook friends.

Aside from features such as email, the photo album, and status updates, Facebook provides various additional features that help to individualize each user's experience.[23] Whereas some social networks have a fixed interface that users cannot tailor to their specific interests, Facebook allows users to control certain preferences. Users can use "add-in functions (e.g., virtual pets, online games, the wall, virtual gifts) that facilitate users to customize their own interface on Facebook".[23]

Studies have found that the nature and level of participation in online social networking sites are directly correlated with the personality of the participants. The Department of Psychology at the University of Windsor cites its findings regarding this correlation in the articles "Personality and motivations associated with Facebook use" and "The Influence of Shyness on the Use of Facebook in an Undergraduate Sample". The articles state that people who have high levels of anxiety, stress, or shyness are more likely to favor socializing through the Internet than in-person socialization. The reason is that they are able to communicate with others without being face-to-face, and media such as chat rooms give a sense of anonymity which makes them feel more comfortable when participating in discussions with others.

Studies also show that in order to increase online participation, contributors must feel unique and useful, and be given challenging and specific goals. These findings fall in line with the social psychology theories of social loafing and goal setting. Social loafing claims that when people are involved in a group setting, they tend not to contribute as much and to depend on the work of others. Goal setting is the theory that people will work harder if given a specific goal rather than a broad or general problem. However, some other social psychology theories fail to carry over to online participation. For instance, contrary to what social loafing would predict, one study found that users will contribute more to an online group project than to an individual one. Additionally, although users enjoy it when their contributions are unique, they also want a sense of similarity within the online community. Finding similarities with other members of a community encourages new users to participate more and become more active within the community, so new users must be able to find and recognize similar users already participating in the community. Also, the online community must provide a way of analyzing and quantifying each user's contributions, to make those contributions visible and to help convince contributors that they are unique and useful. However, these and other psychological motivations behind online participation are still being researched today.

Research has shown that social characteristics, such as socioeconomic status, gender, and age, affect users' propensity to participate online. Following sociological research on the digital divide, newer studies indicate a participation divide in the United States (Correa 2010) (Hargittai & Walejko 2008) (Schradie 2011) and the United Kingdom (Blank 2013). Age is the strongest demographic predictor of online participation, while gender differentiates forms of online participation.
The effect of socioeconomic status is not found to be strong in all studies (Correa 2010) and is (partly) mediated through online skills (Hargittai & Walejko 2008) and self-efficacy. Furthermore, existing social science research on online participation has heavily focused on the political sphere, neglecting other areas such as education, health or cultural participation (Lutz, Hoffmann & Meckel 2014). Online participation is relevant in many different systems of the social web.

Nielsen's 90-9-1 rule states: "In most online communities, 90% of users are lurkers who never contribute, 9% of users contribute a little, and 1% of users account for almost all the action". The majority of the user population thus does not contribute to the informational gain of online communities, which leads to the phenomenon of contribution inequality (a rough numerical sketch appears below). Often, feedback, opinions and editorials are posted by those users who have stronger feelings towards the matter than most others; thus it is often the case that posts online are not representative of the entire population, leading to what is called survivorship bias. Therefore, it is important to ease the process of contribution as well as to promote quality contribution to address this concern. Lior Zalmanson and Gal Oestreicher-Singer showed that participation in social websites can help boost subscription and conversion rates on these websites.[24][25]
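As a rough, purely illustrative rendering of the 90-9-1 contribution inequality described above, here is a small Python sketch. The community size and per-user posting rates are assumed for illustration and are not taken from the cited studies.

    members = 10_000
    groups = {
        # group: (share of members, assumed posts per member)
        "lurkers": (0.90, 0),
        "intermittent contributors": (0.09, 5),
        "heavy contributors": (0.01, 500),
    }
    posts = {g: int(members * share) * rate for g, (share, rate) in groups.items()}
    total = sum(posts.values())
    for g, n in posts.items():
        print(f"{g}: {n} posts ({100 * n / total:.0f}% of all content)")
    # heavy contributors: 50000 posts (92% of all content)

Even under these toy numbers, the 1% accounts for over 90% of the content, which is the sense in which posted opinions may fail to represent the user population as a whole.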
https://en.wikipedia.org/wiki/Online_participation
In mathematics, a cyclic order is a way to arrange a set of objects in a circle.[nb] Unlike most structures in order theory, a cyclic order is not modeled as a binary relation, such as "a < b". One does not say that east is "more clockwise" than west. Instead, a cyclic order is defined as a ternary relation [a, b, c], meaning "after a, one reaches b before c". For example, [June, October, February], but not [June, February, October]. A ternary relation is called a cyclic order if it is cyclic, asymmetric, transitive, and connected. Dropping the "connected" requirement results in a partial cyclic order. A set with a cyclic order is called a cyclically ordered set or simply a cycle.[nb] Some familiar cycles are discrete, having only a finite number of elements: there are seven days of the week, four cardinal directions, twelve notes in the chromatic scale, and three plays in rock-paper-scissors. In a finite cycle, each element has a "next element" and a "previous element". There are also cyclic orders with infinitely many elements, such as the oriented unit circle in the plane. Cyclic orders are closely related to the more familiar linear orders, which arrange objects in a line. Any linear order can be bent into a circle, and any cyclic order can be cut at a point, resulting in a line. These operations, along with the related constructions of intervals and covering maps, mean that questions about cyclic orders can often be transformed into questions about linear orders. Cycles have more symmetries than linear orders, and they often naturally occur as residues of linear structures, as in the finite cyclic groups or the real projective line. A cyclic order on a set X with n elements is like an arrangement of X on a clock face, for an n-hour clock. Each element x in X has a "next element" and a "previous element", and taking either successors or predecessors cycles exactly once through the elements as x(1), x(2), ..., x(n). There are a few equivalent ways to state this definition. A cyclic order on X is the same as a permutation that makes all of X into a single cycle, which is a special type of permutation, a circular permutation. Alternatively, a cycle with n elements is also a Z_n-torsor: a set with a free transitive action by a finite cyclic group.[1] Another formulation is to make X into the standard directed cycle graph on n vertices, by some matching of elements to vertices. It can be instinctive to use cyclic orders for symmetric functions, for example as in xy + yz + zx, where writing the final monomial as xz would distract from the pattern. A substantial use of cyclic orders is in the determination of the conjugacy classes of free groups. Two elements g and h of the free group F on a set Y are conjugate if and only if, when they are written as products of elements y and y⁻¹ with y in Y, and then those products are put in cyclic order, the cyclic orders are equivalent under the rewriting rules that allow one to remove or add adjacent y and y⁻¹. A cyclic order on a set X can be determined by a linear order on X, but not in a unique way. Choosing a linear order is equivalent to choosing a first element, so there are exactly n linear orders that induce a given cyclic order. Since there are n! possible linear orders (as in permutations), there are (n − 1)! possible cyclic orders (as in circular permutations). An infinite set can also be ordered cyclically. Important examples of infinite cycles include the unit circle, S¹, and the rational numbers, Q. The basic idea is the same: we arrange elements of the set around a circle.
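Before moving to the infinite case, the finite definition can be made concrete. The sketch below (illustrative Python; the function names are ours, not from any standard library) builds the ternary relation [a, b, c] from a clock-face arrangement; any of the n rotations of the arrangement induces the same relation, matching the (n − 1)! count above.

```python
def make_cyclic_order(arrangement):
    """Return the ternary relation [a, b, c]: after a, one reaches b before c."""
    n = len(arrangement)
    index = {x: i for i, x in enumerate(arrangement)}

    def rel(a, b, c):
        # Clockwise distances from a; 0 would mean "a itself".
        db = (index[b] - index[a]) % n
        dc = (index[c] - index[a]) % n
        return 0 < db < dc  # b strictly after a, and strictly before c

    return rel

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
cyc = make_cyclic_order(months)
assert cyc("Jun", "Oct", "Feb")       # [June, October, February]
assert not cyc("Jun", "Feb", "Oct")   # but not [June, February, October]
```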
However, in the infinite case we cannot rely upon an immediate successor relation, because points may not have successors. For example, given a point on the unit circle, there is no "next point". Nor can we rely upon a binary relation to determine which of two points comes "first". Traveling clockwise on a circle, neither east nor west comes first, but each follows the other. Instead, we use a ternary relation denoting that elements a, b, c occur after each other (not necessarily immediately) as we go around the circle. For example, in clockwise order, [east, south, west]. By currying the arguments of the ternary relation [a, b, c], one can think of a cyclic order as a one-parameter family of binary order relations, called cuts, or as a two-parameter family of subsets of K, called intervals. The general definition is as follows: a cyclic order on a set X is a relation C ⊂ X³, written [a, b, c], that satisfies the following axioms:[nb] cyclicity (if [a, b, c], then [b, c, a]), asymmetry (if [a, b, c], then not [c, b, a]), transitivity (if [a, b, c] and [a, c, d], then [a, b, d]), and totality (if a, b, c are distinct, then either [a, b, c] or [c, b, a]). The axioms are named by analogy with the asymmetry, transitivity, and connectedness axioms for a binary relation, which together define a strict linear order. Edward Huntington (1916, 1924) considered other possible lists of axioms, including one list that was meant to emphasize the similarity between a cyclic order and a betweenness relation. A ternary relation that satisfies the first three axioms, but not necessarily the axiom of totality, is a partial cyclic order. Given a linear order < on a set X, the cyclic order on X induced by < is defined as follows:[2] [a, b, c] if and only if a < b < c, or b < c < a, or c < a < b. Two linear orders induce the same cyclic order if they can be transformed into each other by a cyclic rearrangement, as in cutting a deck of cards.[3] One may define a cyclic order relation as a ternary relation that is induced by a strict linear order as above.[4] Cutting a single point out of a cyclic order leaves a linear order behind. More precisely, given a cyclically ordered set (K, [⋅, ⋅, ⋅]), each element a ∈ K defines a natural linear order <_a on the remainder of the set, K ∖ {a}, by the following rule:[5] x <_a y if and only if [a, x, y]. Moreover, <_a can be extended by adjoining a as a least element; the resulting linear order on K is called the principal cut with least element a. Likewise, adjoining a as a greatest element results in a cut <^a.[6] Given two elements a ≠ b ∈ K, the open interval from a to b, written (a, b), is the set of all x ∈ K such that [a, x, b]. The system of open intervals completely defines the cyclic order and can be used as an alternate definition of a cyclic order relation.[7] An interval (a, b) has a natural linear order given by <_a. One can define half-closed and closed intervals [a, b), (a, b], and [a, b] by adjoining a as a least element and/or b as a greatest element.[8] As a special case, the open interval (a, a) is defined as the cut K ∖ {a}.
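As a quick illustration of cuts, the sketch below (again illustrative Python, reusing make_cyclic_order from above) linearly orders the remaining elements by x <_a y exactly when [a, x, y]:

```python
from functools import cmp_to_key

def principal_cut(arrangement, a, rel):
    """Linear order on the rest of the cycle: x <_a y iff [a, x, y]."""
    rest = [x for x in arrangement if x != a]
    # Totality of the cyclic order makes this pairwise comparison consistent.
    return sorted(rest, key=cmp_to_key(lambda x, y: -1 if rel(a, x, y) else 1))

compass = ["E", "S", "W", "N"]            # compass points, clockwise
cyc = make_cyclic_order(compass)
print(principal_cut(compass, "N", cyc))   # ['E', 'S', 'W']
```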
More generally, a proper subset S of K is called convex if it contains an interval between every pair of points: for a ≠ b ∈ S, either (a, b) or (b, a) must also be in S.[9] A convex set is linearly ordered by the cut <_x for any x not in the set; this ordering is independent of the choice of x. As a circle has a clockwise order and a counterclockwise order, any set with a cyclic order has two senses. A bijection of the set that preserves the order is called an ordered correspondence. If the sense is maintained as before, it is a direct correspondence, otherwise it is called an opposite correspondence.[10] Coxeter uses a separation relation to describe cyclic order, and this relation is strong enough to distinguish the two senses of cyclic order. The automorphisms of a cyclically ordered set may be identified with C2, the two-element group, of direct and opposite correspondences. The "cyclic order = arranging in a circle" idea works because any subset of a cycle is itself a cycle. In order to use this idea to impose cyclic orders on sets that are not actually subsets of the unit circle in the plane, it is necessary to consider functions between sets. A function between two cyclically ordered sets, f : X → Y, is called a monotonic function or a homomorphism if it pulls back the ordering on Y: whenever [f(a), f(b), f(c)], one has [a, b, c]. Equivalently, f is monotone if whenever [a, b, c] and f(a), f(b), and f(c) are all distinct, then [f(a), f(b), f(c)]. A typical example of a monotone function is the following function on the cycle with 6 elements: A function is called an embedding if it is both monotone and injective.[nb] Equivalently, an embedding is a function that pushes forward the ordering on X: whenever [a, b, c], one has [f(a), f(b), f(c)]. As an important example, if X is a subset of a cyclically ordered set Y, and X is given its natural ordering, then the inclusion map i : X → Y is an embedding. Generally, an injective function f from an unordered set X to a cycle Y induces a unique cyclic order on X that makes f an embedding. A cyclic order on a finite set X can be determined by an injection into the unit circle, X → S¹. There are many possible functions that induce the same cyclic order; in fact, infinitely many. In order to quantify this redundancy, it takes a more complex combinatorial object than a simple number. Examining the configuration space of all such maps leads to the definition of an (n − 1)-dimensional polytope known as a cyclohedron. Cyclohedra were first applied to the study of knot invariants;[11] they have more recently been applied to the experimental detection of periodically expressed genes in the study of biological clocks.[12] The category of homomorphisms of the standard finite cycles is called the cyclic category; it may be used to construct Alain Connes' cyclic homology. One may define a degree of a function between cycles, analogous to the degree of a continuous mapping. For example, the natural map from the circle of fifths to the chromatic circle is a map of degree 7. One may also define a rotation number. The set of all cuts is cyclically ordered by the following relation: [<_1, <_2, <_3] if and only if there exist x, y, z such that:[17] A certain subset of this cycle of cuts is the Dedekind completion of the original cycle. Starting from a cyclically ordered set K, one may form a linear order by unrolling it along an infinite line.
This captures the intuitive notion of keeping track of how many times one goes around the circle. Formally, one defines a linear order on the Cartesian product Z × K, where Z is the set of integers, by fixing an element a and requiring that for all i:[18] For example, the months January 2025, May 2025, September 2025, and January 2026 occur in that order. This ordering of Z × K is called the universal cover of K.[nb] Its order type is independent of the choice of a, but the notation is not, since the integer coordinate "rolls over" at a. For example, although the cyclic order of pitch classes is compatible with the A-to-G alphabetical order, C is chosen to be the first note in each octave, so in note-octave notation, B3 is followed by C4. The inverse construction starts with a linearly ordered set and coils it up into a cyclically ordered set. Given a linearly ordered set L and an order-preserving bijection T : L → L with unbounded orbits, the orbit space L/T is cyclically ordered by the requirement:[7][nb] In particular, one can recover K by defining T(x_i) = x_{i+1} on Z × K. There are also n-fold coverings for finite n; in this case, one cyclically ordered set covers another cyclically ordered set. For example, the 24-hour clock is a double cover of the 12-hour clock. In geometry, the pencil of rays emanating from a point in the oriented plane is a double cover of the pencil of unoriented lines passing through the same point.[19] These covering maps can be characterized by lifting them to the universal cover.[7] Given a cyclically ordered set (K, [ ]) and a linearly ordered set (L, <), the (total) lexicographic product is a cyclic order on the product set K × L, defined by [(a, x), (b, y), (c, z)] if one of the following holds:[20] The lexicographic product K × L globally looks like K and locally looks like L; it can be thought of as K copies of L. This construction is sometimes used to characterize cyclically ordered groups.[21] One can also glue together different linearly ordered sets to form a circularly ordered set. For example, given two linearly ordered sets L1 and L2, one may form a circle by joining them together at positive and negative infinity. A circular order on the disjoint union L1 ∪ L2 ∪ {−∞, ∞} is defined by ∞ < L1 < −∞ < L2 < ∞, where the induced ordering on L1 is the opposite of its original ordering. For example, the set of all longitudes is circularly ordered by joining all points west and all points east, along with the prime meridian and the 180th meridian. Kuhlmann, Marshall & Osiak (2011) use this construction while characterizing the spaces of orderings and real places of double formal Laurent series over a real closed field.[22] The open intervals form a base for a natural topology, the cyclic order topology. The open sets in this topology are exactly those sets which are open in every compatible linear order.[23] To illustrate the difference, in the set [0, 1), the subset [0, 1/2) is a neighborhood of 0 in the linear order but not in the cyclic order. Interesting examples of cyclically ordered spaces include the conformal boundary of a simply connected Lorentz surface[24] and the leaf space of a lifted essential lamination of certain 3-manifolds.[25] Discrete dynamical systems on cyclically ordered spaces have also been studied.[26] The interval topology forgets the original orientation of the cyclic order. This orientation can be restored by enriching the intervals with their induced linear orders; then one has a set covered with an atlas of linear orders that are compatible where they overlap.
In other words, a cyclically ordered set can be thought of as a locally linearly ordered space: an object like a manifold, but with order relations instead of coordinate charts. This viewpoint makes it easier to be precise about such concepts as covering maps. The generalization to a locally partially ordered space is studied in Roll (1993); see also Directed topology. A cyclically ordered group is a set with both a group structure and a cyclic order, such that left and right multiplication both preserve the cyclic order. Cyclically ordered groups were first studied in depth by Ladislav Rieger in 1947.[27] They are a generalization of cyclic groups: the infinite cyclic group Z and the finite cyclic groups Z/n. Since a linear order induces a cyclic order, cyclically ordered groups are also a generalization of linearly ordered groups: the rational numbers Q, the real numbers R, and so on. Some of the most important cyclically ordered groups fall into neither previous category: the circle group T and its subgroups, such as the subgroup of rational points. Every cyclically ordered group can be expressed as a quotient L/Z, where L is a linearly ordered group and Z is a cyclic cofinal subgroup of L. Every cyclically ordered group can also be expressed as a subgroup of a product T × L, where L is a linearly ordered group. If a cyclically ordered group is Archimedean or compact, it can be embedded in T itself.[28] A partial cyclic order is a ternary relation that generalizes a (total) cyclic order in the same way that a partial order generalizes a total order. It is cyclic, asymmetric, and transitive, but it need not be total. An order variety is a partial cyclic order that satisfies an additional spreading axiom.[29] Replacing the asymmetry axiom with a complementary version results in the definition of a co-cyclic order. Appropriately total co-cyclic orders are related to cyclic orders in the same way that ≤ is related to <. A cyclic order obeys a relatively strong 4-point transitivity axiom. One structure that weakens this axiom is a CC system: a ternary relation that is cyclic, asymmetric, and total, but generally not transitive. Instead, a CC system must obey a 5-point transitivity axiom and a new interiority axiom, which constrains the 4-point configurations that violate cyclic transitivity.[30] A cyclic order is required to be symmetric under cyclic permutation, [a, b, c] ⇒ [b, c, a], and asymmetric under reversal: [a, b, c] ⇒ ¬[c, b, a]. A ternary relation that is asymmetric under cyclic permutation and symmetric under reversal, together with appropriate versions of the transitivity and totality axioms, is called a betweenness relation. A quaternary relation called point-pair separation distinguishes the two intervals that a point-pair determines on a circle. The relationship between a circular order and a point-pair separation is analogous to the relationship between a linear order and a betweenness relation.[31] Evans, Macpherson & Ivanov (1997) provide a model-theoretic description of the covering maps of cycles. Tararin (2001, 2002) studies groups of automorphisms of cycles with various transitivity properties. Giraudet & Holland (2002) characterize cycles whose full automorphism groups act freely and transitively. Campero-Arena & Truss (2009) characterize countable colored cycles whose automorphism groups act transitively. Truss (2009) studies the automorphism group of the unique (up to isomorphism) countable dense cycle. Kulpeshov & Macpherson (2005) study minimality conditions on circularly ordered structures, i.e.
models of first-order languages that include a cyclic order relation. These conditions are analogues of o-minimality and weak o-minimality for the case of linearly ordered structures. Kulpeshov (2006, 2009) continues with some characterizations of ω-categorical structures.[32] Hans Freudenthal has emphasized the role of cyclic orders in cognitive development, as a contrast to Jean Piaget, who addresses only linear orders. Some experiments have been performed to investigate the mental representations of cyclically ordered sets, such as the months of the year. ^ cyclic order: The relation may be called a cyclic order (Huntington 1916, p. 630), a circular order (Huntington 1916, p. 630), a cyclic ordering (Kok 1973, p. 6), or a circular ordering (Mosher 1996, p. 109). Some authors call such an ordering a total cyclic order (Isli & Cohn 1998, p. 643), a complete cyclic order (Novák 1982, p. 462), a linear cyclic order (Novák 1984, p. 323), or an l-cyclic order or ℓ-cyclic order (Černák 2001, p. 32), to distinguish it from the broader class of partial cyclic orders, which they call simply cyclic orders. Finally, some authors may take cyclic order to mean an unoriented quaternary separation relation (Bowditch 1998, p. 155). ^ cycle: A set with a cyclic order may be called a cycle (Novák 1982, p. 462) or a circle (Giraudet & Holland 2002, p. 1). The above variations also appear in adjective form: cyclically ordered set (cyklicky uspořádané množiny, Čech 1936, p. 23), circularly ordered set, total cyclically ordered set, complete cyclically ordered set, linearly cyclically ordered set, l-cyclically ordered set, ℓ-cyclically ordered set. All authors agree that a cycle is totally ordered. ^ ternary relation: There are a few different symbols in use for a cyclic relation. Huntington (1916, p. 630) uses concatenation: ABC. Čech (1936, p. 23) and Novák (1982, p. 462) use ordered triples and the set membership symbol: (a, b, c) ∈ C. Megiddo (1976, p. 274) uses concatenation and set membership: abc ∈ C, understanding abc as a cyclically ordered triple. The literature on groups, such as Świerczkowski (1959a, p. 162) and Černák & Jakubík (1987, p. 157), tends to use square brackets: [a, b, c]. Giraudet & Holland (2002, p. 1) use round parentheses: (a, b, c), reserving square brackets for a betweenness relation. Campero-Arena & Truss (2009, p. 1) use a function-style notation: R(a, b, c). Rieger (1947, cited after Pecinová 2008, p. 82) uses a "less-than" symbol as a delimiter: <x, y, z<. Some authors use infix notation: a < b < c, with the understanding that this does not carry the usual meaning of a < b and b < c for some binary relation < (Černy 1978, p. 262). Weinstein (1996, p. 81) emphasizes the cyclic nature by repeating an element: p ↪ r ↪ q ↪ p. ^ embedding: Novák (1984, p. 332) calls an embedding an "isomorphic embedding". ^ roll: In this case, Giraudet & Holland (2002, p. 2) write that K is L "rolled up". ^ orbit space: The map T is called archimedean by Bowditch (2004, p. 33), coterminal by Campero-Arena & Truss (2009, p. 582), and a translation by McMullen (2009, p. 10). ^ universal cover: McMullen (2009, p. 10) calls Z × K the "universal cover" of K. Giraudet & Holland (2002, p. 3) write that K is Z × K "coiled". Freudenthal & Bauer (1974, p. 10) call Z × K the "∞-times covering" of K. Often this construction is written as the anti-lexicographic order on K × Z.
https://en.wikipedia.org/wiki/Cyclic_order
XGBoost[2] (eXtreme Gradient Boosting) is an open-source software library which provides a regularizing gradient boosting framework for C++, Java, Python,[3] R,[4] Julia,[5] Perl,[6] and Scala. It works on Linux, Microsoft Windows,[7] and macOS.[8] From the project description, it aims to provide a "Scalable, Portable and Distributed Gradient Boosting (GBM, GBRT, GBDT) Library". It runs on a single machine, as well as on the distributed processing frameworks Apache Hadoop, Apache Spark, Apache Flink, and Dask.[9][10] XGBoost gained much popularity and attention in the mid-2010s as the algorithm of choice for many winning teams of machine learning competitions.[11] XGBoost started as a research project by Tianqi Chen[12] as part of the Distributed (Deep) Machine Learning Community (DMLC) group at the University of Washington. Initially, it was a terminal application which could be configured using a libsvm configuration file. It became well known in ML competition circles after its use in the winning solution of the Higgs Machine Learning Challenge. Soon after, the Python and R packages were built, and XGBoost now has package implementations for Java, Scala, Julia, Perl, and other languages. This brought the library to more developers and contributed to its popularity among the Kaggle community, where it has been used for a large number of competitions.[11] It was soon integrated with a number of other packages, making it easier to use in their respective communities. It has now been integrated with scikit-learn for Python users and with the caret package for R users. It can also be integrated into data flow frameworks like Apache Spark, Apache Hadoop, and Apache Flink using the abstracted Rabit[13] and XGBoost4J.[14] XGBoost is also available on OpenCL for FPGAs.[15] An efficient, scalable implementation of XGBoost has been published by Tianqi Chen and Carlos Guestrin.[16] While the XGBoost model often achieves higher accuracy than a single decision tree, it sacrifices the intrinsic interpretability of decision trees. For example, following the path that a decision tree takes to make its decision is trivial and self-explanatory, but following the paths of hundreds or thousands of trees is much harder. Salient features of XGBoost which make it different from other gradient boosting algorithms include:[17][18][16] XGBoost works as Newton–Raphson in function space, unlike gradient boosting, which works as gradient descent in function space; a second-order Taylor approximation of the loss function is used to make the connection to the Newton–Raphson method. A generic unregularized XGBoost algorithm is: Input: a training set {(x_i, y_i)}_{i=1}^{N}, a differentiable loss function L(y, F(x)), a number of weak learners M, and a learning rate α. Algorithm:
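The algorithm listing itself is not reproduced in this extract. To illustrate the Newton step it refers to: with squared-error loss L = (y − F)²/2, the per-example gradients are g_i = F(x_i) − y_i and the Hessians are h_i = 1, and the Newton-optimal weight of each leaf is −Σg/Σh. The following Python sketch shows second-order (Newton) boosting with decision stumps as weak learners; it is an illustrative toy under those assumptions, not XGBoost's actual implementation, and all names in it are ours.

```python
import numpy as np

def fit_newton_boosting(X, y, M=20, alpha=0.3):
    """Unregularized Newton boosting with squared-error loss and stumps."""
    base = y.mean()
    F = np.full(len(y), base)              # initial constant prediction
    stumps = []
    for _ in range(M):
        g = F - y                          # first derivatives of L = (F - y)^2 / 2
        h = np.ones_like(g)                # second derivatives (constant here)
        best = None
        for j in range(X.shape[1]):        # exhaustive stump search
            for t in np.unique(X[:, j])[:-1]:     # skip the max: empty right side
                left = X[:, j] <= t
                Gl, Hl = g[left].sum(), h[left].sum()
                Gr, Hr = g[~left].sum(), h[~left].sum()
                score = Gl**2 / Hl + Gr**2 / Hr   # gain of the 2nd-order objective
                if best is None or score > best[0]:
                    best = (score, j, t, -Gl / Hl, -Gr / Hr)  # Newton leaf weights
        _, j, t, wl, wr = best
        F += alpha * np.where(X[:, j] <= t, wl, wr)
        stumps.append((j, t, wl, wr))
    return base, stumps

def predict(base, stumps, X, alpha=0.3):
    F = np.full(X.shape[0], base)
    for j, t, wl, wr in stumps:
        F += alpha * np.where(X[:, j] <= t, wl, wr)
    return F

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
base, stumps = fit_newton_boosting(X, y)
print(np.mean((predict(base, stumps, X) - y) ** 2))   # small training error
```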
https://en.wikipedia.org/wiki/XGBoost
The dead Internet theory is a conspiracy theory asserting that, due to a coordinated and intentional effort, the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation in order to control the population and minimize organic human activity.[1][2][3][4][5] Proponents of the theory believe these social bots were created intentionally to help manipulate algorithms and boost search results in order to manipulate consumers.[6][7] Some proponents of the theory accuse government agencies of using bots to manipulate public perception.[2][6] The date given for this "death" is generally around 2016 or 2017.[2][8][9] The dead Internet theory has gained traction because many of the observed phenomena are quantifiable, such as increased bot traffic, but the literature on the subject does not support the full theory.[2][4][10] The dead Internet theory's exact origin is difficult to pinpoint. In 2021, a post titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on the esoteric board of the forum Agora Road's Macintosh Cafe by a user named "IlluminatiPirate",[11] claiming to build on previous posts from the same board and from Wizardchan,[2] and marking the term's spread beyond these initial imageboards.[2][12] The conspiracy theory has entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels.[2] It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago".[2] This article has been widely cited by other articles on the topic.[13][12] The dead Internet theory has two main components: that organic human activity on the web has been displaced by bots and algorithmically curated search results, and that state actors are doing this in a coordinated effort to manipulate the human population.[3][14][15] The first part of the theory, that bots create much of the content on the Internet and perhaps contribute more than organic human content, has been a concern for a while, with the original post by "IlluminatiPirate" citing the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine.[2][16][14] The theory goes on to claim that Google and other search engines are censoring the Web by filtering out undesirable content and limiting what is indexed and presented in search results.[3] While Google may suggest that there are millions of search results for a query, the results available to a user do not reflect that.[3] This problem is exacerbated by the phenomenon known as link rot, which occurs when content at a website becomes unavailable and all links to it on other sites break.[3] This has led to the theory that Google is a Potemkin village, and that the searchable Web is much smaller than we are led to believe.[3] The dead Internet theory suggests that this is part of a conspiracy to limit users to curated, and potentially artificial, content online. The second half of the theory builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally limiting users to curated, and potentially artificial, AI-generated content in order to manipulate the human population for a variety of reasons.[2][14][15][3] In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory itself focusing on the United States government being responsible for this, stating: "The U.S.
government is engaging in an artificial intelligence-powered gaslighting of the entire world population."[2][6] Caroline Busta, founder of the media platform New Models, was quoted in a 2021 article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", even if there are legitimate criticisms involving bot traffic and the integrity of the Internet, but said she does agree with the theory's "overarching idea".[2] In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta.[6] In 2024, the dead Internet theory was sometimes used to refer to the observable increase in content generated via large language models (LLMs) such as ChatGPT appearing in popular Internet spaces, without mention of the full theory.[1][17][18][19] A 2025 article by Thomas Sommerer explores this portion of the dead Internet theory, with Sommerer calling the displacement of human-generated content by artificial content "an inevitable event".[18] Sommerer states that the dead Internet theory is not scientific in nature, but reflects the public perception of the Internet.[18] Another article, in the Journal of Cancer Education, discussed the impact of the perception of the dead Internet theory in online cancer support forums, focusing specifically on the psychological impact on patients who find that support is coming from an LLM and not a genuine human.[19] The article also discussed the problems that could emerge in training data for LLMs from using AI-generated content to train them.[19] Generative pre-trained transformers (GPTs) are a class of large language models (LLMs) that employ artificial neural networks to produce human-like content.[20][21] The first of these to be well known was developed by OpenAI.[22] These models have created significant controversy. For example, Timothy Shoup of the Copenhagen Institute for Futures Studies said in 2022, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable".[23] He predicted that in such a scenario, 99% to 99.9% of content online might be AI-generated by 2025 to 2030.[23] These predictions have been used as evidence for the dead Internet theory.[13] In 2024, Google reported that its search results were being inundated with websites that "feel like they were created for search engines instead of people".[24] In correspondence with Gizmodo, a Google spokesperson acknowledged the role of generative AI in the rapid proliferation of such content and that it could displace more valuable human-made alternatives.[25] Bots using LLMs are anticipated to increase the amount of spam, and run the risk of creating a situation where bots interacting with each other create "self-replicating prompts" that result in loops only human users could disrupt.[5] ChatGPT is an AI chatbot whose late 2022 release to the general public led journalists to call the dead Internet theory potentially more realistic than before.[8][26] Before ChatGPT's release, the dead Internet theory mostly emphasized government organizations, corporations, and tech-literate individuals.
ChatGPT gives the average Internet user access to large language models.[8][26] This technology caused concern that the Internet would become filled with content created through the use of AI that would drown out organic human content.[8][26][27][5][28] In 2016, the security firm Imperva released a report on bot traffic and found that automated programs were responsible for 52% of web traffic.[29][30] This report has been used as evidence in reports on the dead Internet theory.[2] Imperva's report for 2023 found that 49.6% of Internet traffic was automated, a 2% rise over 2022, which was partly attributed to artificial intelligence models scraping the web for training content.[31] In 2024, AI-generated images on Facebook, referred to as "AI slop", began going viral.[35][36] Subjects of these AI-generated images included various iterations of Jesus "meshed in various forms" with shrimp, flight attendants, and black children next to artwork they supposedly created. Many of these iterations have hundreds or even thousands of AI comments that say "Amen".[37][38] These images have been cited as an example of why the Internet feels "dead".[39] Sommerer discussed Shrimp Jesus in detail in his article as a symbol of the shift in the Internet, stating "Just as Jesus was supposedly the messenger for God, Shrimp Jesus is the messenger for the fatal system maneuvered ourselves into. Decoupled, proliferated, and in a state of exponential metastasis."[18] Facebook includes an option to provide AI-generated responses to group posts. Such responses appear if a user explicitly tags @MetaAI in a post, or if the post includes a question and no other users have responded to it within an hour.[40] In January 2025, interest in the theory was renewed following statements from Meta about its plans to introduce new AI-powered autonomous accounts.[41] Connor Hayes, vice-president of product for generative AI at Meta, stated, "We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do...They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform."[42] In the past, the Reddit website allowed free access to its API and data, which allowed users to employ third-party moderation apps and train AI in human interaction.[27] In 2023, the company moved to charge for access to its user dataset. Companies training AI are expected to continue to use this data for training future AI.[citation needed] As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts.[27] Professor Toby Walsh, a computer scientist at the University of New South Wales, said in an interview with Business Insider that training the next generation of AI on content created by previous generations could cause the content to suffer.[27] University of South Florida professor John Licato compared this situation of AI-generated web content flooding Reddit to the dead Internet theory.[27] Since 2020, several Twitter accounts have posted tweets starting with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand", or "i hate texting just come live with me".[2] These posts received tens of thousands of likes, many of which are suspected to be from bot accounts.
Proponents of the dead Internet theory have used these accounts as an example.[2][12] The proportion of Twitter accounts run by bots became a major issue during Elon Musk's acquisition of the company.[44][45][46][47] Musk disputed Twitter's claim that fewer than 5% of their monetizable daily active users (mDAU) were bots.[44][48] Musk commissioned the company Cyabra to estimate what percentage of Twitter accounts were bots, with one study estimating 13.7% and another estimating 11%.[44] CounterAction, another firm commissioned by Musk, estimated that 5.3% of accounts were bots.[49] Some bot accounts provide services, such as one noted bot that can provide stock prices when asked, while others troll, spread misinformation, or try to scam users.[48] Believers in the dead Internet theory have pointed to this incident as evidence.[50] In 2024, TikTok began discussing offering the use of virtual influencers to advertising agencies.[15] In a 2024 article in Fast Company, journalist Michael Grothaus linked this and other AI-generated content on social media to the dead Internet theory.[15] In this article, he referred to the content as "AI-slime".[15] On YouTube, there is an online market for fake views to boost a video's credibility and reach broader audiences.[51] At one point, fake views were so prevalent that some engineers were concerned YouTube's algorithm for detecting them would begin to treat the fake views as the default and start misclassifying real ones.[51][2] YouTube engineers coined the term "the Inversion" to describe this phenomenon.[51][16][28] YouTube bots and the fear of "the Inversion" were cited as support for the dead Internet theory in a thread on the Internet forum Melonland.[2] SocialAI, an app created on September 18, 2024 by Michael Sayman, was designed for chatting only with AI bots, without any human interaction.[52] An article on the Ars Technica website linked SocialAI to the dead Internet theory.[52][53] The dead Internet theory has been discussed among users of the social media platform Twitter. Users have noted that bot activity has affected their experience.[2] Numerous YouTube channels and online communities, including the Linus Tech Tips forums and the Joe Rogan subreddit, have covered the dead Internet theory, which has helped to advance the idea into mainstream discourse.[2] There has also been discussion of and memes about this topic on the app TikTok, because AI-generated content has become more mainstream.[attribution needed]
https://en.wikipedia.org/wiki/Dead_internet_theory
Radiation hardening is the process of making electronic components and circuits resistant to damage or malfunction caused by high levels of ionizing radiation (particle radiation and high-energy electromagnetic radiation),[1] especially for environments in outer space (especially beyond low Earth orbit), around nuclear reactors and particle accelerators, or during nuclear accidents or nuclear warfare. Most semiconductor electronic components are susceptible to radiation damage, and radiation-hardened (rad-hard) components are based on their non-hardened equivalents, with some design and manufacturing variations that reduce the susceptibility to radiation damage. Due to the low demand and the extensive development and testing required to produce a radiation-tolerant design of a microelectronic chip, the technology of radiation-hardened chips tends to lag behind the most recent developments.[2] Such chips also typically cost more than their commercial counterparts.[2] Radiation-hardened products are typically tested to one or more resultant-effects tests, including total ionizing dose (TID), enhanced low dose rate effects (ELDRS), neutron and proton displacement damage, and single-event effects (SEEs). Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes. In the case of digital circuits, this can cause results which are inaccurate or unintelligible. This is a particularly serious problem in the design of satellites, spacecraft, future quantum computers,[3][4][5] military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened. Typical sources of exposure of electronics to ionizing radiation are the Van Allen radiation belts for satellites, nuclear reactors in power plants for sensors and control circuits, particle accelerators for control electronics (particularly particle detector devices), residual radiation from isotopes in chip packaging materials, cosmic radiation for spacecraft and high-altitude aircraft, and nuclear explosions for potentially all military and civilian electronics. Secondary particles result from the interaction of other kinds of radiation with structures around the electronic devices. Two fundamental damage mechanisms take place: Lattice displacement is caused by neutrons, protons, alpha particles, heavy ions, and very high energy gamma photons. They change the arrangement of the atoms in the crystal lattice, creating lasting damage, increasing the number of recombination centers, depleting the minority carriers, and worsening the analog properties of the affected semiconductor junctions. Counterintuitively, higher doses over a short time cause partial annealing ("healing") of the damaged lattice, leading to a lower degree of damage than the same doses delivered at low intensity over a long time (LDR, or low dose rate). This type of problem is particularly significant in bipolar transistors, which depend on minority carriers in their base regions; increased losses caused by recombination cause loss of the transistor gain (see neutron effects). Components certified as ELDRS (enhanced low dose rate sensitive)-free do not show damage with fluxes below 0.01 rad(Si)/s = 36 rad(Si)/h.
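The equivalence of the two ELDRS dose-rate figures quoted above is a simple unit conversion, which a one-line check confirms (illustrative Python):

```python
rate_per_second = 0.01                    # ELDRS threshold in rad(Si)/s
rate_per_hour = rate_per_second * 3600    # seconds per hour
print(rate_per_hour)                      # 36.0 rad(Si)/h
```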
Ionization effects are caused by charged particles, including those with energy too low to cause lattice effects. The ionization effects are usually transient, creating glitches and soft errors, but can lead to destruction of the device if they trigger other damage mechanisms (e.g., a latchup). Photocurrent caused by ultraviolet and X-ray radiation may belong to this category as well. Gradual accumulation of holes in the oxide layer in MOSFET transistors leads to worsening of their performance, up to device failure when the dose is high enough (see total ionizing dose effects). The effects can vary wildly depending on all the parameters – type of radiation, total dose and radiation flux, combination of types of radiation, and even the kind of device load (operating frequency, operating voltage, actual state of the transistor at the instant it is struck by the particle) – which makes thorough testing difficult and time-consuming, and requires many test samples. The "end-user" effects can be characterized in several groups: A neutron interacting with a semiconductor lattice will displace its atoms. This leads to an increase in the count of recombination centers and deep-level defects, reducing the lifetime of minority carriers, thus affecting bipolar devices more than CMOS ones. Bipolar devices on silicon tend to show changes in electrical parameters at levels of 10¹⁰ to 10¹¹ neutrons/cm², while CMOS devices are not affected until 10¹⁵ neutrons/cm². The sensitivity of the devices may increase together with the increasing level of integration and decreasing size of the individual structures. There is also a risk of induced radioactivity caused by neutron activation, which is a major source of noise in high energy astrophysics instruments. Induced radiation, together with residual radiation from impurities in component materials, can cause all sorts of single-event problems during the device's lifetime. GaAs LEDs, common in optocouplers, are very sensitive to neutrons. The lattice damage influences the frequency of crystal oscillators. Kinetic energy effects (namely lattice displacement) of charged particles belong here too. Total ionizing dose effects represent the cumulative damage caused by exposure to ionizing radiation over time. It is measured in rads and causes slow gradual degradation of the device's performance. A total dose greater than 5000 rads delivered to silicon-based devices in a timespan on the order of seconds to minutes will cause long-term degradation. In CMOS devices, the radiation creates electron–hole pairs in the gate insulation layers, which cause photocurrents during their recombination, and the holes trapped in the lattice defects in the insulator create a persistent gate biasing and influence the transistors' threshold voltage, making the N-type MOSFET transistors easier and the P-type ones more difficult to switch on. The accumulated charge can be high enough to keep the transistors permanently open (or closed), leading to device failure. Some self-healing takes place over time, but this effect is not too significant. This effect is the same as hot carrier degradation in high-integration high-speed electronics. Crystal oscillators are somewhat sensitive to radiation doses, which alter their frequency. The sensitivity can be greatly reduced by using swept quartz. Natural quartz crystals are especially sensitive. Radiation performance curves for TID testing may be generated for all resultant-effects testing procedures.
These curves show performance trends throughout the TID test process and are included in the radiation test report. Transient dose effects result from a brief high-intensity pulse of radiation, typically occurring during a nuclear explosion. The high radiation flux creates photocurrents in the entire body of the semiconductor, causing transistors to randomly open, changing the logical states of flip-flops and memory cells. Permanent damage may occur if the duration of the pulse is too long, or if the pulse causes junction damage or a latchup. Latchups are commonly caused by the X-ray and gamma radiation flash of a nuclear explosion. Crystal oscillators may stop oscillating for the duration of the flash due to prompt photoconductivity induced in quartz. SGEMP effects are caused by the radiation flash traveling through the equipment and causing local ionization and electric currents in the material of the chips, circuit boards, electrical cables, and cases. Single-event effects (SEE) have been studied extensively since the 1970s.[9] When a high-energy particle travels through a semiconductor, it leaves an ionized track behind. This ionization may cause a highly localized effect similar to the transient dose one: a benign glitch in output, a less benign bit flip in memory or a register, or, especially in high-power transistors, a destructive latchup and burnout. Single-event effects are important for electronics in satellites, aircraft, and other civilian and military aerospace applications. Sometimes, in circuits not involving latches, it is helpful to introduce RC time constant circuits that slow down the circuit's reaction time beyond the duration of an SEE. A single-event transient (SET) happens when the charge collected from an ionization event discharges in the form of a spurious signal traveling through the circuit, de facto the effect of an electrostatic discharge. This is considered a soft error, and is reversible. Single-event upsets (SEU), or transient radiation effects in electronics, are state changes of memory or register bits caused by a single ion interacting with the chip. They do not cause lasting damage to the device, but may cause lasting problems for a system which cannot recover from such an error. This is otherwise a reversible soft error. In very sensitive devices, a single ion can cause a multiple-bit upset (MBU) in several adjacent memory cells. SEUs can become single-event functional interrupts (SEFI) when they upset control circuits, such as state machines, placing the device into an undefined state, a test mode, or a halt, which would then need a reset or a power cycle to recover. A single-event latchup (SEL) can occur in any chip with a parasitic PNPN structure. A heavy ion or a high-energy proton passing through one of the two inner transistor junctions can turn on the thyristor-like structure, which then stays "shorted" (an effect known as latch-up) until the device is power-cycled. As the effect can happen between the power source and the substrate, a destructively high current can be involved and the part may fail. This is a hard error, and is irreversible. Bulk CMOS devices are the most susceptible. A single-event snapback is similar to an SEL but does not require the PNPN structure; it can be induced in N-channel MOS transistors switching large currents, when an ion hits near the drain junction and causes avalanche multiplication of the charge carriers. The transistor then opens and stays open, a hard error which is irreversible.
A single-event burnout (SEB) may occur in power MOSFETs when the substrate right under the source region gets forward-biased and the drain-source voltage is higher than the breakdown voltage of the parasitic structures. The resulting high current and local overheating may then destroy the device. This is a hard error, and is irreversible. Single-event gate rupture (SEGR) is observed in power MOSFETs when a heavy ion hits the gate region while a high voltage is applied to the gate. A local breakdown then happens in the insulating layer of silicon dioxide, causing local overheating and destruction (looking like a microscopic explosion) of the gate region. It can occur even in EEPROM cells during write or erase, when the cells are subjected to a comparatively high voltage. This is a hard error, and is irreversible. While proton beams are widely used for SEE testing due to their availability, at lower energies proton irradiation can often underestimate SEE susceptibility. Furthermore, proton beams expose devices to a risk of total ionizing dose (TID) failure, which can cloud proton testing results or result in premature device failure. White neutron beams (ostensibly the most representative SEE test method) are usually derived from solid target-based sources, resulting in flux non-uniformity and small beam areas. White neutron beams also have some measure of uncertainty in their energy spectrum, often with high thermal neutron content. The disadvantages of both proton and spallation neutron sources can be avoided by using mono-energetic 14 MeV neutrons for SEE testing. A potential concern is that mono-energetic neutron-induced single-event effects will not accurately represent the real-world effects of broad-spectrum atmospheric neutrons. However, recent studies have indicated that, to the contrary, mono-energetic neutrons, particularly 14 MeV neutrons, can be used to quite accurately understand SEE cross-sections in modern microelectronics.[10] Hardened chips are often manufactured on insulating substrates instead of the usual semiconductor wafers. Silicon on insulator (SOI) and silicon on sapphire (SOS) are commonly used. While normal commercial-grade chips can withstand between 50 and 100 gray (5 and 10 krad), space-grade SOI and SOS chips can survive doses between 1000 and 3000 gray (100 and 300 krad).[11][12] At one time many 4000 series chips were available in radiation-hardened versions (RadHard).[13] While SOI eliminates latchup events, TID and SEE hardness are not guaranteed to be improved.[14] Choosing a substrate with a wide band gap, such as silicon carbide or gallium nitride, gives higher tolerance to deep-level defects.[citation needed] Use of a special process node provides increased radiation resistance.[15] Due to the high development costs of new radiation-hardened processes, the smallest "true" rad-hard process (RHBP, rad-hard by process) was 150 nm as of 2016; however, rad-hard 65 nm FPGAs were available that used some of the techniques used in "true" rad-hard processes (RHBD, rad-hard by design).[16] As of 2019, 110 nm rad-hard processes are available.[17] Bipolar integrated circuits generally have higher radiation tolerance than CMOS circuits. The low-power Schottky (LS) 5400 series can withstand 1000 krad, and many ECL devices can withstand 10,000 krad.[13] Using edgeless CMOS transistors, which have an unconventional physical construction, together with an unconventional physical layout, can also be effective.[18] Magnetoresistive RAM, or MRAM, is considered a likely candidate to provide radiation-hardened, rewritable, non-volatile conductor memory.
Physical principles and early tests suggest that MRAM is not susceptible to ionization-induced data loss.[19] Capacitor-based DRAM is often replaced by the more rugged (but larger and more expensive) SRAM. Radiation-tolerant SRAM cells have more transistors per cell than the usual 4T or 6T, which makes the cells more tolerant to SEUs at the cost of higher power consumption and size.[20][16] Shielding the package against radioactivity is a straightforward way to reduce exposure of the bare device.[21] To protect against neutron radiation and the neutron activation of materials, it is possible to shield the chips themselves by the use of depleted boron (consisting only of the isotope boron-11) in the borophosphosilicate glass passivation layer protecting the chips, as the naturally prevalent boron-10 readily captures neutrons and undergoes alpha decay (see soft error). Error correcting code memory (ECC memory) uses redundant bits to check for and possibly correct corrupted data. Since radiation damages memory content even when the system is not accessing the RAM, a "scrubber" circuit must continuously sweep the RAM, reading out the data, checking the redundant bits for data errors, then writing back any corrections. Redundant elements can be used at the system level. Three separate microprocessor boards may independently compute an answer to a calculation and compare their answers. Any system that produces a minority result will recalculate. Logic may be added such that if repeated errors occur from the same system, that board is shut down. Redundant elements may also be used at the circuit level.[22] A single bit may be replaced with three bits and separate "voting logic" for each bit to continuously determine its result (triple modular redundancy; see the sketch below). This increases the area of a chip design by a factor of 5, so it must be reserved for smaller designs. But it has the secondary advantage of also being "fail-safe" in real time. In the event of a single-bit failure (which may be unrelated to radiation), the voting logic will continue to produce the correct result without resorting to a watchdog timer. System-level voting between three separate processor systems will generally need to use some circuit-level voting logic to perform the votes between the three processor systems. Hardened latches may be used.[23] A watchdog timer will perform a hard reset of a system unless some sequence is performed that generally indicates the system is alive, such as a write operation from an onboard processor. During normal operation, software schedules a write to the watchdog timer at regular intervals to prevent the timer from running out. If radiation causes the processor to operate incorrectly, it is unlikely the software will work correctly enough to clear the watchdog timer. The watchdog eventually times out and forces a hard reset of the system. This is considered a last resort among radiation-hardening methods. Radiation-hardened and radiation-tolerant components are often used in military and aerospace applications, including point-of-load (POL) applications, satellite system power supplies, step-down switching regulators, microprocessors, FPGAs,[24] FPGA power sources, and high-efficiency, low-voltage subsystem power supplies. However, not all military-grade components are radiation hardened. For example, the US MIL-STD-883 features many radiation-related tests, but has no specification for single-event latchup frequency. The Fobos-Grunt space probe may have failed due to a similar assumption.[14]
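As an illustration of the majority-voting logic behind triple modular redundancy, here is a minimal software sketch in Python (real designs implement this as hardware voting gates; the function name is ours):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies of a word: each output bit
    takes the value held by at least two copies, masking a single upset."""
    return (a & b) | (a & c) | (b & c)

# A single-event upset flips one bit in one copy; the vote masks it.
word = 0b1011_0010
upset = word ^ 0b0000_1000            # one copy suffers a bit flip
assert tmr_vote(word, upset, word) == word
```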
The market size for radiation-hardened electronics used in space applications was estimated to be $2.35 billion in 2021. A newer study has estimated that this will reach approximately $4.76 billion by the year 2032.[25][26] In telecommunication, the term nuclear hardness has the following meanings: 1) an expression of the extent to which the performance of a system, facility, or device is expected to degrade in a given nuclear environment; 2) the physical attributes of a system or electronic component that will allow survival in an environment that includes nuclear radiation and electromagnetic pulses (EMP).
https://en.wikipedia.org/wiki/Radiation_hardening
Virtual collective consciousness (VCC) is a term rebooted and promoted by two behavioral scientists, Yousri Marzouki and Olivier Oullier, in their 2012 Huffington Post article titled "Revolutionizing Revolutions: Virtual Collective Consciousness and the Arab Spring",[1] after its first appearance in 1999-2000.[2] VCC is now defined as an internal knowledge catalyzed by social media platforms and shared by a plurality of individuals driven by the spontaneity, the homogeneity, and the synchronicity of their online actions.[3] VCC occurs when a large group of persons, brought together by a social media platform, think and act with one mind and share collective emotions.[4] Thus, they are able to coordinate their efforts efficiently and can rapidly spread their word to a worldwide audience.[5] When interviewed about the concept of VCC that appeared in the book Hyperconnectivity and the Future of Internet Communication, which he edited,[6] Professor of Pervasive Computing Adrian David Cheok mentioned the following: "The idea of a global (collective) virtual consciousness is a bottom-up process and a rather emergent property resulting from a momentum of complex interactions taking place in social networks. This kind of collective behaviour (or intelligence) results from a collision between a physical world and a virtual world and can have a real impact in our life by driving collective action."[7] In 1999-2000, Richard Glen Boire[2] provided a cursory mention, and the only occurrence, of the term[citation needed][original research?] "virtual collective consciousness" in his text, as follows: "The trend of technology is to overcome the limitations of the human body. And, the Web has been characterized as a virtual collective consciousness and unconsciousness." The recent definition of VCC evolved from the first empirical study that provided a cyberpsychological insight into the contribution of Facebook to the 2011 Tunisian revolution. In this study, the concept was originally called "collective cyberconsciousness".[8] The latter is an extension of the idea of "collective consciousness" coupled with "citizen media" usage. The authors of this study also drew a parallel between this original definition of VCC and other comparable concepts such as Durkheim's collective representation, Žižek's "collective mind",[9] or Boguta's "new collective consciousness", which he used to describe the computational history of the Internet shutdown during the Egyptian revolution.[10] Since VCC is the byproduct of a network's successful actions, these actions must be timely, acute, rapid, domain-specific, and purpose-oriented to achieve their goal. Before reaching a momentum of complexity, each collective behavior starts with a spark that triggers a chain of events leading to a crystallized stance of a tremendous number of interactions.[11] Thus, VCC is a global pattern emerging from these individual actions. In 2012, the term virtual collective consciousness resurfaced and was brought to light after its applications were extended to the Egyptian case and to the major impact of social networking on the success of the so-called Arab Spring.[1][12] Moreover, the acronym VCC was suggested to identify the theoretical framework covering online behaviors leading to a virtual collective consciousness.
Hence, online social networks have provided a new and faster way of establishing or modifying the "collective consciousness" that was paramount to the 2011 uprisings in the Arab world.[13][14] Various theoretical references, ranging from sociology to computer science, were mentioned in order to account for the key features that render the framework for a virtual collective consciousness. The following list is not exhaustive, but the references it contains are often highlighted:

Besides the studied effect of social networking on the Tunisian and Egyptian revolutions, the former via Facebook and the latter via Twitter, other applications have been studied through the prism of the VCC framework:
https://en.wikipedia.org/wiki/Virtual_collective_consciousness
In computing, linked data is structured data which is interlinked with other data so it becomes more useful through semantic queries. It builds upon standard Web technologies such as HTTP, RDF and URIs, but rather than using them to serve web pages only for human readers, it extends them to share information in a way that can be read automatically by computers. Part of the vision of linked data is for the Internet to become a global database.[1]

Tim Berners-Lee, director of the World Wide Web Consortium (W3C), coined the term in a 2006 design note about the Semantic Web project.[2] Linked data may also be open data, in which case it is usually described as Linked Open Data.[3]

In his 2006 "Linked Data" note, Tim Berners-Lee outlined four principles of linked data, paraphrased along the following lines:[2]

Tim Berners-Lee later restated these principles at a 2009 TED conference, again paraphrased along the following lines:[4]

Thus, we can identify the following components as essential to a global Linked Data system as envisioned, and to any actual Linked Data subset within it:

Linked open data are linked data that are open data.[6][7][8] Tim Berners-Lee gives the clearest definition of linked open data, as differentiated from linked data: Linked Open Data (LOD) is Linked Data which is released under an open license, which does not impede its reuse for free. Large linked open data sets include DBpedia, Wikibase, Wikidata and Open ICEcat.

In 2010, Tim Berners-Lee suggested a 5-star scheme for grading the quality of open data on the web, for which the highest ranking is Linked Open Data:[11]

The term "linked open data" has been in use since at least February 2007, when the "Linking Open Data" mailing list[12] was created.[13] The mailing list was initially hosted by the SIMILE project[14] at the Massachusetts Institute of Technology.

The goal of the W3C Semantic Web Education and Outreach group's Linking Open Data community project is to extend the Web with a data commons by publishing various open datasets as RDF on the Web, and by setting RDF links between data items from different data sources. In October 2007, datasets consisted of over two billion RDF triples, which were interlinked by over two million RDF links.[16][17] By September 2011 this had grown to 31 billion RDF triples, interlinked by around 504 million RDF links. A detailed statistical breakdown was published in 2014.[18]

There are a number of European Union projects involving linked data. These include the Linked Open Data Around The Clock (LATC) project,[19] the AKN4EU project for machine-readable legislative data,[20] the PlanetData project,[21] the DaPaaS (Data-and-Platform-as-a-Service) project,[22] and the Linked Open Data 2 (LOD2) project.[23][24][25] Data linking is one of the main goals of the EU Open Data Portal, which makes available thousands of datasets for anyone to reuse and link.

Ontologies are formal descriptions of data structures. Some of the better known ontologies are:

Clickable diagrams that show the individual datasets and their relationships within the DBpedia-spawned LOD cloud are available.[30][31]
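To make the triple model underlying linked data concrete, here is a minimal sketch in Python (written for this article; the example.org URIs are hypothetical, not a real vocabulary). It shows statements as subject–predicate–object triples whose subjects and predicates are HTTP URIs, and a toy query that follows a link from one resource to another:

```python
# Each linked-data statement is a (subject, predicate, object) triple.
# Subjects and predicates are HTTP URIs, so they can be looked up;
# the URIs below are hypothetical examples.
triples = [
    ("http://example.org/person/alice", "http://example.org/vocab/name", "Alice"),
    ("http://example.org/person/alice", "http://example.org/vocab/knows",
     "http://example.org/person/bob"),
    ("http://example.org/person/bob", "http://example.org/vocab/name", "Bob"),
]

def objects(subject, predicate):
    """A toy semantic query: all objects matching a subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Following the 'knows' link from Alice leads to further data about Bob.
for friend in objects("http://example.org/person/alice",
                      "http://example.org/vocab/knows"):
    print(friend, "is named", objects(friend, "http://example.org/vocab/name"))
```

In a real deployment the triples would be published as RDF and the links dereferenced over HTTP, which is what lets independently published datasets join into the global graph described above.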
https://en.wikipedia.org/wiki/Linked_data
In mathematics, Midy's theorem, named after French mathematician E. Midy,[1] is a statement about the decimal expansion of fractions a/p where p is a prime and a/p has a repeating decimal expansion with an even period (sequence A028416 in the OEIS). If the period of the decimal representation of a/p is 2n, so that

$$\frac{a}{p}=0.\overline{a_1 a_2 a_3 \dots a_n a_{n+1} \dots a_{2n}},$$

then the digits in the second half of the repeating decimal period are the 9s complement of the corresponding digits in its first half. In other words,

$$a_i + a_{i+n} = 9, \qquad a_1 \dots a_n + a_{n+1} \dots a_{2n} = 10^n - 1.$$

For example,

$$\frac{1}{13}=0.\overline{076923} \quad\text{and}\quad 076+923=999,$$
$$\frac{1}{17}=0.\overline{0588235294117647} \quad\text{and}\quad 05882352+94117647=99999999.$$

If k is any divisor of h (where h is the number of digits of the period of the decimal expansion of a/p, where p is again a prime), then Midy's theorem can be generalised as follows. The extended Midy's theorem[2] states that if the repeating portion of the decimal expansion of a/p is divided into k-digit numbers, then their sum is a multiple of 10^k − 1.

For example, $\tfrac{1}{19}=0.\overline{052631578947368421}$ has a period of 18. Dividing the repeating portion into 6-digit numbers and summing them gives $052631+578947+368421=999999$. Similarly, dividing the repeating portion into 3-digit numbers and summing them gives $052+631+578+947+368+421=2997=3\times 999$.

Midy's theorem and its extension do not depend on special properties of the decimal expansion, but work equally well in any base b, provided we replace 10^k − 1 with b^k − 1 and carry out addition in base b. For example, in octal

$$\frac{1}{19}=0.\overline{032745}_8, \qquad 032_8+745_8=777_8, \qquad 03_8+27_8+45_8=77_8.$$

In dozenal (which writes ten and eleven with inverted two and three, respectively), analogous identities hold.

Short proofs of Midy's theorem can be given using results from group theory. However, it is also possible to prove Midy's theorem using elementary algebra and modular arithmetic. Let p be a prime and a/p be a fraction between 0 and 1. Suppose the expansion of a/p in base b has a period of ℓ, so

$$\frac{a}{p}=[0.\overline{a_1 a_2 \dots a_\ell}]_b
\;\Rightarrow\; \frac{a}{p}\,b^\ell=[a_1 a_2 \dots a_\ell.\overline{a_1 a_2 \dots a_\ell}]_b
\;\Rightarrow\; \frac{a}{p}\,b^\ell=N+[0.\overline{a_1 a_2 \dots a_\ell}]_b=N+\frac{a}{p}
\;\Rightarrow\; \frac{a}{p}=\frac{N}{b^\ell-1},$$

where N is the integer whose expansion in base b is the string a₁a₂...a_ℓ.

Note that b^ℓ − 1 is a multiple of p because (b^ℓ − 1)a/p is an integer. Also b^n − 1 is not a multiple of p for any value of n less than ℓ, because otherwise the repeating period of a/p in base b would be less than ℓ.

Now suppose that ℓ = hk. Then b^ℓ − 1 is a multiple of b^k − 1. (To see this, substitute x for b^k; then b^ℓ = x^h and x − 1 is a factor of x^h − 1.) Say b^ℓ − 1 = m(b^k − 1), so

$$\frac{a}{p}=\frac{N}{m(b^k-1)}.$$

But b^ℓ − 1 is a multiple of p; b^k − 1 is not a multiple of p (because k is less than ℓ); and p is a prime; so m must be a multiple of p and

$$\frac{am}{p}=\frac{N}{b^k-1}$$

is an integer. In other words,

$$N\equiv 0 \pmod{b^k-1}.$$

Now split the string a₁a₂...a_ℓ into h equal parts of length k, and let these represent the integers N₀ ... N_{h−1} in base b, so that

$$\begin{aligned}N_{h-1}&=[a_1\dots a_k]_b\\N_{h-2}&=[a_{k+1}\dots a_{2k}]_b\\&\ \ \vdots\\N_0&=[a_{\ell-k+1}\dots a_\ell]_b.\end{aligned}$$

To prove Midy's extended theorem in base b we must show that the sum of the h integers N_i is a multiple of b^k − 1. Since b^k is congruent to 1 modulo b^k − 1, any power of b^k will also be congruent to 1 modulo b^k − 1. So

$$N=\sum_{i=0}^{h-1}N_i b^{ik}=\sum_{i=0}^{h-1}N_i (b^k)^i
\;\Rightarrow\; N\equiv\sum_{i=0}^{h-1}N_i \pmod{b^k-1}
\;\Rightarrow\; \sum_{i=0}^{h-1}N_i\equiv 0 \pmod{b^k-1},$$

which proves Midy's extended theorem in base b.

To prove the original Midy's theorem, take the special case where h = 2. Note that N₀ and N₁ are both represented by strings of k digits in base b, so both satisfy

$$0\leq N_i\leq b^k-1.$$

N₀ and N₁ cannot both equal 0 (otherwise a/p = 0) and cannot both equal b^k − 1 (otherwise a/p = 1), so

$$0<N_0+N_1<2(b^k-1),$$

and since N₀ + N₁ is a multiple of b^k − 1, it follows that

$$N_0+N_1=b^k-1.$$

From the above, am/p = N/(b^k − 1) is an integer, and thus m ≡ 0 (mod p). For k = ℓ/2 we have m = b^{ℓ/2} + 1, and thus

$$b^{\ell/2}+1\equiv 0 \pmod{p}.$$

For k = ℓ/3 we have m = b^{2ℓ/3} + b^{ℓ/3} + 1, so

$$b^{2\ell/3}+b^{\ell/3}+1\equiv 0 \pmod{p},$$

and so on.
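Both the theorem and its extension are easy to check computationally. The following Python sketch (written for this article, not taken from any source) computes the repetend of a/p by long division and verifies the nines'-complement property and the extended block-sum property for 1/17:

```python
def repetend(a, p, base=10):
    """Digits of the repeating period of a/p in the given base (p prime, p not dividing base)."""
    digits, start = [], a % p
    r = start
    while True:
        r *= base
        digits.append(r // p)
        r %= p
        if r == start:          # remainder repeats, so the period is complete
            return digits

def block_sum(digits, k):
    """Split the repetend into k-digit blocks and sum them as integers."""
    return sum(int("".join(map(str, digits[i:i + k])))
               for i in range(0, len(digits), k))

d = repetend(1, 17)                                  # 0588235294117647, period 16
n = len(d) // 2
assert all(d[i] + d[i + n] == 9 for i in range(n))   # Midy: digits are 9s complements
assert block_sum(d, 8) == 10**8 - 1                  # the halves sum to 99999999
assert block_sum(d, 4) % (10**4 - 1) == 0            # extended Midy for k = 4
```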
https://en.wikipedia.org/wiki/Midy%27s_theorem
The RSA (Rivest–Shamir–Adleman) cryptosystem is a public-key cryptosystem, one of the oldest widely used for secure data transmission. The initialism "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ), the British signals intelligence agency, by the English mathematician Clifford Cocks. That system was declassified in 1997.[2]

In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private). An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone via the public key, but can only be decrypted by someone who knows the private key.[1]

The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question.[3] There are no published methods to defeat the system if a large enough key is used.

RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys for symmetric-key cryptography, which are then used for bulk encryption and decryption.

The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared secret key created from exponentiation of some number modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well studied at the time.[4] Moreover, like Diffie–Hellman, RSA is based on modular exponentiation.

Ron Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts Institute of Technology made several attempts over the course of a year to create a function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements.[5] In April 1977, they spent Passover at the house of a student and drank a good deal of wine before returning to their homes at around midnight.[6] Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA – the initials of their surnames in the same order as their paper.[7]

Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), described a similar system in an internal document in 1973.[8] However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His ideas and concepts were not revealed until 1997 because of the work's top-secret classification.
Kid-RSA (KRSA) is a simplified, insecure public-key cipher published in 1997, designed for educational purposes. Some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers, analogous to simplified DES.[9][10][11][12][13]

A patent describing the RSA algorithm was granted to MIT on 20 September 1983: U.S. patent 4,405,829, "Cryptographic communications system and method". From DWPI's abstract of the patent:

The system includes a communications channel coupled to at least one terminal having an encoding device and to at least one terminal having a decoding device. A message-to-be-transferred is enciphered to ciphertext at the encoding terminal by encoding the message as a number M in a predetermined set. That number is then raised to a first predetermined power (associated with the intended receiver) and finally computed. The remainder or residue, C, is... computed when the exponentiated number is divided by the product of two predetermined prime numbers (associated with the intended receiver).

A detailed description of the algorithm was published in August 1977, in Scientific American's Mathematical Games column.[7] This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside the United States. Had Cocks' work been publicly known, a patent in the United States would not have been legal either. When the patent was issued, terms of patent were 17 years. The patent was about to expire on 21 September 2000, but RSA Security released the algorithm to the public domain on 6 September 2000.[14]

The RSA algorithm involves four steps: key generation, key distribution, encryption, and decryption.

A basic principle behind RSA is the observation that it is practical to find three very large positive integers e, d, and n, such that for all integers m (0 ≤ m < n), both $(m^e)^d$ and $m$ have the same remainder when divided by $n$ (they are congruent modulo $n$):

$$(m^e)^d \equiv m \pmod{n}.$$

However, when given only e and n, it is extremely difficult to find d. The integers n and e comprise the public key, d represents the private key, and m represents the message. The modular exponentiation to e and d corresponds to encryption and decryption, respectively. In addition, because the two exponents can be swapped, the private and public key can also be swapped, allowing for message signing and verification using the same algorithm.

The keys for the RSA algorithm are generated in the following way: choose two large distinct primes p and q; compute n = pq and λ(n) = lcm(p − 1, q − 1); choose an integer e with 1 < e < λ(n) that is coprime to λ(n); and determine d as the modular multiplicative inverse of e modulo λ(n), so that d⋅e ≡ 1 (mod λ(n)).

The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the private (or decryption) exponent d, which must be kept secret. p, q, and λ(n) must also be kept secret because they can be used to calculate d. In fact, they can all be discarded after d has been computed.[16]

In the original RSA paper,[1] the Euler totient function φ(n) = (p − 1)(q − 1) is used instead of λ(n) for calculating the private exponent d. Since φ(n) is always divisible by λ(n), the algorithm works as well. The possibility of using the Euler totient function also results from Lagrange's theorem applied to the multiplicative group of integers modulo pq. Thus any d satisfying d⋅e ≡ 1 (mod φ(n)) also satisfies d⋅e ≡ 1 (mod λ(n)). However, computing d modulo φ(n) will sometimes yield a result that is larger than necessary (i.e. d > λ(n)).
Most implementations of RSA will accept exponents generated using either method (if they use the private exponent d at all, rather than using the optimized decryption method based on the Chinese remainder theorem described below), but some standards such as FIPS 186-4 (Section B.3.1) may require that d < λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced modulo λ(n) to obtain a smaller equivalent exponent.

Since any common factors of (p − 1) and (q − 1) are present in the factorisation of n − 1 = pq − 1 = (p − 1)(q − 1) + (p − 1) + (q − 1),[17] it is recommended that (p − 1) and (q − 1) have only very small common factors, if any, besides the necessary 2.[1][18][19][20]

Note: The authors of the original RSA paper carry out the key generation by choosing d and then computing e as the modular multiplicative inverse of d modulo φ(n), whereas most current implementations of RSA, such as those following PKCS#1, do the reverse (choose e and compute d). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.[1][21]

Suppose that Bob wants to send information to Alice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message. To enable Bob to send his encrypted messages, Alice transmits her public key (n, e) to Bob via a reliable, but not necessarily secret, route. Alice's private key (d) is never distributed.

After Bob obtains Alice's public key, he can send a message M to Alice. To do it, he first turns M (strictly speaking, the un-padded plaintext) into an integer m (strictly speaking, the padded plaintext), such that 0 ≤ m < n, by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c, using Alice's public key e, corresponding to

$$c \equiv m^e \pmod{n}.$$

This can be done reasonably quickly, even for very large numbers, using modular exponentiation. Bob then transmits c to Alice. Note that at least nine values of m will yield a ciphertext c equal to m,[a] but this is very unlikely to occur in practice.

Alice can recover m from c by using her private key exponent d, computing

$$c^d \equiv (m^e)^d \equiv m \pmod{n}.$$

Given m, she can recover the original message M by reversing the padding scheme.

Here is an example of RSA encryption and decryption:[b] the public key is (n = 3233, e = 17). For a padded plaintext message m, the encryption function is

$$c(m) = m^e \bmod n = m^{17} \bmod 3233.$$

The private key is (n = 3233, d = 413). For an encrypted ciphertext c, the decryption function is

$$m(c) = c^d \bmod n = c^{413} \bmod 3233.$$

For instance, in order to encrypt m = 65, one calculates

$$c = 65^{17} \bmod 3233 = 2790.$$

To decrypt c = 2790, one calculates

$$m = 2790^{413} \bmod 3233 = 65.$$

Both of these calculations can be computed efficiently using the square-and-multiply algorithm for modular exponentiation.
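The worked example above is small enough to reproduce directly. The following Python sketch (illustrative only; real keys are vastly larger and require padding) regenerates the toy key from p = 61 and q = 53 and checks the encryption of m = 65:

```python
from math import gcd

# Toy parameters from the worked example above -- far too small to be secure.
p, q = 61, 53
n = p * q                                      # 3233, the public modulus
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # λ(n) = lcm(60, 52) = 780
e = 17                                         # public exponent, coprime to λ(n)
assert gcd(e, lam) == 1
d = pow(e, -1, lam)                            # private exponent (Python 3.8+): 413

m = 65                                         # padded plaintext as an integer, 0 <= m < n
c = pow(m, e, n)                               # encryption: m^17 mod 3233 = 2790
assert c == 2790
assert pow(c, d, n) == m                       # decryption: c^413 mod 3233 recovers 65
```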
In real-life situations the primes selected would be much larger; in our example it would be trivial to factor n = 3233 (obtained from the freely available public key) back to the primes p and q. The exponent e, also from the public key, is then inverted to get d, thus acquiring the private key.

Practical implementations use the Chinese remainder theorem to speed up the calculation using the moduli of the factors (mod pq using mod p and mod q). The values dp, dq and qinv, which are part of the private key, are computed as follows:

$$\begin{aligned} d_p &= d \bmod (p-1) = 413 \bmod (61-1) = 53,\\ d_q &= d \bmod (q-1) = 413 \bmod (53-1) = 49,\\ q_{\text{inv}} &= q^{-1} \bmod p = 53^{-1} \bmod 61 = 38\\ &\Rightarrow (q_{\text{inv}} \times q) \bmod p = 38 \times 53 \bmod 61 = 1. \end{aligned}$$

Here is how dp, dq and qinv are used for efficient decryption (encryption is efficient by choice of a suitable d and e pair):

$$\begin{aligned} m_1 &= c^{d_p} \bmod p = 2790^{53} \bmod 61 = 4,\\ m_2 &= c^{d_q} \bmod q = 2790^{49} \bmod 53 = 12,\\ h &= (q_{\text{inv}} \times (m_1 - m_2)) \bmod p = (38 \times -8) \bmod 61 = 1,\\ m &= m_2 + h \times q = 12 + 1 \times 53 = 65. \end{aligned}$$

Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used to sign a message.

Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces a hash value of the message, raises it to the power of d (modulo n) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power of e (modulo n) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent. This works because of exponentiation rules:

$$h = \operatorname{hash}(m), \qquad (h^e)^d = h^{ed} = h^{de} = (h^d)^e \equiv h \pmod{n}.$$

Thus the keys may be swapped without loss of generality, that is, a private key of a key pair may be used either to:

The proof of the correctness of RSA is based on Fermat's little theorem, stating that a^(p−1) ≡ 1 (mod p) for any integer a and prime p not dividing a.[note 1] We want to show that

$$(m^e)^d \equiv m \pmod{pq}$$

for every integer m when p and q are distinct prime numbers and e and d are positive integers satisfying ed ≡ 1 (mod λ(pq)).
Since λ(pq) = lcm(p − 1, q − 1) is, by construction, divisible by both p − 1 and q − 1, we can write

$$ed - 1 = h(p-1) = k(q-1)$$

for some nonnegative integers h and k.[note 2]

To check whether two numbers, such as m^(ed) and m, are congruent mod pq, it suffices (and in fact is equivalent) to check that they are congruent mod p and mod q separately.[note 3]

To show m^(ed) ≡ m (mod p), we consider two cases:

The verification that m^(ed) ≡ m (mod q) proceeds in a completely analogous way:

This completes the proof that, for any integer m and integers e, d such that ed ≡ 1 (mod λ(pq)),

$$(m^e)^d \equiv m \pmod{pq}.$$

Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem. We want to show that m^(ed) ≡ m (mod n), where n = pq is a product of two different prime numbers, and e and d are positive integers satisfying ed ≡ 1 (mod φ(n)). Since e and d are positive, we can write ed = 1 + hφ(n) for some non-negative integer h. Assuming that m is relatively prime to n, we have

$$m^{ed} = m^{1 + h\varphi(n)} = m\,(m^{\varphi(n)})^h \equiv m\,(1)^h \equiv m \pmod{n},$$

where the second-last congruence follows from Euler's theorem.

More generally, for any e and d satisfying ed ≡ 1 (mod λ(n)), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that m^(λ(n)) ≡ 1 (mod n) for all m relatively prime to n. When m is not relatively prime to n, the argument just given is invalid. This is highly improbable (only a proportion of 1/p + 1/q − 1/(pq) numbers have this property), but even in this case the desired congruence is still true: either m ≡ 0 (mod p) or m ≡ 0 (mod q), and these cases can be treated using the previous proof.

There are a number of attacks against plain RSA as described below. To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts. Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller.

RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen-ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al.[25] showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS). Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two US patents on PSS were granted (U.S.
patent 6,266,771 and U.S. patent 7,036,014); however, these patents expired on 24 July 2009 and 25 April 2010 respectively, so use of PSS no longer seems to be encumbered by patents. Note that using different RSA key pairs for encryption and signing is potentially more secure.[26]

For efficiency, many popular crypto libraries (such as OpenSSL, Java and .NET) use the following optimization, based on the Chinese remainder theorem, for decryption and signing.[27] The following values are precomputed and stored as part of the private key:

These values allow the recipient to compute the exponentiation m = c^d (mod pq) more efficiently as follows:

$$m_1 = c^{d_P} \bmod p, \qquad m_2 = c^{d_Q} \bmod q, \qquad h = q_{\text{inv}}(m_1 - m_2) \bmod p,\text{[c]} \qquad m = m_2 + hq.$$

This is more efficient than computing exponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.

The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme.[28]

The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c ≡ m^e (mod n), where (n, e) is an RSA public key and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (n, e), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes lcm(p − 1, q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; see integer factorization for a discussion of this problem.

The first RSA-512 factorization, in 1999, used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months.[29] By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just less than 5 gigabytes of disk storage was required, and about 2.5 gigabytes of RAM for the sieving process.

Rivest, Shamir, and Adleman noted[1] that Miller has shown that – assuming the truth of the extended Riemann hypothesis – finding d from n and e is as hard as factoring n into p and q (up to a polynomial time difference).[30] However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring.

As of 2020, the largest publicly known factored RSA number had 829 bits (250 decimal digits, RSA-250).[31] Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long.
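Continuing the toy example, the CRT shortcut can be sketched as follows (illustrative Python; dP, dQ and qinv are the precomputed values described above):

```python
# CRT decryption sketch for the toy key (p = 61, q = 53, d = 413, c = 2790).
p, q, d, c = 61, 53, 413, 2790
dP = d % (p - 1)                 # 53
dQ = d % (q - 1)                 # 49
qinv = pow(q, -1, p)             # 38, since 38 * 53 ≡ 1 (mod 61); Python 3.8+

m1 = pow(c, dP, p)               # smaller exponent and modulus than c^d mod n
m2 = pow(c, dQ, q)
h = (qinv * (m1 - m2)) % p       # recombination step
m = m2 + h * q                   # 65, matching the direct computation
assert m == 65
```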
In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010.[32] As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits.[33] It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing.

If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits have been shown to be practically breakable in 1999, when RSA-155 was factored by using several hundred computers, and these are now factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011.[34] A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys.[32]

In 1994, Peter Shor showed that a quantum computer – if one could ever be practically created for the purpose – would be able to factor in polynomial time, breaking RSA; see Shor's algorithm.

Finding the large primes p and q is usually done by testing random numbers of the correct size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes. The numbers p and q should not be "too close", lest the Fermat factorization for n be successful. If p − q is less than 2n^(1/4) (where n = p⋅q, which even for "small" 1024-bit values of n is 3×10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and hence such values of p or q should be discarded.

It is important that the private exponent d be large enough. Michael J. Wiener showed that if p is between q and 2q (which is quite typical) and d < n^(1/4)/3, then d can be computed efficiently from n and e.[35]

There is no known attack against small public exponents such as e = 3, provided that the proper padding is used. Coppersmith's attack has many applications in attacking RSA, specifically if the public exponent e is small and if the encrypted message is short and not padded. 65537 is a commonly used value for e; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction.

In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPMs) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.[36]

A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes p and q. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.[37][38]

They exploited a weakness unique to cryptosystems based on integer factorization. If n = pq is one public key, and n′ = p′q′ is another, then if by chance p = p′ (but q is not equal to q′), then a simple computation of gcd(n, n′) = p factors both n and n′, totally compromising both keys.
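The shared-prime failure is easy to demonstrate on toy moduli: if two public keys happen to share a prime factor, a single gcd computation exposes both private keys. A minimal Python sketch (contrived numbers, for illustration only):

```python
from math import gcd

# Two toy public moduli that, through a bad RNG, share the prime 61.
n1 = 61 * 53          # first victim's modulus
n2 = 61 * 59          # second victim's modulus

p = gcd(n1, n2)       # one Euclidean gcd recovers the shared prime
q1, q2 = n1 // p, n2 // p
assert (p, q1, q2) == (61, 53, 59)   # both moduli are now fully factored
```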
Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose q given p, instead of choosing p and q independently.

Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key n against the product of all the other keys n′ they had found (a 729-million-digit number), instead of computing each gcd(n, n′) separately, thereby achieving a very significant speedup, since after one large division the GCD problem is of normal size. Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially and then is reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings, electronic diode noise, or atmospheric noise from a radio receiver tuned between stations should solve the problem.[39]

Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.

Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver).[40] This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.

One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing c^d (mod n), Alice first chooses a secret random value r and computes (r^e c)^d (mod n). The result of this computation, after applying Euler's theorem, is r·c^d (mod n), and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.

In 1998, Daniel Bleichenbacher described the first practical adaptive chosen-ciphertext attack against RSA-encrypted messages using the PKCS #1 v1 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Sockets Layer protocol and to recover session keys.
As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks. A variant of this attack, dubbed "BERserk", came back in 2014.[41][42] It impacted the Mozilla NSS crypto library, which was used notably by Firefox and Chrome.

A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implement simultaneous multithreading (SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors. Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis",[43] the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.

A power-fault attack on RSA implementations was described in 2010.[44] The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server.

There are many details to keep in mind in order to implement RSA securely (a strong PRNG, an acceptable public exponent, etc.). This makes the implementation challenging, to the point that the book Practical Cryptography With Go suggests avoiding RSA if possible.[45]

Some cryptography libraries that provide support for RSA include:
https://en.wikipedia.org/wiki/RSA_(cryptosystem)
DHS media monitoring services is a proposed United States Department of Homeland Security database to keep track of 290,000 global news sources and media influencers in order to monitor sentiment. Privacy and free speech advocates have criticized the project's far-reaching scope, likening it to a panopticon.[1][2][3][4][5][6][7][8] The DHS has replied that "Despite what some reporters may suggest, this is nothing more than the standard practice of monitoring current events in the media. Any suggestion otherwise is fit for tin foil hat wearing, black helicopter conspiracy theorists."[9][5] The service would also look at trade and industry publications; local, national and international outlets; and social media, according to documents. The plans also encompass media coverage being tracked in more than 100 languages, including Arabic, Chinese, and Russian, with instant translation of articles into English. The DHS media monitoring plan would allow for "24/7 access to a password protected, media influencer database, including journalist, editors, correspondents, social media influencers, bloggers etc" to identify "any and all media coverage related to the Department of Homeland Security or a particular event."[10]

The DHS has noted that agencies under its purview already operate similar databases.[11] Several news organizations have noted that similar services, though narrower in scope, already exist and that the proposed DHS service would be the norm within the news industry.[12][13] Several organizations have come out in opposition to the creation of the service, including the Occupy movement[14] and the Reporters Committee for Freedom of the Press.[15]

Beginning in January 2010, the NOC launched Media Monitoring Capability (MMC) pilots using social media monitoring related to specific mission-related incidents and international events. These pilots were conducted to help fulfill the NOC's statutory responsibility to provide situational awareness and to access potentially valuable public information within the social media realm. Prior to implementation of each social media pilot, the DHS Privacy Office and OPS developed detailed standards and procedures for reviewing information on social media web sites.[16]

In February 2012, the House of Representatives held a hearing with concerns about countering cyber-terrorism, as well as other acts of criminal activity, whilst maintaining the privacy rights of Americans. The DHS was questioned on its methodology and usage of social media services. In one example, DHS used multiple social networking services, including Facebook and Twitter, three different blogs, and reader comments in newspapers to capture the reaction of residents to a possible plan to bring Guantanamo detainees to a local prison in Standish, Michigan.[17]
https://en.wikipedia.org/wiki/DHS_media_monitoring_services
American Fuzzy Lop (AFL), stylized in all lowercase as american fuzzy lop, is a free software fuzzer that employs genetic algorithms in order to efficiently increase the code coverage of the test cases. So far it has detected hundreds of significant software bugs in major free software projects, including X.Org Server,[2] PHP,[3] OpenSSL,[4][5] pngcrush, bash,[6] Firefox,[7] BIND,[8][9] Qt,[10] and SQLite.[11]

Initially released in November 2013, AFL[12] quickly became one of the most widely used fuzzers in security research. For many years after its release, AFL was considered a "state of the art" fuzzer.[13] AFL is considered "a de-facto standard for fuzzing",[14] and its release contributed significantly to the development of fuzzing as a research area.[15] AFL is widely used in academia; academic fuzzers are often forks of AFL, and AFL is commonly used as a baseline to evaluate new techniques.[16][17]

The source code of American fuzzy lop is published on GitHub. Its name is a reference to a breed of rabbit, the American Fuzzy Lop.

AFL requires the user to provide a sample command that runs the tested application and at least one small example input. The input can be fed to the tested program either via standard input or as an input file specified in the process command line. Fuzzing networked programs is currently not directly supported, although in some cases there are feasible solutions to this problem.[18] For example, in the case of an audio player, American fuzzy lop can be instructed to open a short sound file with it. Then, the fuzzer attempts to actually execute the specified command and, if that succeeds, it tries to reduce the input file to the smallest one that triggers the same behavior.

After this initial phase, AFL begins the actual process of fuzzing by applying various modifications to the input file. When the tested program crashes or hangs, this usually implies the discovery of a new bug, possibly a security vulnerability. In this case, the modified input file is saved for further user inspection.

In order to maximize the fuzzing performance, American fuzzy lop expects the tested program to be compiled with the aid of a utility program that instruments the code with helper functions which track control flow. This allows the fuzzer to detect when the target's behavior changes in response to the input. In cases when this is not possible, black-box testing is supported as well.

Fuzzers attempt to find unexpected behaviors (i.e., bugs) in a target program by repeatedly executing the program on various inputs. As described above, AFL is a gray-box fuzzer, meaning it expects instrumentation to measure code coverage to have been injected into the target program at compile time, and uses the coverage metric to direct the generation of new inputs. AFL's fuzzing algorithm has influenced many subsequent gray-box fuzzers.[20][21]

The inputs to AFL are an instrumented target program (the system under test) and a corpus, that is, a collection of inputs to the target. Inputs are also known as test cases. The algorithm maintains a queue of inputs, which is initialized to the input corpus. The overall algorithm works as follows:[22]

To generate new inputs, AFL applies various mutations to existing inputs.[23] These mutations are mostly agnostic to the input format of the target program; they generally treat the input as a simple blob of binary data. At first, AFL applies a deterministic sequence of mutations to each input. These are applied at various offsets in the input.
They include:[24][25]

After applying all available deterministic mutations, AFL moves on to havoc, a stage where between 2 and 128 mutations are applied in a row. These mutations are any of:[23]

If AFL cycles through the entire queue without generating any input that achieves new code coverage, it begins splicing. Splicing takes two inputs from the queue, truncates them at arbitrary positions, concatenates them together, and applies the havoc stage to the result.

AFL pioneered the use of binned hitcounts for measuring code coverage.[28] The author claims that this technique mitigates path explosion.[29][30] Conceptually, AFL counts the number of times a given execution of the target traverses each edge in the target's control-flow graph; the documentation refers to these edges as tuples and the counts as hitcounts. At the end of the execution, the hitcounts are binned or bucketed into the following eight buckets: 1, 2, 3, 4–7, 8–15, 16–31, 32–127, and 128 and greater. AFL maintains a global set of (tuple, binned count) pairs that have been produced by any execution thus far. An input is considered "interesting" and is added to the queue if it produces a (tuple, binned count) pair that is not yet in the global set.

In practice, the hitcounts are collected and processed using an efficient but lossy scheme. The compile-time instrumentation injects, at each branch in the control-flow graph of the target program, code conceptually similar to the sketch shown at the end of this section,[31] where <COMPILE_TIME_RANDOM> is a random integer and shared_mem is a 64-kilobyte region of memory shared between the fuzzer and the target. This representation is more fine-grained (it distinguishes between more executions) than simple block or statement coverage, but still allows for a linear-time "interestingness" test.

On the assumption that smaller inputs take less time to execute, AFL attempts to minimize or trim the test cases in the queue.[23][32] Trimming works by removing blocks from the input; if the trimmed input still results in the same coverage (see the discussion of coverage measurement above), then the original input is discarded and the trimmed input is saved in the queue. AFL selects a subset of favored inputs from the queue; non-favored inputs are skipped with some probability.[33][28]

One of the challenges American fuzzy lop had to solve involved the efficient spawning of hundreds of processes per second. Apart from the original engine that spawned every process from scratch, American fuzzy lop offers the default engine that relies heavily on the fork system call.[34][28] This can further be sped up by leveraging LLVM deferred fork server mode or the similar persistent mode, but this comes at the cost of having to modify the tested program.[35] Also, American fuzzy lop supports fuzzing the same program over the network.

American fuzzy lop features a colorful command-line interface that displays real-time statistics about the fuzzing process. Various settings may be triggered by either command-line options or environment variables. Apart from that, programs may read runtime statistics from files in a machine-readable format.

In addition to afl-fuzz and tools that can be used for binary instrumentation, American fuzzy lop features utility programs meant for monitoring of the fuzzing process. Apart from that, there are afl-cmin and afl-tmin, which can be used for test case and test corpus minimization. This can be useful when the test cases generated by afl-fuzz would be used by other fuzzers.
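The per-branch coverage-update logic referenced above was not preserved in this text. Conceptually, per AFL's documentation, each instrumented branch executes logic like the following sketch (a Python rendering, written for this article, of AFL's C-style pseudocode; the names follow the description above):

```python
import random

MAP_SIZE = 1 << 16                  # AFL's 64 KB coverage map
shared_mem = bytearray(MAP_SIZE)    # shared between the fuzzer and the target
prev_location = 0

def on_branch(compile_time_random):
    """What each instrumented branch conceptually executes."""
    global prev_location
    cur_location = compile_time_random            # fixed random ID baked into this branch
    idx = (cur_location ^ prev_location) % MAP_SIZE
    shared_mem[idx] = (shared_mem[idx] + 1) & 0xFF   # bump the edge hitcount (wraps like a u8)
    prev_location = cur_location >> 1             # shift so A->B and B->A map differently

# Simulate executing two branches A -> B; the XOR of their IDs names the edge.
A, B = random.randrange(MAP_SIZE), random.randrange(MAP_SIZE)
on_branch(A)
on_branch(B)
```

The right shift of prev_location is what makes the map directional: without it, the tuples for A→B and B→A (and for tight self-loops) would collide.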
AFL has been forked many times in order to examine new fuzzing techniques, or to apply fuzzing to different kinds of programs. A few notable forks include:

AFL++ (AFLplusplus)[43] is a community-maintained fork of AFL created due to the relative inactivity of Google's upstream AFL development since September 2017. It includes new features and speedups.[44] Google's OSS-Fuzz initiative, which provides free fuzzing services to open source software, replaced its AFL option with AFL++ in January 2021.[45][46]
https://en.wikipedia.org/wiki/American_fuzzy_lop_(fuzzer)
In computer science, static program analysis (also known as static analysis or static simulation) is the analysis of computer programs performed without executing them, in contrast with dynamic program analysis, which is performed on programs during their execution.[1][2]

The term is usually applied to analysis performed by an automated tool, with human analysis typically being called "program understanding", program comprehension, or code review. In the last of these, software inspection and software walkthroughs are also used. In most cases the analysis is performed on some version of a program's source code, and, in other cases, on some form of its object code.

The sophistication of the analysis performed by tools varies from those that only consider the behaviour of individual statements and declarations[3] to those that include the complete source code of a program in their analysis. The uses of the information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal methods that mathematically prove properties about a given program (e.g., that its behaviour matches that of its specification). Software metrics and reverse engineering can be described as forms of static analysis. Deriving software metrics and static analysis are increasingly deployed together, especially in the creation of embedded systems, by defining so-called software quality objectives.[4]

A growing commercial use of static analysis is in the verification of properties of software used in safety-critical computer systems and locating potentially vulnerable code.[5] For example, the following industries have identified the use of static code analysis as a means of improving the quality of increasingly sophisticated and complex software:

A study in 2012 by VDC Research reported that 28.7% of the embedded software engineers surveyed used static analysis tools and 39.7% expected to use them within 2 years.[9] A study from 2010 found that 60% of the interviewed developers in European research projects made at least some use of the basic static analyzers built into their IDE. However, only about 10% employed an additional (and perhaps more advanced) analysis tool.[10]

In the application security industry the name static application security testing (SAST) is also used. SAST is an important part of Security Development Lifecycles (SDLs) such as the SDL defined by Microsoft[11] and a common practice in software companies.[12]

The OMG (Object Management Group) published a study regarding the types of software analysis required for software quality measurement and assessment. This document, "How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations", describes three levels of software analysis.[13]

A further level of software analysis can be defined: formal methods, the term applied to the analysis of software (and computer hardware) whose results are obtained purely through the use of rigorous mathematical methods. The mathematical techniques used include denotational semantics, axiomatic semantics, operational semantics, and abstract interpretation.
By a straightforward reduction to the halting problem, it is possible to prove that (for any Turing-complete language) finding all possible run-time errors in an arbitrary program (or, more generally, any kind of violation of a specification on the final result of a program) is undecidable: there is no mechanical method that can always answer truthfully whether an arbitrary program may or may not exhibit runtime errors. This result dates from the works of Church, Gödel and Turing in the 1930s (see: halting problem and Rice's theorem). As with many undecidable questions, one can still attempt to give useful approximate solutions.

Some of the implementation techniques of formal static analysis include:[14]

Data-driven static analysis leverages extensive codebases to infer coding rules and improve the accuracy of the analysis.[16][17] For instance, one can use all the Java open-source packages available on GitHub to learn good analysis strategies. The rule inference can use machine-learning techniques.[18] It is also possible to learn from a large number of past fixes and warnings.[16]

Static analyzers produce warnings. For certain types of warnings, it is possible to design and implement automated remediation techniques. For example, Logozzo and Ball have proposed automated remediations for cccheck, a static analyzer for C#.[19]
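As a tiny illustration of the approximate, lint-style checks discussed above, the following Python sketch (written for this article) walks a program's abstract syntax tree and flags a pattern that may be a bug. Like any static analysis, it never runs the program, so it necessarily approximates: it catches only literal divisors and would miss a variable that happens to be zero at runtime.

```python
import ast

SOURCE = """
def f(x):
    y = x / 0          # should be flagged: literal division by zero
    return y
"""

# A deliberately simple static check: report any division whose right
# operand is the literal constant 0. The program is parsed, not executed.
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
            and isinstance(node.right, ast.Constant) and node.right.value == 0):
        print(f"line {node.lineno}: possible division by zero")
```

Industrial analyzers layer far more sophisticated techniques (dataflow, abstract interpretation) on the same basic idea of inspecting program structure without execution.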
https://en.wikipedia.org/wiki/Static_code_analysis
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code.

In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Codes generally substitute strings of characters of varying length in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning to another: words and phrases can be coded as letters or numbers, and codes typically carry a direct meaning from input to key. Codes primarily function to save time. Ciphers, by contrast, are algorithmic: the given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information. Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates."

When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message, with some exceptions such as ROT13 and Atbash. Most modern ciphers can be categorized in several ways:

Originating from the Sanskrit word for zero, शून्य (śūnya), via the Arabic word صفر (ṣifr), the word "cipher" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.[1] The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".

In casual contexts, "code" and "cipher" can typically be used interchangeably; however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message. An example of this is the commercial telegraph code, which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams. Another example is given by whole-word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizes Kanji (Chinese characters used in Japanese) to supplement the native Japanese characters representing syllables.
An example using English with Kanji could be to replace "The quick brown fox jumps over the lazy dog" by "The quick brown 狐 jumps 上 the lazy 犬". Stenographers sometimes use specific symbols to abbreviate whole words.

Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are used synonymously with substitution and transposition, respectively.

Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding, codetext, decoding" and so on. However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.

There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.

The Caesar cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifts each letter in the alphabet three places, wrapping the remaining letters around to the front, to write to Marcus Tullius Cicero in approximately 50 BC.

Historical pen-and-paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP", where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext–ciphertext pairs.[2][3]

In the 1640s, the Parliamentarian commander Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during the English Civil War.[4] The English theologian John Wilkins published a book in 1641 titled Mercury, or The Secret and Swift Messenger, describing a musical cipher wherein letters of the alphabet were substituted for musical notes.[5][6] This species of melodic cipher was depicted in greater detail by the author Abraham Rees in his book Cyclopædia (1778).[7]

Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère), which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF", where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen-and-paper encryption are easy to crack.[8] It is possible to create a secure pen-and-paper cipher based on a one-time pad, but these have other disadvantages.

During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires.
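To make the classical examples above concrete, here is a minimal Python sketch (written for this article) of the shift-by-three Caesar cipher and the monoalphabetic substitution used in the "GOOD DOG" example:

```python
import string

def caesar(text, shift=3):
    """Caesar cipher: shift each letter three places, wrapping past 'Z'."""
    out = []
    for ch in text:
        if ch in string.ascii_uppercase:
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)              # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("GOOD DOG"))               # -> JRRG GRJ

# Monoalphabetic substitution with the article's example mapping.
subst = str.maketrans({"O": "L", "G": "P", "D": "X"})
assert "GOOD DOG".translate(subst) == "PLLX XLP"
```

Both fall to elementary cryptanalysis (frequency counting, or simply trying all 25 shifts), which is why the article describes them as easy to crack.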
Although these machine-based encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack them. Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data. By type of key used, ciphers are divided into symmetric key algorithms and asymmetric key algorithms. In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. In an asymmetric key algorithm (e.g., RSA), there are two different keys: a public key used for encryption and a private key used for decryption. The design of AES (the Advanced Encryption Standard) was beneficial because it aimed to overcome the flaws in the design of DES (the Data Encryption Standard). AES's designers claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure. By type of input data, ciphers can be distinguished into two types: block ciphers, which encrypt blocks of data of fixed size, and stream ciphers, which encrypt continuous streams of data. In a pure mathematical attack (i.e., one lacking any other information to help break the cipher), two factors above all count: the computational power available to the attacker and the key size. Since the desired effect is computational difficulty, in theory one would choose an algorithm and a desired difficulty level, and thus decide the key length accordingly. Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: the one-time pad.[9]
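To make Shannon's result concrete, here is a minimal sketch of a one-time pad over bytes. Representing the pad as byte-wise XOR and using Python's secrets module for randomness are choices made for this illustration, not details taken from the text above; the perfect-secrecy guarantee holds only if the key is truly random, as long as the message, kept secret, and never reused.

    import secrets

    def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        # Draw a truly random key exactly as long as the message;
        # shortening or reusing the key forfeits the unbreakability proof.
        key = secrets.token_bytes(len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        return ciphertext, key

    def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        # XOR is its own inverse, so decryption is the same operation.
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    message = b"PROCEED TO THE FOLLOWING COORDINATES"
    ciphertext, key = otp_encrypt(message)
    assert otp_decrypt(ciphertext, key) == message

Without the key, every plaintext of the same length is equally consistent with an intercepted ciphertext, which is why the key's length, randomness, and single use, rather than any algorithmic complexity, carry the entire security argument.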
https://en.wikipedia.org/wiki/Cipher#Perfect_secrecy
Isaac Asimov (/ˈæzɪmɒv/ AZ-im-ov;[b][c] c. January 2, 1920[a] – April 6, 1992) was an American writer and professor of biochemistry at Boston University. During his lifetime, Asimov was considered one of the "Big Three" science fiction writers, along with Robert A. Heinlein and Arthur C. Clarke.[2] A prolific writer, he wrote or edited more than 500 books. He also wrote an estimated 90,000 letters and postcards.[d] Best known for his hard science fiction, Asimov also wrote mysteries and fantasy, as well as popular science and other non-fiction. Asimov's most famous work is the Foundation series,[3] the first three books of which won the one-time Hugo Award for "Best All-Time Series" in 1966.[4] His other major series are the Galactic Empire series and the Robot series. The Galactic Empire novels are set in the much earlier history of the same fictional universe as the Foundation series. Later, with Foundation and Earth (1986), he linked this distant future to the Robot series, creating a unified "future history" for his works.[5] He also wrote more than 380 short stories, including the social science fiction novelette "Nightfall", which in 1964 was voted the best short science fiction story of all time by the Science Fiction Writers of America. Asimov wrote the Lucky Starr series of juvenile science-fiction novels using the pen name Paul French.[6] Most of his popular science books explain concepts in a historical way, going as far back as possible to a time when the science in question was at its simplest stage. Examples include Guide to Science, the three-volume Understanding Physics, and Asimov's Chronology of Science and Discovery. He wrote on numerous other scientific and non-scientific topics, such as chemistry, astronomy, mathematics, history, biblical exegesis, and literary criticism. He was the president of the American Humanist Association.[7] Several entities have been named in his honor, including the asteroid (5020) Asimov,[8] a crater on Mars,[9][10] a Brooklyn elementary school,[11] Honda's humanoid robot ASIMO,[12] and four literary awards. There are three very simple English words: 'Has', 'him' and 'of'. Put them together like this—'has-him-of'—and say it in the ordinary fashion. Now leave out the two h's and say it again and you have Asimov. Asimov's family name derives from the first part of озимый хлеб (ozímyj khleb), meaning 'winter grain' (specifically rye) in which his great-great-great-grandfather dealt, with the Russian surname ending -ov added.[14] Azimov is spelled Азимов in the Cyrillic alphabet.[1] When the family arrived in the United States in 1923 and their name had to be spelled in the Latin alphabet, Asimov's father spelled it with an S, believing this letter to be pronounced like Z (as in German), and so it became Asimov.[1] This later inspired one of Asimov's short stories, "Spell My Name with an S".[15] Asimov refused early suggestions of using a more common name as a pseudonym, believing that its recognizability helped his career. After becoming famous, he often met readers who believed that "Isaac Asimov" was a distinctive pseudonym created by an author with a common name.[16] Asimov was born in Petrovichi, Russian SFSR,[17] on an unknown date between October 4, 1919, and January 2, 1920, inclusive.
Asimov celebrated his birthday on January 2.[a] Asimov's parents were Russian Jews, Anna Rachel (née Berman) and Judah Asimov, the son of a miller.[18] He was named Isaac after his mother's father, Isaac Berman.[19] Asimov wrote of his father, "My father, for all his education as an Orthodox Jew, was not Orthodox in his heart", noting that "he didn't recite the myriad prayers prescribed for every action, and he never made any attempt to teach them to me."[20] In 1921, Asimov and 16 other children in Petrovichi developed double pneumonia. Only Asimov survived.[21] He had two younger siblings: a sister, Marcia (born Manya;[22] June 17, 1922 – April 2, 2011),[23] and a brother, Stanley (July 25, 1929 – August 16, 1995), who would become vice-president of Newsday.[24][25] Asimov's family travelled to the United States via Liverpool on the RMS Baltic, arriving on February 3, 1923,[26] when he was three years old. His parents spoke Yiddish and English to him; he never learned Russian, his parents using it as a secret language "when they wanted to discuss something privately that my big ears were not to hear".[27][28] Growing up in Brooklyn, New York, Asimov taught himself to read at the age of five (and later taught his sister to read as well, enabling her to enter school in the second grade).[29] His mother got him into first grade a year early by claiming he was born on September 7, 1919.[30][31] In third grade he learned about the "error" and insisted on an official correction of the date to January 2.[32] He became a naturalized U.S. citizen in 1928 at the age of eight.[33] After becoming established in the U.S., his parents owned a succession of candy stores in which everyone in the family was expected to work. The candy stores sold newspapers and magazines, which Asimov credited as a major influence in his lifelong love of the written word, as it presented him as a child with an unending supply of new reading material (including pulp science fiction magazines)[34] that he could not have otherwise afforded. Asimov began reading science fiction at age nine, at the time that the genre was becoming more science-centered.[35] Asimov was also a frequent patron of the Brooklyn Public Library during his formative years.[36] Asimov attended New York City public schools from age five, including Boys High School in Brooklyn.[37] Graduating at 15, he attended the City College of New York for several days before accepting a scholarship at Seth Low Junior College. This was a branch of Columbia University in Downtown Brooklyn designed to absorb some of the academically qualified Jewish and Italian-American students who applied to the more prestigious Columbia College but exceeded the unwritten ethnic admission quotas which were common at the time. Originally a zoology major, Asimov switched to chemistry after his first semester because he disapproved of "dissecting an alley cat". After Seth Low Junior College closed in 1936, Asimov finished his Bachelor of Science degree at Columbia's Morningside Heights campus (later the Columbia University School of General Studies)[38] in 1939. (In 1983, Dr.
Robert Pollack (dean of Columbia College, 1982–1989) granted Asimov an honorary doctorate from Columbia College after requiring that Asimov place his foot in a bucket of water to pass the college's swimming requirement.[39]) After two rounds of rejections by medical schools, Asimov applied to the graduate program in chemistry at Columbia in 1939; initially he was rejected and then only accepted on a probationary basis.[40] He completed his Master of Arts degree in chemistry in 1941 and earned a Doctor of Philosophy degree in chemistry in 1948.[e][45][46] During his chemistry studies, he also learned French and German.[47] From 1942 to 1945, during World War II, between his master's and doctoral studies, Asimov worked as a civilian chemist at the Philadelphia Navy Yard's Naval Air Experimental Station and lived in the Walnut Hill section of West Philadelphia.[48][49] In September 1945, he was conscripted into the post-war U.S. Army; if he had not had his birth date corrected while at school, he would have been officially 26 years old and ineligible.[50] In 1946, a bureaucratic error caused his military allotment to be stopped, and he was removed from a task force days before it sailed to participate in Operation Crossroads nuclear weapons tests at Bikini Atoll.[51] He was promoted to corporal on July 11 before receiving an honorable discharge on July 26, 1946.[52][f] After completing his doctorate and a postdoctoral year with Robert Elderfield,[54] Asimov was offered the position of associate professor of biochemistry at the Boston University School of Medicine. This was in large part due to his years-long correspondence with William Boyd, a former associate professor of biochemistry at Boston University, who had initially contacted Asimov to compliment him on his story "Nightfall".[55] Upon receiving a promotion to professor of immunochemistry, Boyd reached out to Asimov, asking him to be his replacement. The initial offer of a professorship was withdrawn, and Asimov was offered the position of instructor of biochemistry instead, which he accepted.[56] He began work in 1949 with a $5,000 salary[57] (equivalent to $66,000 in 2024), maintaining this position for several years.[58] By 1952, however, he was making more money as a writer than from the university, and he eventually stopped doing research, confining his university role to lecturing students.[g] In 1955, he was promoted to tenured associate professor. In December 1957, Asimov was dismissed from his teaching post, with effect from June 30, 1958, due to his lack of research. After a struggle lasting two years, he reached an agreement with the university that he would keep his title[60] and give the opening lecture each year for a biochemistry class.[61] On October 18, 1979, the university honored his writing by promoting him to full professor of biochemistry.[62] Asimov's personal papers from 1965 onward are archived at the university's Mugar Memorial Library, to which he donated them at the request of curator Howard Gotlieb.[63][64] In 1959, after a recommendation from Arthur Obermayer, Asimov's friend and a scientist on the U.S. missile defense project, Asimov was approached by DARPA to join Obermayer's team.
Asimov declined on the grounds that his ability to write freely would be impaired should he receive classified information, but submitted a paper to DARPA titled "On Creativity"[65] containing ideas on how government-based science projects could encourage team members to think more creatively.[66] Asimov met his first wife, Gertrude Blugerman (May 16, 1917, Toronto, Canada[67] – October 17, 1990, Boston, U.S.[68]), on a blind date on February 14, 1942, and married her on July 26.[69] The couple lived in an apartment in West Philadelphia while Asimov was employed at the Philadelphia Navy Yard (where two of his co-workers were L. Sprague de Camp and Robert A. Heinlein). Gertrude returned to Brooklyn while he was in the Army, and they both lived there from July 1946 before moving to Stuyvesant Town, Manhattan, in July 1948. They moved to Boston in May 1949, then to the nearby suburbs of Somerville in July 1949, Waltham in May 1951, and, finally, West Newton in 1956.[70] They had two children, David (born 1951) and Robyn Joan (born 1955).[71] In 1970, they separated and Asimov moved back to New York, this time to the Upper West Side of Manhattan, where he lived for the rest of his life.[72] He began seeing Janet O. Jeppson, a psychiatrist and science-fiction writer, and married her on November 30, 1973,[73] two weeks after his divorce from Gertrude.[74] Asimov was a claustrophile: he enjoyed small, enclosed spaces.[75][h] In the third volume of his autobiography, he recalls a childhood desire to own a magazine stand in a New York City Subway station, within which he could enclose himself and listen to the rumble of passing trains while reading.[76] Asimov was afraid of flying, doing so only twice: once in the course of his work at the Naval Air Experimental Station and once returning home from Oʻahu in 1946. Consequently, he seldom traveled great distances. This phobia influenced several of his fiction works, such as the Wendell Urth mystery stories and the Robot novels featuring Elijah Baley. In his later years, Asimov found enjoyment traveling on cruise ships, beginning in 1972 when he viewed the Apollo 17 launch from a cruise ship.[77] On several cruises, he was part of the entertainment program, giving science-themed talks aboard ships such as the Queen Elizabeth 2.[78] He sailed to England in June 1974 on the SS France for a trip mostly devoted to lectures in London and Birmingham,[79] though he also found time to visit Stonehenge[80] and Shakespeare's birthplace.[81] Asimov was a teetotaler.[83] He was an able public speaker and was regularly invited to give talks about science in his distinct New York accent. He participated in many science fiction conventions, where he was friendly and approachable.[78] He patiently answered tens of thousands of questions and other mail with postcards and was pleased to give autographs. He was of medium height, 5 ft 9 in (1.75 m),[84] and stocky build. In his later years, he adopted a signature style of "mutton-chop" sideburns.[85][86] He took to wearing bolo ties after his wife Janet objected to his clip-on bow ties.[87] He never learned to swim or ride a bicycle, but did learn to drive a car after he moved to Boston.
In his humor book Asimov Laughs Again, he describes Boston driving as "anarchy on wheels".[88] Asimov's wide interests included his participation in later years in organizations devoted to the comic operas of Gilbert and Sullivan.[78] Many of his short stories mention or quote Gilbert and Sullivan.[89] He was a prominent member of The Baker Street Irregulars, the leading Sherlock Holmes society,[78] for whom he wrote an essay arguing that Professor Moriarty's work "The Dynamics of An Asteroid" involved the willful destruction of an ancient, civilized planet. He was also a member of the male-only literary banqueting club the Trap Door Spiders, which served as the basis of his fictional group of mystery solvers, the Black Widowers.[90] He later used his essay on Moriarty's work as the basis for a Black Widowers story, "The Ultimate Crime", which appeared in More Tales of the Black Widowers.[91][92] In 1984, the American Humanist Association (AHA) named him the Humanist of the Year. He was one of the signers of the Humanist Manifesto.[93] From 1985 until his death in 1992, he served as honorary president of the AHA, and was succeeded by his friend and fellow writer Kurt Vonnegut. He was also a close friend of Star Trek creator Gene Roddenberry, and earned a screen credit as "special science consultant" on Star Trek: The Motion Picture for his advice during production.[94] Asimov was a founding member of the Committee for the Scientific Investigation of Claims of the Paranormal, CSICOP (now the Committee for Skeptical Inquiry),[95] and is listed in its Pantheon of Skeptics.[96] In a discussion with James Randi at CSICon 2016 regarding the founding of CSICOP, Kendrick Frazier said that Asimov was "a key figure in the Skeptical movement who is less well known and appreciated today, but was very much in the public eye back then." He said that Asimov's being associated with CSICOP "gave it immense status and authority" in his eyes.[97]: 13:00 Asimov described Carl Sagan as one of only two people he ever met whose intellect surpassed his own. The other, he claimed, was the computer scientist and artificial intelligence expert Marvin Minsky.[98] Asimov was an on-and-off member and honorary vice president of Mensa International, albeit reluctantly;[99] he described some members of that organization as "brain-proud and aggressive about their IQs".[100][i] After his father died in 1969, Asimov annually contributed to a Judah Asimov Scholarship Fund at Brandeis University.[103] In 2006, he was named by the Carnegie Corporation of New York to the inaugural class of winners of the Great Immigrants Award.[104] In 1977, Asimov had a heart attack. In December 1983, he had triple bypass surgery at NYU Medical Center, during which he contracted HIV from a blood transfusion.[105] His HIV status was kept secret out of concern that the anti-AIDS prejudice might extend to his family members.[106] He died in Manhattan on April 6, 1992, and was cremated.[107] The cause of death was reported as heart and kidney failure.[108][109][110] Ten years following Asimov's death, Janet and Robyn Asimov agreed that the HIV story should be made public; Janet revealed it in her edition of his autobiography, It's Been a Good Life.[105][110][106][111] [T]he only thing about myself that I consider to be severe enough to warrant psychoanalytic treatment is my compulsion to write ... That means that my idea of a pleasant time is to go up to my attic, sit at my electric typewriter (as I am doing right now), and bang away, watching the words take shape like magic before my eyes.
Asimov's career can be divided into several periods. His early career, dominated by science fiction, began with short stories in 1939 and novels in 1950. This lasted until about 1958, all but ending after publication of The Naked Sun (1957). He began publishing nonfiction as co-author of a college-level textbook called Biochemistry and Human Metabolism. Following the brief orbit of the first human-made satellite, Sputnik I, by the USSR in 1957, he wrote more nonfiction, particularly popular science books, and less science fiction. Over the next quarter-century, he wrote only four science fiction novels and 120 nonfiction books. Starting in 1982, the second half of his science fiction career began with the publication of Foundation's Edge. From then until his death, Asimov published several more sequels and prequels to his existing novels, tying them together in a way he had not originally anticipated, making a unified series. There are many inconsistencies in this unification, especially in his earlier stories.[113] Doubleday and Houghton Mifflin published about 60% of his work up to 1969, Asimov stating that "both represent a father image".[61] Asimov believed his most enduring contributions would be his "Three Laws of Robotics" and the Foundation series.[114] The Oxford English Dictionary credits his science fiction for introducing into the English language the words "robotics", "positronic" (an entirely fictional technology), and "psychohistory" (which is also used for a different study on historical motivations). Asimov coined the term "robotics" without suspecting that it might be an original word; at the time, he believed it was simply the natural analogue of words such as mechanics and hydraulics, but for robots. Unlike his word "psychohistory", the word "robotics" continues in mainstream technical use with Asimov's original definition. Star Trek: The Next Generation featured androids with "positronic brains", and the first-season episode "Datalore" called the positronic brain "Asimov's dream".[115] Asimov was so prolific and diverse in his writing that his books span all major categories of the Dewey Decimal Classification except for category 100, philosophy and psychology.[116] However, he wrote several essays about psychology,[117] and forewords for the books The Humanist Way (1988) and In Pursuit of Truth (1982),[118] which were classified in the 100s category, but none of his own books were classified in that category.[116] According to UNESCO's Index Translationum database, Asimov is the world's 24th-most-translated author.[119] No matter how various the subject matter I write on, I was a science-fiction writer first and it is as a science-fiction writer that I want to be identified. Asimov became a science fiction fan in 1929,[121] when he began reading the pulp magazines sold in his family's candy store.[122] At first his father forbade reading pulps until Asimov persuaded him that because the science fiction magazines had "Science" in the title, they must be educational.[123] At age 18 he joined the Futurians science fiction fan club, where he made friends who went on to become science fiction writers or editors.[124] Asimov began writing at the age of 11, imitating The Rover Boys with eight chapters of The Greenville Chums at College. His father bought him a used typewriter at age 16.[61] His first published work was a humorous item on the birth of his brother for Boys High School's literary journal in 1934.
In May 1937 he first thought of writing professionally, and began writing his first science fiction story, "Cosmic Corkscrew" (now lost), that year. On May 17, 1938, puzzled by a change in the schedule of Astounding Science Fiction, Asimov visited its publisher, Street & Smith Publications. Inspired by the visit, he finished the story on June 19, 1938, and personally submitted it to Astounding editor John W. Campbell two days later. Campbell met with Asimov for more than an hour and promised to read the story himself. Two days later he received a detailed rejection letter.[121] This was the first of what became almost weekly meetings with the editor while Asimov lived in New York, until moving to Boston in 1949;[57] Campbell had a strong formative influence on Asimov and became a personal friend.[125] By the end of the month, Asimov completed a second story, "Stowaway". Campbell rejected it on July 22 but—in "the nicest possible letter you could imagine"—encouraged him to continue writing, promising that Asimov might sell his work after another year and a dozen stories of practice.[121] On October 21, 1938, he sold the third story he finished, "Marooned Off Vesta", to Amazing Stories, edited by Raymond A. Palmer, and it appeared in the March 1939 issue. Asimov was paid $64 (equivalent to $1,430 in 2024), or one cent a word.[61][126] Two more stories appeared that year, "The Weapon Too Dreadful to Use" in the May Amazing and "Trends" in the July Astounding, the issue fans later selected as the start of the Golden Age of Science Fiction.[16] For 1940, ISFDB catalogs seven stories in four different pulp magazines, including one in Astounding.[127] His earnings became enough to pay for his education, but not yet enough for him to become a full-time writer.[126] He later said that unlike other Golden Age writers Heinlein and A. E. van Vogt—also first published in 1939, and whose talent and stardom were immediately obvious—Asimov "(this is not false modesty) came up only gradually".[16] Through July 29, 1940, Asimov wrote 22 stories in 25 months, of which 13 were published; he wrote in 1972 that from that date he never wrote a science fiction story that was not published (except for two "special cases"[j]).[130] By 1941 Asimov was famous enough that Donald Wollheim told him that he purchased "The Secret Sense" for a new magazine only because of his name,[131] and the December 1940 issue of Astonishing—featuring Asimov's name in bold—was the first magazine to base cover art on his work,[132] but Asimov later said that neither he nor anyone else—except perhaps Campbell—considered him better than an often published "third rater".[133] Based on a conversation with Campbell, Asimov wrote "Nightfall", his 32nd story, in March and April 1941, and Astounding published it in September 1941. In 1968 the Science Fiction Writers of America voted "Nightfall" the best science fiction short story ever written.[108][133] In Nightfall and Other Stories Asimov wrote, "The writing of 'Nightfall' was a watershed in my professional career ... I was suddenly taken seriously and the world of science fiction became aware that I existed.
As the years passed, in fact, it became evident that I had written a 'classic'."[134] "Nightfall" is an archetypal example of social science fiction, a term he created to describe a new trend in the 1940s, led by authors including him and Heinlein, away from gadgets and space opera and toward speculation about the human condition.[135] After writing "Victory Unintentional" in January and February 1942, Asimov did not write another story for a year. He expected to make chemistry his career, and was paid $2,600 annually at the Philadelphia Navy Yard, enough to marry his girlfriend; he did not expect to make much more from writing than the $1,788.50 he had earned from the 28 stories he had already sold over four years. Asimov left science fiction fandom and no longer read new magazines, and might have left the writing profession had Heinlein and de Camp not been his coworkers at the Navy Yard and had his previously sold stories not continued to appear.[136] In 1942, Asimov published the first of his Foundation stories—later collected in the Foundation trilogy: Foundation (1951), Foundation and Empire (1952), and Second Foundation (1953). The books describe the fall of a vast interstellar empire and the establishment of its eventual successor. They feature his fictional science of psychohistory, whose theories could predict the future course of history according to dynamical laws regarding the statistical analysis of mass human actions.[137] Campbell raised his rate per word, Orson Welles purchased rights to "Evidence", and anthologies reprinted his stories. By the end of the war Asimov was earning as a writer an amount equal to half of his Navy Yard salary, even after a raise, but Asimov still did not believe that writing could support him, his wife, and future children.[138][139] His "positronic" robot stories—many of which were collected in I, Robot (1950)—were begun at about the same time. They promulgated a set of rules of ethics for robots (see Three Laws of Robotics) and intelligent machines that greatly influenced other writers and thinkers in their treatment of the subject. Asimov notes in his introduction to the short story collection The Complete Robot (1982) that he was largely inspired by the tendency of robots up to that time to fall consistently into a Frankenstein plot in which they destroyed their creators. The Robot series has led to film adaptations. With Asimov's collaboration, in about 1977, Harlan Ellison wrote a screenplay of I, Robot that Asimov hoped would lead to "the first really adult, complex, worthwhile science fiction film ever made". The screenplay has never been filmed and was eventually published in book form in 1994. The 2004 movie I, Robot, starring Will Smith, was based on an unrelated script by Jeff Vintar titled Hardwired, with Asimov's ideas incorporated later after the rights to Asimov's title were acquired.[140] (The title was not original to Asimov but had previously been used for a story by Eando Binder.) Also, one of Asimov's robot short stories, "The Bicentennial Man", was expanded into the novel The Positronic Man by Asimov and Robert Silverberg, and this was adapted into the 1999 movie Bicentennial Man, starring Robin Williams.[94] In 1966 the Foundation trilogy won the Hugo Award for the all-time best series of science fiction and fantasy novels,[141] and they, along with the Robot series, are his most famous science fiction.
Besides movies, his Foundation and Robot stories have inspired other derivative works of science fiction literature, many by well-known and established authors such as Roger MacBride Allen, Greg Bear, Gregory Benford, David Brin, and Donald Kingsbury. At least some of these appear to have been done with the blessing of, or at the request of, Asimov's widow, Janet Asimov.[142][143][144] In 1948, he also wrote a spoof chemistry article, "The Endochronic Properties of Resublimated Thiotimoline". At the time, Asimov was preparing his own doctoral dissertation, which would include an oral examination. Fearing a prejudicial reaction from his graduate school evaluation board at Columbia University, Asimov asked his editor that it be released under a pseudonym. When it nevertheless appeared under his own name, Asimov grew concerned that his doctoral examiners might think he wasn't taking science seriously. At the end of the examination, one evaluator turned to him, smiling, and said, "What can you tell us, Mr. Asimov, about the thermodynamic properties of the compound known as thiotimoline?" Laughing hysterically with relief, Asimov had to be led out of the room. After a five-minute wait, he was summoned back into the room and congratulated as "Dr. Asimov".[145] Demand for science fiction greatly increased during the 1950s, making it possible for a genre author to write full-time.[146] In 1949, book publisher Doubleday's science fiction editor, Walter I. Bradbury, accepted Asimov's unpublished "Grow Old with Me" (40,000 words), but requested that it be extended to a full novel of 70,000 words. The book appeared under the Doubleday imprint in January 1950 with the title of Pebble in the Sky.[57] Doubleday published five more original science fiction novels by Asimov in the 1950s, along with the six juvenile Lucky Starr novels, the latter under the pseudonym "Paul French".[147] Doubleday also published collections of Asimov's short stories, beginning with The Martian Way and Other Stories in 1955. The early 1950s also saw Gnome Press publish one collection of Asimov's positronic robot stories as I, Robot and his Foundation stories and novelettes as the three books of the Foundation trilogy. More positronic robot stories were republished in book form as The Rest of the Robots. Book publishers and the magazines Galaxy and Fantasy & Science Fiction ended Asimov's dependence on Astounding. He later described the era as his "'mature' period". Asimov's "The Last Question" (1956), on the ability of humankind to cope with and potentially reverse the process of entropy, was his personal favorite story.[148] In 1972, his stand-alone novel The Gods Themselves was published to general acclaim, winning Best Novel in the Hugo,[149] Nebula,[149] and Locus Awards.[150] In December 1974, former Beatle Paul McCartney approached Asimov and asked him to write the screenplay for a science-fiction movie musical. McCartney had a vague idea for the plot and a small scrap of dialogue, about a rock band whose members discover they are being impersonated by extraterrestrials. The band and their impostors would likely be played by McCartney's group Wings, then at the height of their career. Though not generally a fan of rock music, Asimov was intrigued by the idea and quickly produced a treatment outline of the story adhering to McCartney's overall idea but omitting McCartney's scrap of dialogue.
McCartney rejected it, and the treatment now exists only in the Boston University archives.[151] Asimov said in 1969 that he had "the happiest of all my associations with science fiction magazines" with Fantasy & Science Fiction; "I have no complaints about Astounding, Galaxy, or any of the rest, heaven knows, but F&SF has become something special to me".[152] Beginning in 1977, Asimov lent his name to Isaac Asimov's Science Fiction Magazine (now Asimov's Science Fiction) and wrote an editorial for each issue. There was also a short-lived Asimov's SF Adventure Magazine and a companion Asimov's Science Fiction Anthology reprint series, published as magazines (in the same manner as the stablemates Ellery Queen's Mystery Magazine's and Alfred Hitchcock's Mystery Magazine's "anthologies").[153] Due to pressure by fans on Asimov to write another book in his Foundation series,[58] he did so with Foundation's Edge (1982) and Foundation and Earth (1986), and then went back to before the original trilogy with Prelude to Foundation (1988) and Forward the Foundation (1992), his last novel. He also helped Leonard Nimoy flesh out the premise of the science fiction comic Primortals (1995–1997).[154] Just say I am one of the most versatile writers in the world, and the greatest popularizer of many subjects. Asimov and two colleagues published a textbook in 1949, with two more editions by 1969.[61] During the late 1950s and 1960s, Asimov substantially decreased his fiction output (he published only four adult novels between 1957's The Naked Sun and 1982's Foundation's Edge, two of which were mysteries). He greatly increased his nonfiction production, writing mostly on science topics; the launch of Sputnik in 1957 engendered public concern over a "science gap".[155] Asimov explained in The Rest of the Robots that he had been unable to write substantial fiction since the summer of 1958, and observers understood him as saying that his fiction career had ended, or was permanently interrupted.[156] Asimov recalled in 1969 that "the United States went into a kind of tizzy, and so did I. I was overcome by the ardent desire to write popular science for an America that might be in great danger through its neglect of science, and a number of publishers got an equally ardent desire to publish popular science for the same reason".[157] Fantasy and Science Fiction invited Asimov to continue his regular nonfiction column, begun in the now-folded bimonthly companion magazine Venture Science Fiction Magazine. The first of 399 monthly F&SF columns appeared in November 1958, and they continued until his terminal illness.[158][k] These columns, periodically collected into books by Doubleday,[61] gave Asimov a reputation as a "Great Explainer" of science; he described them as his only popular science writing in which he never had to assume complete ignorance of the subjects on the part of his readers.
The column was ostensibly dedicated to popular science, but Asimov had complete editorial freedom and wrote about contemporary social issues[citation needed] in essays such as "Thinking About Thinking"[159] and "Knock Plastic!".[160] In 1975 he wrote of these essays: "I get more pleasure out of them than out of any other writing assignment."[161] Asimov's first wide-ranging reference work, The Intelligent Man's Guide to Science (1960), was nominated for a National Book Award, and in 1963 he won a Hugo Award—his first—for his essays for F&SF.[162] The popularity of his science books and the income he derived from them allowed him to give up most academic responsibilities and become a full-time freelance writer.[163] He encouraged other science fiction writers to write popular science, stating in 1967 that "the knowledgeable, skillful science writer is worth his weight in contracts", with "twice as much work as he can possibly handle".[164] The great variety of information covered in Asimov's writings prompted Kurt Vonnegut to ask, "How does it feel to know everything?" Asimov replied that he only knew how it felt to have the 'reputation' of omniscience: "Uneasy".[165] Floyd C. Gale said that "Asimov has a rare talent. He can make your mental mouth water over dry facts",[166] and "science fiction's loss has been science popularization's gain".[167] Asimov said that "Of all the writing I do, fiction, non-fiction, adult, or juvenile, these F & SF articles are by far the most fun".[168] He regretted, however, that he had less time for fiction—causing dissatisfied readers to send him letters of complaint—stating in 1969 that "In the last ten years, I've done a couple of novels, some collections, a dozen or so stories, but that's nothing".[157] In his essay "To Tell a Chemist" (1965), Asimov proposed a simple shibboleth for distinguishing chemists from non-chemists: ask the person to read the word "unionized". Chemists, he noted, will read un-ionized (electrically neutral), while non-chemists will read union-ized (belonging to a trade union). Asimov coined the term "robotics" in his 1941 story "Liar!",[169] though he later remarked that he believed then that he was merely using an existing word, as he stated in Gold ("The Robot Chronicles"). While acknowledging the Oxford Dictionary reference, he incorrectly states that the word was first printed about one third of the way down the first column of page 100 in the March 1942 issue of Astounding Science Fiction – the printing of his short story "Runaround".[170][171] In the same story, Asimov also coined the term "positronic" (the counterpart to "electronic" for positrons).[172] Asimov coined the term "psychohistory" in his Foundation stories to name a fictional branch of science which combines history, sociology, and mathematical statistics to make general predictions about the future behavior of very large groups of people, such as the Galactic Empire. Asimov said later that he should have called it psychosociology. It was first introduced in the five short stories (1942–1944) which would later be collected as the 1951 fix-up novel Foundation.[173] Somewhat later, the term "psychohistory" was applied by others to research of the effects of psychology on history.[174][175] In addition to his interest in science, Asimov was interested in history.
Starting in the 1960s, he wrote 14 popular history books, including The Greeks: A Great Adventure (1965),[176] The Roman Republic (1966),[177] The Roman Empire (1967),[178] The Egyptians (1967),[179] The Near East: 10,000 Years of History (1968),[180] and Asimov's Chronology of the World (1991).[181] He published Asimov's Guide to the Bible in two volumes—covering the Old Testament in 1967 and the New Testament in 1969—and then combined them into one 1,300-page volume in 1981. Complete with maps and tables, the guide goes through the books of the Bible in order, explaining the history of each one and the political influences that affected it, as well as biographical information about the important characters. His interest in literature manifested itself in several annotations of literary works, including Asimov's Guide to Shakespeare (1970),[l] Asimov's Annotated Don Juan (1972), Asimov's Annotated Paradise Lost (1974), and The Annotated Gulliver's Travels (1980).[182] Asimov was also a noted mystery author and a frequent contributor to Ellery Queen's Mystery Magazine. He began by writing science fiction mysteries such as his Wendell Urth stories, but soon moved on to writing "pure" mysteries. He published two full-length mystery novels, and wrote 66 stories about the Black Widowers, a group of men who met monthly for dinner, conversation, and a puzzle. He got the idea for the Widowers from his own association in a stag group called the Trap Door Spiders, and all of the main characters (with the exception of the waiter, Henry, who he admitted resembled Wodehouse's Jeeves) were modeled after his closest friends.[183] A parody of the Black Widowers, "An Evening with the White Divorcés", was written by author, critic, and librarian Jon L. Breen.[184] Asimov joked, "all I can do ... is to wait until I catch him in a dark alley, someday."[185] Toward the end of his life, Asimov published a series of collections of limericks, mostly written by himself, starting with Lecherous Limericks, which appeared in 1975. Limericks: Too Gross, whose title displays Asimov's love of puns, contains 144 limericks by Asimov and an equal number by John Ciardi. He even created a slim volume of Sherlockian limericks. Asimov featured Yiddish humor in Azazel, The Two Centimeter Demon. The two main characters, both Jewish, talk over dinner, or lunch, or breakfast, about anecdotes of "George" and his friend Azazel. Asimov's Treasury of Humor is both a working joke book and a treatise propounding his views on humor theory. According to Asimov, the most essential element of humor is an abrupt change in point of view, one that suddenly shifts focus from the important to the trivial, or from the sublime to the ridiculous.[186][187] Particularly in his later years, Asimov to some extent cultivated an image of himself as an amiable lecher. In 1971, as a response to the popularity of sexual guidebooks such as The Sensuous Woman (by "J") and The Sensuous Man (by "M"), Asimov published The Sensuous Dirty Old Man under the byline "Dr. 'A'"[188] (although his full name was printed on the paperback edition, first published in 1972). However, by 2016, Asimov's habit of groping women was seen as sexual harassment and came under criticism, and was cited as an early example of inappropriate behavior that can occur at science fiction conventions.[189] Asimov published three volumes of autobiography. In Memory Yet Green (1979)[190] and In Joy Still Felt (1980)[191] cover his life up to 1978. The third volume, I.
Asimov: A Memoir (1994),[192] covered his whole life (rather than following on from where the second volume left off). The epilogue was written by his widow Janet Asimov after his death. The book won a Hugo Award in 1995.[193] Janet Asimov edited It's Been a Good Life (2002),[194] a condensed version of his three autobiographies. He also published three volumes of retrospectives of his writing, Opus 100 (1969),[195] Opus 200 (1979),[196] and Opus 300 (1984).[197] In 1987, the Asimovs co-wrote How to Enjoy Writing: A Book of Aid and Comfort. In it they offer advice on how to maintain a positive attitude and stay productive when dealing with discouragement, distractions, rejection, and thick-headed editors. The book includes many quotations, essays, anecdotes, and husband-wife dialogues about the ups and downs of being an author.[198][199] Asimov and Star Trek creator Gene Roddenberry developed a unique relationship during Star Trek's initial launch in the late 1960s. Asimov wrote a critical essay on Star Trek's scientific accuracy for TV Guide magazine. Roddenberry retorted respectfully with a personal letter explaining the limitations of accuracy when writing a weekly series. Asimov corrected himself with a follow-up essay to TV Guide claiming that despite its inaccuracies, Star Trek was a fresh and intellectually challenging science fiction television show. The two remained friends to the point where Asimov even served as an advisor on a number of Star Trek projects.[200] In 1973, Asimov published a proposal for calendar reform, called the World Season Calendar. It divides the year into four seasons (named A–D) of 13 weeks (91 days) each. This allows days to be named, e.g., "D-73" instead of December 1 (due to December 1 being the 73rd day of the 4th quarter). An extra "year day" is added for a total of 365 days.[201] Asimov won more than a dozen annual awards for particular works of science fiction and a half-dozen lifetime awards.[202] He also received 14 honorary doctorate degrees from universities.[203] I have an informal style, which means I tend to use short words and simple sentence structure, to say nothing of occasional colloquialisms. This grates on people who like things that are poetic, weighty, complex, and, above all, obscure. On the other hand, the informal style pleases people who enjoy the sensation of reading an essay without being aware that they are reading and of feeling that ideas are flowing from the writer's brain into their own without mental friction. Asimov was his own secretary, typist, indexer, proofreader, and literary agent.[61] He wrote a typed first draft composed at the keyboard at 90 words per minute; he imagined an ending first, then a beginning, then "let everything in-between work itself out as I come to it". (Asimov used an outline only once, later describing it as "like trying to play the piano from inside a straitjacket".) After correcting a draft by hand, he retyped the document as the final copy and only made one revision with minor editor-requested changes; a word processor did not save him much time, Asimov said, because 95% of the first draft was unchanged.[148][234][235] After disliking making multiple revisions of "Black Friar of the Flame", Asimov refused to make major, second, or non-editorial revisions ("like chewing used gum"), stating that "too large a revision, or too many revisions, indicate that the piece of writing is a failure. In the time it would take to salvage such a failure, I could write a new piece altogether and have infinitely more fun in the process".
He submitted "failures" to another editor.[148][234] Asimov's fiction style is extremely unornamented. In 1980, science fiction scholar James Gunn wrote of I, Robot: Except for two stories—"Liar!" and "Evidence"—they are not stories in which character plays a significant part. Virtually all plot develops in conversation with little if any action. Nor is there a great deal of local color or description of any kind. The dialogue is, at best, functional and the style is, at best, transparent. ... The robot stories—and, as a matter of fact, almost all Asimov fiction—play themselves on a relatively bare stage.[236] Asimov addressed such criticism in 1989 at the beginning of Nemesis: I made up my mind long ago to follow one cardinal rule in all my writing—to be 'clear'. I have given up all thought of writing poetically or symbolically or experimentally, or in any of the other modes that might (if I were good enough) get me a Pulitzer prize. I would write merely clearly and in this way establish a warm relationship between myself and my readers, and the professional critics—Well, they can do whatever they wish.[237] Gunn cited examples of a more complex style, such as the climax of "Liar!". Sharply drawn characters occur at key junctures of his storylines: Susan Calvin in "Liar!" and "Evidence", Arkady Darell in Second Foundation, Elijah Baley in The Caves of Steel, and Hari Seldon in the Foundation prequels. Other than books by Gunn and Joseph Patrouch, there is relatively little literary criticism on Asimov (particularly when compared to the sheer volume of his output). Cowart and Wymer's Dictionary of Literary Biography (1981) gives a possible reason: His words do not easily lend themselves to traditional literary criticism because he has the habit of centering his fiction on plot and clearly stating to his reader, in rather direct terms, what is happening in his stories and why it is happening. In fact, most of the dialogue in an Asimov story, and particularly in the Foundation trilogy, is devoted to such exposition. Stories that clearly state what they mean in unambiguous language are the most difficult for a scholar to deal with because there is little to be interpreted.[238] Gunn's and Patrouch's studies of Asimov both state that a clear, direct prose style is still a style. Gunn's 1982 book comments in detail on each of Asimov's novels. He does not praise all of Asimov's fiction (nor does Patrouch), but calls some passages in The Caves of Steel "reminiscent of Proust". When discussing how that novel depicts night falling over futuristic New York City, Gunn says that Asimov's prose "need not be ashamed anywhere in literary society".[239] Although he prided himself on his unornamented prose style (for which he credited Clifford D. Simak as an early influence[16][240]), and said in 1973 that his style had not changed,[148] Asimov also enjoyed giving his longer stories complicated narrative structures, often by arranging chapters in nonchronological ways. Some readers have been put off by this, complaining that the nonlinearity is not worth the trouble and adversely affects the clarity of the story. For example, the first third of The Gods Themselves begins with Chapter 6, then backtracks to fill in earlier material.[241] (John Campbell advised Asimov to begin his stories as late in the plot as possible. This advice helped Asimov create "Reason", one of the early Robot stories.)
Patrouch found that the interwoven and nested flashbacks of The Currents of Space did serious harm to that novel, to such an extent that only a "dyed-in-the-kyrt[242] Asimov fan" could enjoy it. In his later novel Nemesis, one group of characters lives in the "present" and another group starts in the "past", beginning 15 years earlier and gradually moving toward the time of the first group. Asimov once explained that his reluctance to write about aliens came from an incident early in his career when Astounding's editor John Campbell rejected one of his science fiction stories because the alien characters were portrayed as superior to the humans. The nature of the rejection led him to believe that Campbell may have based his bias towards humans in stories on a real-world racial bias. Unwilling to write only weak alien races, and concerned that a confrontation would jeopardize his and Campbell's friendship, he decided he would not write about aliens at all.[243] Nevertheless, in response to these criticisms, he wrote The Gods Themselves, which contains aliens and alien sex. The book won the Nebula Award for Best Novel in 1972,[213] and the Hugo Award for Best Novel in 1973.[213] Asimov said that of all his writings, he was most proud of the middle section of The Gods Themselves, the part that deals with those themes.[244] In the Hugo Award–winning novelette "Gold", Asimov describes an author, based on himself, who has one of his books (The Gods Themselves) adapted into a "compu-drama", essentially photo-realistic computer animation. The director criticizes the fictionalized Asimov ("Gregory Laborian") for having an extremely nonvisual style, making it difficult to adapt his work, and the author explains that he relies on ideas and dialogue rather than description to get his points across.[245] In the early days of science fiction, some authors and critics felt that romantic elements were inappropriate in science fiction stories, which were supposed to focus on science and technology. Isaac Asimov was a supporter of this point of view, expressed in his 1938–1939 letters to Astounding, where he described such elements as "mush" and "slop". To his dismay, these letters were met with strong opposition.[246] Asimov attributed the lack of romance and sex in his fiction to the "early imprinting" from starting his writing career when he had never been on a date and "didn't know anything about girls".[126] He was sometimes criticized for the general absence of sex (and of extraterrestrial life) in his science fiction. He claimed he wrote The Gods Themselves (1972) to respond to these criticisms,[247] which often came from New Wave science fiction (and often British) writers. The second part (of three) of the novel is set on an alien world with three sexes, and the sexual behavior of these creatures is extensively depicted. There is a perennial question among readers as to whether the views contained in a story reflect the views of the author. The answer is, "Not necessarily—" And yet one ought to add another short phrase "—but usually." Asimov was an atheist and a humanist.[118] He did not oppose religious conviction in others, but he frequently railed against superstitious and pseudoscientific beliefs that tried to pass themselves off as genuine science.
During his childhood, his parents observed the traditions of Orthodox Judaism less stringently than they had in Petrovichi; they did not force their beliefs upon young Isaac, and he grew up without strong religious influences, coming to believe that the Torah represented Hebrew mythology in the same way that the Iliad recorded Greek mythology.[249] When he was 13, he chose not to have a bar mitzvah.[250] As his books Treasury of Humor and Asimov Laughs Again record, Asimov was willing to tell jokes involving God, Satan, the Garden of Eden, Jerusalem, and other religious topics, expressing the viewpoint that a good joke can do more to provoke thought than hours of philosophical discussion.[186][187] For a brief while, his father worked in the local synagogue to enjoy the familiar surroundings and, as Isaac put it, "shine as a learned scholar"[251] versed in the sacred writings. This scholarship was a seed for his later authorship and publication of Asimov's Guide to the Bible, an analysis of the historic foundations for the Old and New Testaments. For many years, Asimov called himself an atheist; he considered the term somewhat inadequate, as it described what he did not believe rather than what he did. Eventually, he described himself as a "humanist" and considered that term more practical. Asimov continued to identify himself as a secular Jew, as stated in his introduction to Jack Dann's anthology of Jewish science fiction, Wandering Stars: "I attend no services and follow no ritual and have never undergone that curious puberty rite, the Bar Mitzvah. It doesn't matter. I am Jewish."[252] When asked in an interview in 1982 if he was an atheist, Asimov replied, I am an atheist, out and out. It took me a long time to say it. I've been an atheist for years and years, but somehow I felt it was intellectually unrespectable to say one was an atheist, because it assumed knowledge that one didn't have. Somehow it was better to say one was a humanist or an agnostic. I finally decided that I'm a creature of emotion as well as of reason. Emotionally I am an atheist. I don't have the evidence to prove that God doesn't exist, but I so strongly suspect he doesn't that I don't want to waste my time.[253] Likewise, he said about religious education: "I would not be satisfied to have my kids choose to be religious without trying to argue them out of it, just as I would not be satisfied to have them decide to smoke regularly or engage in any other practice I consider detrimental to mind or body."[254] In his last volume of autobiography, Asimov wrote, If I were not an atheist, I would believe in a God who would choose to save people on the basis of the totality of their lives and not the pattern of their words. I think he would prefer an honest and righteous atheist to a TV preacher whose every word is God, God, God, and whose every deed is foul, foul, foul.[255] The same memoir states his belief that Hell is "the drooling dream of a sadist" crudely affixed to an all-merciful God; if even human governments were willing to curtail cruel and unusual punishments, wondered Asimov, why would punishment in the afterlife not be restricted to a limited term? Asimov rejected the idea that a human belief or action could merit infinite punishment. If an afterlife existed, he claimed, the longest and most severe punishment would be reserved for those who "slandered God by inventing Hell".[256] Asimov said about using religious motifs in his writing: I tend to ignore religion in my own stories altogether, except when I absolutely have to have it.
... and, whenever I bring in a religious motif, that religion is bound to seem vaguely Christian because that is the only religion I know anything about, even though it is not mine. An unsympathetic reader might think that I am "burlesquing" Christianity, but I am not. Then too, it is impossible to write science fiction and really ignore religion.[257] Asimov became a staunch supporter of the Democratic Party during the New Deal, and thereafter remained a political liberal. He was a vocal opponent of the Vietnam War in the 1960s, and in a television interview during the early 1970s he publicly endorsed George McGovern.[258] He was unhappy about what he considered an "irrationalist" viewpoint taken by many radical political activists from the late 1960s onwards. In his second volume of autobiography, In Joy Still Felt, Asimov recalled meeting the counterculture figure Abbie Hoffman. Asimov's impression was that the 1960s' counterculture heroes had ridden an emotional wave which, in the end, left them stranded in a "no-man's land of the spirit" from which he wondered if they would ever return.[259] Asimov vehemently opposed Richard Nixon, considering him "a crook and a liar". He closely followed Watergate and was pleased when the president was forced to resign. Asimov was dismayed over the pardon extended to Nixon by his successor Gerald Ford: "I was not impressed by the argument that it has spared the nation an ordeal. To my way of thinking, the ordeal was necessary to make certain it would never happen again."[260] After Asimov's name appeared in the mid-1960s on a list of people the Communist Party USA "considered amenable" to its goals, the FBI investigated him. Because of his academic background, the bureau briefly considered Asimov as a possible candidate for known Soviet spy ROBPROF, but found nothing suspicious in his life or background.[261] Asimov appeared to hold an equivocal attitude towards Israel. In his first autobiography, he indicates his support for the safety of Israel, though insisting that he was not a Zionist.[262] In his third autobiography, Asimov stated his opposition to the creation of a Jewish state, on the grounds that he was opposed to having nation-states in general and supported the notion of a single humanity. Asimov especially worried about the safety of Israel given that it had been created among Muslim neighbors "who will never forgive, never forget and never go away", and said that Jews had merely created for themselves another "Jewish ghetto".[n] Asimov believed that "science fiction ... serve[s] the good of humanity".[164] He considered himself a feminist even before women's liberation became a widespread movement; he argued that the issue of women's rights was closely connected to that of population control.[263] Furthermore, he believed that homosexuality must be considered a "moral right" on population grounds, as must all consenting adult sexual activity that does not lead to reproduction.[263] He issued many appeals for population control, reflecting a perspective articulated by people from Thomas Malthus through Paul R. Ehrlich.[264] In a 1988 interview by Bill Moyers, Asimov proposed computer-aided learning, where people would use computers to find information on subjects in which they were interested.[265] He thought this would make learning more interesting, since people would have the freedom to choose what to learn, and would help spread knowledge around the world.
Also, the one-to-one model would let students learn at their own pace.[266] Asimov thought that people would live in space by 2019.[267]

In 1983 Asimov wrote:[268]

Computerization will undoubtedly continue onward inevitably... This means that a vast change in the nature of education must take place, and entire populations must be made "computer-literate" and must be taught to deal with a "high-tech" world.

He continued on education:

Education, which must be revolutionized in the new world, will be revolutionized by the very agency that requires the revolution — the computer. Schools will undoubtedly still exist, but a good schoolteacher can do no better than to inspire curiosity which an interested student can then satisfy at home at the console of his computer outlet. There will be an opportunity finally for every youngster, and indeed, every person, to learn what he or she wants to learn, in his or her own time, at his or her own speed, in his or her own way. Education will become fun because it will bubble up from within and not be forced in from without.

Asimov would often fondle, kiss and pinch women at conventions and elsewhere without regard for their consent. According to Alec Nevala-Lee, author of an Asimov biography[269] and writer on the history of science fiction, Asimov often defended himself by saying that, far from objecting, these women cooperated.[270] In a 1971 satirical piece, The Sensuous Dirty Old Man, Asimov wrote: "The question then is not whether or not a girl should be touched. The question is merely where, when, and how she should be touched."[270]

According to Nevala-Lee, however, "many of these encounters were clearly nonconsensual."[270] He wrote that Asimov's behaviour, as a leading science-fiction author and personality, contributed to an undesirable atmosphere for women in the male-dominated science-fiction community. In support of this, he quoted some of Asimov's contemporary fellow authors such as Judith Merril, Harlan Ellison and Frederik Pohl, as well as editors such as Timothy Seldes.[270] Additional specific incidents were reported by other people, including Edward L. Ferman, long-time editor of The Magazine of Fantasy & Science Fiction, who wrote "...instead of shaking my date's hand, he shook her left breast".[271]

Asimov's defense of civil applications of nuclear power, even after the Three Mile Island nuclear power plant incident, damaged his relations with some of his fellow liberals. In a letter reprinted in Yours, Isaac Asimov,[263] he states that although he would prefer living in "no danger whatsoever" to living near a nuclear reactor, he would still prefer a home near a nuclear power plant to a slum on Love Canal or near "a Union Carbide plant producing methyl isocyanate", the latter a reference to the Bhopal disaster.[263]

In the closing years of his life, Asimov blamed the deterioration of the quality of life that he perceived in New York City on the shrinking tax base caused by the middle-class flight to the suburbs, though he continued to support high taxes on the middle class to pay for social programs.
His last nonfiction book, Our Angry Earth (1991, co-written with his long-time friend, the science fiction author Frederik Pohl), deals with elements of the environmental crisis such as overpopulation, oil dependence, war, global warming, and the destruction of the ozone layer.[272][273] When Bill Moyers asked him "What do you see happening to the idea of dignity to human species if this population growth continues at its present rate?", Asimov responded:

It's going to destroy it all ... if you have 20 people in the apartment and two bathrooms, no matter how much every person believes in freedom of the bathroom, there is no such thing. You have to set up, you have to set up times for each person, you have to bang at the door, aren't you through yet, and so on. And in the same way, democracy cannot survive overpopulation. Human dignity cannot survive it. Convenience and decency cannot survive it. As you put more and more people onto the world, the value of life not only declines, but it disappears.[274]

Asimov enjoyed the writings of J. R. R. Tolkien, and used The Lord of the Rings as a plot point in a Black Widowers story titled "Nothing like Murder".[275] In the essay "All or Nothing" (The Magazine of Fantasy and Science Fiction, January 1981), Asimov said that he admired Tolkien and that he had read The Lord of the Rings five times. (The feelings were mutual, with Tolkien saying that he had enjoyed Asimov's science fiction.[276] This would make Asimov an exception to Tolkien's earlier claim[276] that he rarely found "any modern books" that were interesting to him.)

He acknowledged other writers as superior to himself in talent, saying of Harlan Ellison, "He is (in my opinion) one of the best writers in the world, far more skilled at the art than I am."[277] Asimov disapproved of the New Wave's growing influence, stating in 1967: "I want science fiction. I think science fiction isn't really science fiction if it lacks science. And I think the better and truer the science, the better and truer the science fiction".[164]

The feelings of friendship and respect between Asimov and Arthur C. Clarke were demonstrated by the so-called "Clarke–Asimov Treaty of Park Avenue", negotiated as they shared a cab in New York. It stated that Asimov was required to insist that Clarke was the best science fiction writer in the world (reserving second-best for himself), while Clarke was required to insist that Asimov was the best science writer in the world (reserving second-best for himself). Thus, the dedication in Clarke's book Report on Planet Three (1972) reads: "In accordance with the terms of the Clarke–Asimov treaty, the second-best science writer dedicates this book to the second-best science-fiction writer."

In 1980, Asimov wrote a highly critical review of George Orwell's 1984.[278] Though dismissive of Asimov's attacks, James Machell has stated that they "are easier to understand when you consider that Asimov viewed 1984 as dangerous literature. He opines that if communism were to spread across the globe, it would come in a completely different form to the one in 1984, and by looking to Orwell as an authority on totalitarianism, 'we will be defending ourselves against assaults from the wrong direction and we will lose'."[279]

Asimov became a fan of mystery stories at the same time as science fiction.
He preferred to read the former because "I read every [science fiction] story keenly aware that it might be worse than mine, in which case I had no patience with it, or that it might be better, in which case I felt miserable".[148] Asimov wrote: "I make no secret of the fact that in my mysteries I use Agatha Christie as my model. In my opinion, her mysteries are the best ever written, far better than the Sherlock Holmes stories, and Hercule Poirot is the best detective fiction has seen. Why should I not use as my model what I consider the best?"[280] He enjoyed Sherlock Holmes, but considered Arthur Conan Doyle to be "a slapdash and sloppy writer".[281]

Asimov also enjoyed humorous stories, particularly those of P. G. Wodehouse.[282]

In non-fiction writing, Asimov particularly admired the writing style of Martin Gardner, and tried to emulate it in his own science books. On meeting Gardner for the first time in 1965, Asimov told him this, to which Gardner answered that he had based his own style on Asimov's.[283]

Paul Krugman, holder of a Nobel Prize in Economics, stated that Asimov's concept of psychohistory inspired him to become an economist.[284]

John Jenkins, who has reviewed the vast majority of Asimov's written output, once observed, "It has been pointed out that most science fiction writers since the 1950s have been affected by Asimov, either modeling their style on his or deliberately avoiding anything like his style."[285] Along with such figures as Bertrand Russell and Karl Popper, Asimov left his mark as one of the most distinguished interdisciplinarians of the 20th century.[286] "Few individuals", writes James L. Christian, "understood better than Isaac Asimov what synoptic thinking is all about. His almost 500 books—which he wrote as a specialist, a knowledgeable authority, or just an excited layman—range over almost all conceivable subjects: the sciences, history, literature, religion, and of course, science fiction."[287]

In 2024, DARPA named one of its programs after Asimov, inspired by his "Three Laws of Robotics". The program, Autonomy Standards and Ideals with Military Operational Values (ASIMOV), aims to develop benchmarks for objectively and quantitatively assessing the ethical challenges and readiness of autonomous systems for military operations.[288]

Over a space of 40 years, I published an average of 1,000 words a day. Over the space of the second 20 years, I published an average of 1,700 words a day.

Depending on the counting convention used,[290] and including all titles, charts, and edited collections, there may currently be over 500 books in Asimov's bibliography—as well as his individual short stories, individual essays, and criticism. For his 100th, 200th, and 300th books (based on his personal count), Asimov published Opus 100 (1969), Opus 200 (1979), and Opus 300 (1984), celebrating his writing.[195][196][197] An extensive bibliography of Isaac Asimov's works has been compiled by Ed Seiler.[291] An analysis of his book-writing rate showed that he wrote faster as he wrote more.[292]

An online exhibit in West Virginia University Libraries' virtually complete Asimov Collection displays features, visuals, and descriptions of some of his more than 600 books, games, audio recordings, videos, and wall charts. Many first, rare, and autographed editions are in the Libraries' Rare Book Room.
Book jackets and autographs are presented online along with descriptions and images of children's books, science fiction art, multimedia, and other materials in the collection.[293][294]

The Robot series was originally separate from the Foundation series. The Galactic Empire novels were published as independent stories, set earlier in the same future as Foundation. Later in life, Asimov synthesized the Robot series into a single coherent "history" that appeared in the extension of the Foundation series.[295]

All of these books were published by Doubleday & Co, except the original Foundation trilogy, which was originally published by Gnome Books before being bought and republished by Doubleday.

All published by Doubleday & Co

All published by Walker & Company

Novels marked with an asterisk (*) have minor connections to the Foundation universe.

The following books collected essays which were originally published as monthly columns in The Magazine of Fantasy and Science Fiction and collected by Doubleday & Co

All published by Doubleday

All published by Houghton Mifflin except where otherwise stated
https://en.wikipedia.org/wiki/Isaac_Asimov
In trait theory, the Big Five personality traits (sometimes known as the five-factor model of personality, or the OCEAN or CANOE models) are a group of five characteristics used to study personality: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.[1]

The Big Five traits did not arise from studying an existing theory of personality; rather, they were an empirical finding in early lexical studies that English personality-descriptive adjectives clustered together under factor analysis into five unique factors.[3][4] The factor analysis indicates that these five factors can be measured, but further studies have suggested revisions and critiques of the model. Cross-language studies have found a sixth Honesty-Humility factor, suggesting a replacement by the HEXACO model of personality structure.[5] A study of short-form constructs found that the agreeableness and openness constructs were ill-defined in a larger population, suggesting that these traits should be dropped and replaced by more specific dimensions. In addition, labels such as "neuroticism" are ill-fitting, and the traits are more properly thought of as unnamed dimensions: "Factor A", "Factor B", and so on.[6]

Despite these issues with its formulation, the five-factor approach has been enthusiastically and internationally embraced, becoming central to much of contemporary personality research. Many subsequent factor analyses, variously formulated and expressed in a variety of languages, have repeatedly reported the finding of five largely similar factors. The five-factor approach has been portrayed as a fruitful, scientific achievement: a fundamental advance in the understanding of human personality. Some have claimed that the five factors of personality are "an empirical fact, like the fact that there are seven continents on earth and eight American Presidents from Virginia".[7] Others, such as Jack Block, have expressed concerns over the uncritical acceptance of the approach.[8]

William McDougall, writing in 1932, put forward a conjecture observing that "five distinguishable but separable factors" could be identified when looking at personality. His suggestions, "intellect, character, temperament, disposition and temper", have been seen as "anticipating" the adoption of the Big Five model in subsequent years.[9] The model was built on understanding the relationship between personality and academic behaviour.[10] It was defined by several independent sets of researchers who analysed words describing people's behaviour.[9] These researchers first studied relationships between many words related to personality traits. They shortened these word lists by a factor of 5–10 and then used factor analysis to group the remaining traits (with data mostly based upon people's estimations, in self-report questionnaires and peer ratings) to find the basic factors of personality.[11][12][13][14][15]

The initial model was advanced in 1958 by Ernest Tupes and Raymond Christal, research psychologists at the Lackland Air Force Base in Texas, but failed to reach scholars and scientists until the 1980s. In 1990, J.M.
Digman advanced his five-factor model of personality, which Lewis Goldberg placed at the highest organised level.[16] These five overarching domains have been found to contain most known personality traits and are assumed to represent the basic structure behind them all.[17]

At least four sets of researchers have worked independently for decades to reflect personality traits in language and have mainly identified the same five factors: Tupes and Christal were first, followed by Goldberg at the Oregon Research Institute,[18][19][20][21][22] Cattell at the University of Illinois,[13][23][24][25] and finally Costa and McCrae.[26][27][28][29] These four sets of researchers used somewhat different methods to find the five traits, so the sets of five factors carry varying names and meanings. However, all have been found to be strongly correlated with their corresponding factors.[30][31][32][33][34] Studies indicate that the Big Five traits are not nearly as powerful in predicting and explaining actual behaviour as the more numerous facets or primary traits.[35][36]

Each of the Big Five personality traits contains two separate but correlated aspects, reflecting a level of personality below the broad domains but above the many facet scales that also make up part of the Big Five.[37] The aspects are labelled as follows: Volatility and Withdrawal for Neuroticism; Enthusiasm and Assertiveness for Extraversion; Intellect and Openness for Openness to Experience; Industriousness and Orderliness for Conscientiousness; and Compassion and Politeness for Agreeableness.[37]

In 1884, the British scientist Sir Francis Galton became the first person known to consider deriving a comprehensive taxonomy of human personality traits by sampling language.[11] The idea that this may be possible is known as the lexical hypothesis. In 1936, the American psychologists Gordon Allport of Harvard University and Henry Odbert of Dartmouth College implemented Galton's hypothesis. They arranged for three anonymous people to categorise adjectives from Webster's New International Dictionary and a list of common slang words. The result was a list of 4,504 adjectives they believed were descriptive of observable and relatively permanent traits.[38]

In 1943, Raymond Cattell of Harvard University took Allport and Odbert's list and reduced it to roughly 160 terms by eliminating words with very similar meanings. To these, he added terms from 22 other psychological categories, and additional "interest" and "abilities" terms, resulting in a list of 171 traits. From this he used factor analysis to derive 60 "personality clusters or syndromes" and an additional 7 minor clusters.[39] Cattell then narrowed this down to 35 terms, and later added a 36th factor in the form of an IQ measure. Through factor analysis from 1945 to 1948, he created 11- or 12-factor solutions.[40][41][42]

In 1947, Hans Eysenck of University College London published his book Dimensions of Personality.
He posited that the two most important personality dimensions were "Extraversion" and "Neuroticism", a term that he coined.[43]

In July 1949, Donald Fiske of the University of Chicago used 22 terms adapted from Cattell's 1947 study and, through surveys of male university students and statistical analysis, derived five factors: "Social Adaptability", "Emotional Control", "Conformity", "Inquiring Intellect", and "Confident Self-expression".[44] In the same year, Cattell, with Maurice Tatsuoka and Herbert Eber, found 4 additional factors, which they believed consisted of information that could only be provided through self-rating. With this understanding, they created the sixteen-factor 16PF Questionnaire.[45][46][47][48][49]

In 1953, John W. French of Educational Testing Service published an extensive meta-analysis of personality trait factor studies.[50]

In 1957, Ernest Tupes of the United States Air Force undertook a personality trait study of US Air Force officers. Each was rated by their peers using Cattell's 35 terms (or in some cases, the 30 most reliable terms).[51][52] In 1958, Tupes and Raymond Christal began a US Air Force study by taking 37 personality factors and other data found in Cattell's 1947 paper, Fiske's 1949 paper, and Tupes' 1957 paper.[53] Through statistical analysis, they derived five factors they labeled "Surgency", "Agreeableness", "Dependability", "Emotional Stability", and "Culture".[54][55] In addition to the influence of Cattell's and Fiske's work, they strongly noted the influence of French's 1953 study.[54] Tupes and Christal further tested and explained their 1958 work in a 1961 paper.[56][14]

Warren Norman[57] of the University of Michigan replicated Tupes and Christal's work in 1963. He relabeled "Surgency" as "Extroversion or Surgency", and "Dependability" as "Conscientiousness". He also found four subordinate scales for each factor.[15] Norman's paper was much more widely read than Tupes and Christal's papers had been. Norman's later Oregon Research Institute colleague Lewis Goldberg continued this work.[58]

In the 4th edition of the 16PF Questionnaire, released in 1968, 5 "global factors" derived from the 16 factors were identified: "Extraversion", "Independence", "Anxiety", "Self-control" and "Tough-mindedness".[59] 16PF advocates have since called these "the original Big 5".[60]

During the 1970s, the changing zeitgeist made publication of personality research difficult. In his 1968 book Personality and Assessment, Walter Mischel asserted that personality instruments could not predict behavior with a correlation of more than 0.3. Social psychologists like Mischel argued that attitudes and behavior were not stable, but varied with the situation, and claimed that predicting behavior from personality instruments was impossible.

In 1978, Paul Costa and Robert McCrae of the National Institutes of Health published a book chapter describing their Neuroticism-Extroversion-Openness (NEO) model, based on the three factors in its name.[61] They used Eysenck's concept of "Extroversion" rather than Carl Jung's.[62] Each factor had six facets. The authors expanded their explanation of the model in subsequent papers. Also in 1978, the British psychologist Peter Saville of Brunel University applied statistical analysis to 16PF results and determined that the model could be reduced to five factors: "Anxiety", "Extraversion", "Warmth", "Imagination" and "Conscientiousness".[63]

At a 1980 symposium in Honolulu, Lewis Goldberg, Naomi Takemoto-Chock, Andrew Comrey, and John M.
Digman reviewed the available personality instruments of the day.[64] In 1981, Digman and Takemoto-Chock of the University of Hawaii reanalysed data from Cattell, Tupes, Norman, Fiske and Digman. They re-affirmed the validity of the five factors, naming them "Friendly Compliance vs. Hostile Non-compliance", "Extraversion vs. Introversion", "Ego Strength vs. Emotional Disorganization", "Will to Achieve" and "Intellect". They also found weak evidence for the existence of a sixth factor, "Culture".[65]

Peter Saville and his team included the five-factor "Pentagon" model as part of the Occupational Personality Questionnaires (OPQ) in 1984. This was the first commercially available Big Five test.[66] Its factors are "Extroversion", "Vigorous", "Methodical", "Emotional Stability", and "Abstract".[67] This was closely followed by another commercial test, the NEO PI three-factor personality inventory, published by Costa and McCrae in 1985. It used the three NEO factors. The methodology employed in constructing the NEO instruments has since been subject to critical scrutiny.[68]: 431–33

Emerging methodologies increasingly confirmed personality theories during the 1980s. Though personality instruments generally failed to predict single instances of behavior, researchers found that they could predict patterns of behavior by aggregating large numbers of observations.[69] As a result, correlations between personality and behavior increased substantially, and it became clear that "personality" did in fact exist.[70]

In 1992, the NEO PI evolved into the NEO PI-R, adding the factors "Agreeableness" and "Conscientiousness"[58] and becoming a Big Five instrument. This set the names for the factors that are now most commonly used. The NEO maintainers call their model the "Five Factor Model" (FFM). Each NEO personality dimension has six subordinate facets.

Wim Hofstee at the University of Groningen used a lexical-hypothesis approach with the Dutch language to develop what became the International Personality Item Pool in the 1990s. Further development in Germany and the United States saw the pool based on three languages. Its questions and results have been mapped to various Big Five personality typing models.[71][72]

Kibeom Lee and Michael Ashton released a book describing their HEXACO model in 2004.[73] It adds a sixth factor, "Honesty-Humility", to the five (which it calls "Emotionality", "Extraversion", "Agreeableness", "Conscientiousness", and "Openness to Experience"). Each of these factors has four facets.

In 2007, Colin DeYoung, Lena C. Quilty and Jordan Peterson concluded that the 10 aspects of the Big Five may have distinct biological substrates.[37] This was derived through factor analyses of two data samples with the International Personality Item Pool, followed by cross-correlation with scores derived from 10 genetic factors identified as underlying the shared variance among the Revised NEO Personality Inventory facets.[74]

By 2009, personality and social psychologists generally agreed that both personal and situational variables are needed to account for human behavior.[75]

A FFM-associated test was used by Cambridge Analytica, and was part of the "psychographic profiling"[76] controversy during the 2016 US presidential election.[77][78]

When factor analysis is applied to personality survey data, semantic associations between aspects of personality and specific terms are often applied to the same person. For example, someone described as conscientious is more likely to be described as "always prepared" rather than "messy".
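To make this clustering idea concrete, here is a minimal sketch, not taken from the source, of a lexical-style factor analysis on simulated adjective ratings; the adjective list, the loading values, and the use of scikit-learn's FactorAnalysis are all illustrative assumptions.

```python
# Toy sketch: recovering broad factors from adjective ratings via factor analysis.
# The adjectives, loadings, and data are simulated, not real lexical-study data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people = 1000

# Two hypothetical latent traits drive the ratings (think "conscientiousness"
# and "extraversion"); real studies recover five such factors.
latent = rng.normal(size=(n_people, 2))

adjectives = ["prepared", "organized", "messy", "talkative", "sociable", "quiet"]
loadings = np.array([
    [0.8, 0.0],   # prepared   <- loads on trait 1
    [0.7, 0.0],   # organized  <- loads on trait 1
    [-0.8, 0.0],  # messy      <- reverse-keyed on trait 1
    [0.0, 0.8],   # talkative  <- loads on trait 2
    [0.0, 0.7],   # sociable   <- loads on trait 2
    [0.0, -0.8],  # quiet      <- reverse-keyed on trait 2
])
ratings = latent @ loadings.T + 0.4 * rng.normal(size=(n_people, len(adjectives)))

fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
# Adjectives that describe the same people cluster onto the same factor:
for adj, row in zip(adjectives, fa.components_.T):
    print(f"{adj:10s} estimated loadings: {row.round(2)}")
```

Real lexical studies apply the same procedure to thousands of adjectives and respondents; the Big Five emerged because five such clusters kept recurring.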
These associations suggest five broad dimensions used in common language to describe the human personality, temperament, and psyche.[16][79]

Beneath each proposed global factor, there are a number of correlated and more specific primary factors. For example, extraversion is typically associated with qualities such as gregariousness, assertiveness, excitement-seeking, warmth, activity, and positive emotions.[80] These traits are not black and white; each one is treated as a spectrum.[81]

Openness to experience is a general appreciation for art, emotion, adventure, unusual ideas, imagination, curiosity, and variety of experience. People who are open to experience are intellectually curious, open to emotion, sensitive to beauty, and willing to try new things. Compared to closed people, they tend to be more creative and more aware of their feelings. They are also more likely to hold unconventional beliefs. Open people can be perceived as unpredictable or lacking focus, and are more likely to engage in risky behaviour or drug-taking.[82] Moreover, individuals with high openness are said to pursue self-actualisation specifically by seeking out intense, euphoric experiences. Conversely, those with low openness seek fulfilment through perseverance and are characterised as pragmatic and data-driven, sometimes even perceived as dogmatic and closed-minded. Some disagreement remains about how to interpret and contextualise the openness factor, as there is a lack of biological support for this particular trait: in brain-imaging studies detecting changes in volume associated with each trait, openness, unlike the other four traits, has not shown a significant association with any brain region.[83]

Conscientiousness is a tendency to be self-disciplined, act dutifully, and strive for achievement against measures or outside expectations. It is related to people's level of impulse control, regulation, and direction. High conscientiousness is often perceived as stubbornness and focus. Low conscientiousness is associated with flexibility and spontaneity, but can also appear as sloppiness and lack of reliability.[85] High conscientiousness indicates a preference for planned rather than spontaneous behaviour.[86]

Extraversion is characterised by breadth of activities (as opposed to depth), surgency from external activities/situations, and energy creation from external means.[87] The trait is marked by pronounced engagement with the external world. Extraverts enjoy interacting with people, and are often perceived as energetic. They tend to be enthusiastic and action-oriented. They possess high group visibility, like to talk, and assert themselves. Extraverts may appear more dominant in social settings than introverts in those settings.[88]

Introverts have lower social engagement and energy levels than extraverts. They tend to seem quiet, low-key, deliberate, and less involved in the social world. Their lack of social involvement should not be interpreted as shyness or depression, but as greater independence of their social world than extraverts. Introverts need less stimulation and more time alone than extraverts. This does not mean that they are unfriendly or antisocial; rather, they are aloof and reserved in social situations.[89]

Generally, people are a combination of extraversion and introversion, with the personality psychologist Hans Eysenck suggesting a model by which differences in their brains produce these traits.[88]: 106

Agreeableness is the general concern for social harmony.
Agreeable individuals value getting along with others. They are generally considerate, kind, generous, trusting and trustworthy, helpful, and willing to compromise their interests with others.[89] Agreeable people also have an optimistic view of human nature, and agreeableness helps people cope with stress.[90]

Disagreeable individuals place self-interest above getting along with others. They are generally unconcerned with others' well-being and are less likely to extend themselves for other people. Sometimes their skepticism about others' motives causes them to be suspicious, unfriendly, and uncooperative.[91] Disagreeable people are often competitive or challenging, which can be seen as argumentative or untrustworthy.[85]

Because agreeableness is a social trait, research has shown that one's agreeableness positively correlates with the quality of relationships with one's team members. Agreeableness also positively predicts transformational leadership skills. In a study conducted among 169 participants in leadership positions in a variety of professions, individuals were asked to take a personality test and to be directly evaluated by the subordinates they supervised. Very agreeable leaders were more likely to be considered transformational rather than transactional. Although the relationship was not strong (r = 0.32, β = 0.28, p < 0.01), it was the strongest of the Big Five traits. However, the same study could not predict leadership effectiveness as evaluated by the leader's direct supervisor.[92]

Conversely, agreeableness has been found to be negatively related to transactional leadership in the military: a study of Asian military units showed that agreeable people are more likely to be poor transactional leaders.[93] Therefore, with further research, organisations may be able to determine an individual's potential for performance based on their personality traits. For instance, in their journal article "Which Personality Attributes Are Most Important in the Workplace?", Paul Sackett and Philip Walmsley claim that conscientiousness and agreeableness are "important to success across many different jobs".[94]

Neuroticism is the tendency to have strong negative emotions, such as anger, anxiety, or depression.[95] It is sometimes called emotional instability, or is reversed and referred to as emotional stability. According to Hans Eysenck's (1967) theory of personality, neuroticism is associated with low tolerance for stress or a strong dislike of change.[96] Neuroticism is a classic temperament trait that had been studied in temperament research for decades before it was adapted by the Five Factor Model.[97] Neurotic people are emotionally reactive and vulnerable to stress. They are more likely to interpret ordinary situations as threatening, and can perceive minor frustrations as hopelessly difficult. Their negative emotional reactions tend to persist for unusually long periods of time, which means they are often in a bad mood. For instance, neuroticism is connected to pessimism toward work, to certainty that work hinders personal relationships, and to higher levels of anxiety from the pressures at work.[98] Furthermore, neurotic people may display more skin-conductance reactivity than calm and composed people.[96][99] These problems in emotional regulation can make a neurotic person think less clearly, make worse decisions, and cope less effectively with stress. Being disappointed with one's life achievements can make one more neurotic and increase one's chances of falling into clinical depression.
Moreover, neurotic individuals tend to experience more negative life events,[95][100] but neuroticism also changes in response to positive and negative life experiences.[95][100] Neurotic people also tend to have worse psychological well-being.[101]

At the other end of the scale, less neurotic individuals are less easily upset and less emotionally reactive. They tend to be calm, emotionally stable, and free from persistent negative feelings. Freedom from negative feelings does not mean that low scorers experience a lot of positive feelings; that is related to extraversion instead.[102]

Neuroticism is similar but not identical to being neurotic in the Freudian sense (i.e., neurosis). Some psychologists prefer to call neuroticism emotional instability, to differentiate it from the term "neurotic" in a career test.

The factors that influence a personality are called the determinants of personality. These factors determine the traits which a person develops in the course of development from a child.

There are debates between temperament researchers and personality researchers as to whether or not biologically based differences define a concept of temperament or a part of personality. The presence of such differences in pre-cultural individuals (such as animals or young infants) suggests that they belong to temperament, since personality is a socio-cultural concept. For this reason developmental psychologists generally interpret individual differences in children as an expression of temperament rather than personality.[103] Some researchers argue that temperaments and personality traits are age-specific demonstrations of virtually the same internal qualities.[104][105] Some believe that early childhood temperaments may become adolescent and adult personality traits as individuals' basic genetic characteristics interact with their changing environments to various degrees.[103][104][106]

Researchers of adult temperament point out that, similarly to sex, age, and mental illness, temperament is based on biochemical systems, whereas personality is a product of the socialisation of an individual possessing these features. Temperament interacts with socio-cultural factors but, like sex and age, cannot be controlled or easily changed by these factors.[107][108][109][110] It is therefore suggested that temperament (neurochemically based individual differences) should be kept as an independent concept for further studies and not be confused with personality (culturally based individual differences, reflected in the origin of the word "persona", Latin for a social mask).[111][112]

Moreover, temperament refers to dynamic features of behaviour (energetic, tempo, sensitivity, and emotionality-related), whereas personality is to be considered a psycho-social construct comprising the content characteristics of human behaviour (such as values, attitudes, habits, preferences, personal history, self-image).[108][109][110] Temperament researchers point out that the lack of attention to surviving temperament research by the creators of the Big Five model led to an overlap between its dimensions and dimensions described in multiple temperament models much earlier. For example, neuroticism reflects the traditional temperament dimension of emotionality, studied by Jerome Kagan's group since the 1960s.
Extraversion was also first introduced as a temperament type by Jung in the 1920s.[110][113]

A 1996 behavioural genetics study of twins suggested that heritability (the degree of variation in a trait within a population that is due to genetic variation in that population) and environmental factors both influence all five factors to the same degree.[114] Among four twin studies examined in 2003, the mean percentage for heritability was calculated for each trait, and it was concluded that heritability influenced the five factors broadly. The self-report measures were as follows: openness to experience was estimated to have a 57% genetic influence, extraversion 54%, conscientiousness 49%, neuroticism 48%, and agreeableness 42%.[115]

The Big Five personality traits have been assessed in some non-human species, but the methodology is debatable. In one series of studies, human ratings of chimpanzees using the Hominoid Personality Questionnaire revealed factors of extraversion, conscientiousness and agreeableness, as well as an additional factor of dominance, across hundreds of chimpanzees in zoological parks, a large naturalistic sanctuary, and a research laboratory. Neuroticism and openness factors were found in an original zoo sample, but were not replicated in a new zoo sample or in other settings (perhaps reflecting the design of the CPQ).[116] A review of studies found that markers for the three dimensions extraversion, neuroticism, and agreeableness were found most consistently across different species, followed by openness; only chimpanzees showed markers for conscientious behavior.[117]

A study completed in 2020 concluded that dolphins have some personality traits similar to humans'. Both are large-brained intelligent animals but have evolved separately for millions of years.[118]

Research on the Big Five, and personality in general, has focused primarily on individual differences in adulthood, rather than in childhood and adolescence, and often includes temperament traits.[103][104][106] Recently, there has been growing recognition of the need to study child and adolescent personality trait development in order to understand how traits develop and change throughout the lifespan.[119]

Recent studies have begun to explore the developmental origins and trajectories of the Big Five among children and adolescents, especially those that relate to temperament.[103][104][106] Many researchers have sought to distinguish between personality and temperament.[120] Temperament often refers to early behavioral and affective characteristics that are thought to be driven primarily by genes.[120] Models of temperament often include four trait dimensions: surgency/sociability, negative emotionality, persistence/effortful control, and activity level.[120] Some of these differences in temperament are evident at, if not before, birth.[103][104] For example, both parents and researchers recognize that some newborn infants are peaceful and easily soothed while others are comparatively fussy and hard to calm.[104] Unlike temperament, however, many researchers view the development of personality as occurring gradually throughout childhood.[120] Contrary to some researchers who question whether children have stable personality traits, Big Five or otherwise,[121] most researchers contend that there are significant psychological differences between children that are associated with relatively stable, distinct, and salient behavior patterns.[103][104][106]

The structure, manifestations, and development of the Big Five in childhood and adolescence have
been studied using a variety of methods, including parent- and teacher-ratings,[122][123][124] preadolescent and adolescent self- and peer-ratings,[125][126][127] and observations of parent-child interactions.[106] Results from these studies support the relative stability of personality traits across the human lifespan, at least from preschool age through adulthood.[104][106][127][128] More specifically, research suggests that four of the Big Five, namely Extraversion, Neuroticism, Conscientiousness, and Agreeableness, reliably describe personality differences in childhood, adolescence, and adulthood.[104][106][127][128] However, some evidence suggests that Openness may not be a fundamental, stable part of childhood personality. Although some researchers have found that Openness in children and adolescents relates to attributes such as creativity, curiosity, imagination, and intellect,[129] many researchers have failed to find distinct individual differences in Openness in childhood and early adolescence.[104][106] Potentially, Openness may (a) manifest in unique, currently unknown ways in childhood or (b) only manifest as children develop socially and cognitively.[104][106] Other studies have found evidence for all of the Big Five traits in childhood and adolescence, as well as two other child-specific traits: Irritability and Activity.[130] Despite these specific differences, the majority of findings suggest that personality traits, particularly Extraversion, Neuroticism, Conscientiousness, and Agreeableness, are evident in childhood and adolescence and are associated with distinct social-emotional patterns of behavior that are largely consistent with adult manifestations of those same personality traits.[104][106][127][128] Some researchers have proposed that youth personality is best described by six trait dimensions: neuroticism, extraversion, openness to experience, agreeableness, conscientiousness, and activity.[131] Despite some preliminary evidence for this "Little Six" model,[120][131] research in this area has been delayed by a lack of available measures.

Previous research has found evidence that most adults become more agreeable and conscientious and less neurotic as they age.[132] This has been referred to as the maturation effect.[105] Many researchers have sought to investigate how trends in adult personality development compare to trends in youth personality development.[131] Two main population-level indices have been important in this area of research: rank-order consistency and mean-level consistency.
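As a hedged illustration of how these two indices might be computed (their formal definitions follow below), here is a minimal sketch for one trait measured at two time points; the simulated scores, sample size, and parameter values are assumptions, not data from the cited studies.

```python
# Toy sketch: rank-order vs. mean-level consistency for one personality trait
# measured at two time points. All data here are simulated.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 500

time1 = rng.normal(loc=50, scale=10, size=n)               # trait scores at time 1
time2 = 0.7 * time1 + rng.normal(loc=18, scale=7, size=n)  # correlated, higher mean

# Rank-order consistency: do individuals keep their relative placement in the group?
rho, _ = spearmanr(time1, time2)
print(f"rank-order consistency (Spearman rho): {rho:.2f}")

# Mean-level change: does the group as a whole rise or fall on the trait?
print(f"mean-level change: {time2.mean() - time1.mean():+.2f} points")
```

In this simulation the group mean rises while individuals largely keep their relative ordering, showing that the two indices capture different kinds of stability.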
Rank-order consistency indicates the relative placement of individuals within a group.[133] Mean-level consistency indicates whether groups increase or decrease on certain traits throughout the lifetime.[132]

Findings from these studies indicate that, consistent with adult personality trends, youth personality becomes increasingly stable in terms of rank-order throughout childhood.[131] Unlike adult personality research, which indicates that people become more agreeable, conscientious, and emotionally stable with age,[132] some findings in youth personality research have indicated that mean levels of agreeableness, conscientiousness, and openness to experience decline from late childhood to late adolescence.[131] The disruption hypothesis, which proposes that biological, social, and psychological changes experienced during youth result in temporary dips in maturity, has been proposed to explain these findings.[120][131]

In Big Five studies, extraversion has been associated with surgency.[103] Children with high Extraversion are energetic, talkative, social, and dominant with children and adults, whereas children with low Extraversion tend to be quiet, calm, inhibited, and submissive to other children and adults.[104] Individual differences in extraversion first manifest in infancy as varying levels of positive emotionality.[134] These differences in turn predict social and physical activity during later childhood and may represent, or be associated with, the behavioral activation system.[103][104] In children, Extraversion/Positive Emotionality includes four sub-traits: three of these (activity, sociability, and shyness) are similar to the previously described traits of temperament;[135][97] the other is dominance.

Many studies of longitudinal data, which correlate people's test scores over time, and of cross-sectional data, which compare personality levels across different age groups, show a high degree of stability in personality traits during adulthood, especially for Neuroticism, which is often regarded as a temperament trait;[147] this parallels longitudinal findings in temperament research for the same traits.[97] Personality has been shown to stabilize for working-age individuals within about four years of starting work. There is also little evidence that adverse life events have any significant impact on the personality of individuals.[148] More recent research and meta-analyses of previous studies, however, indicate that change occurs in all five traits at various points in the lifespan. The new research shows evidence for a maturation effect: on average, levels of agreeableness and conscientiousness typically increase with time, whereas extraversion, neuroticism, and openness tend to decrease.[149] Research has also demonstrated that changes in Big Five personality traits depend on the individual's current stage of development. For example, levels of agreeableness and conscientiousness demonstrate a negative trend during childhood and early adolescence before trending upwards during late adolescence and into adulthood.[119] In addition to these group effects, there are individual differences: different people demonstrate unique patterns of change at all stages of life.[150]

In addition, some research (Fleeson, 2001) suggests that the Big Five should not be conceived of as dichotomies (such as extraversion vs. introversion) but as continua. Each individual has the capacity to move along each dimension as circumstances (social or temporal) change.
Someone is therefore not simply on one end of each trait dichotomy but is a blend of both, exhibiting some characteristics more often than others.[151]

Research on personality with growing age has suggested that as individuals enter their elder years (79–86), those with lower IQ see a rise in extraversion, but a decline in conscientiousness and physical well-being.[152]

Some cross-cultural research has shown some patterns of gender differences in responses to the NEO-PI-R and the Big Five Inventory.[153][154] For example, women consistently report higher Neuroticism, Agreeableness, warmth (an extraversion facet) and openness to feelings, and men often report higher assertiveness (a facet of extraversion) and openness to ideas, as assessed by the NEO-PI-R.[155]

A study of gender differences in 55 nations using the Big Five Inventory found that women tended to be somewhat higher than men in neuroticism, extraversion, agreeableness, and conscientiousness. The difference in neuroticism was the most prominent and consistent, with significant differences found in 49 of the 55 nations surveyed.[156]

Gender differences in personality traits are largest in prosperous, healthy, and more gender-egalitarian nations. The explanation given by the researchers of a 2001 paper is that actions by women in individualistic, egalitarian countries are more likely to be attributed to their personality, rather than to the ascribed gender roles found within collectivist, traditional countries.[155]

Measured differences in the magnitude of sex differences between more and less developed world regions were due to changes in the measured personalities of men, not women, in these respective regions. That is, men in highly developed world regions were less neurotic, less extraverted, less conscientious and less agreeable compared to men in less developed world regions. Women, on the other hand, tended not to differ in personality traits across regions.[156]

Frank Sulloway argues that firstborns are more conscientious, more socially dominant, less agreeable, and less open to new ideas compared to siblings born later. Large-scale studies using random samples and self-report personality tests, however, have found milder effects than Sulloway claimed, or no significant effects of birth order on personality.[157][158] A study using the Project Talent data, a large-scale representative survey of American high school students with 272,003 eligible participants, found statistically significant but very small effects (the average absolute correlation between birth order and personality was .02) of birth order on personality, such that firstborns were slightly more conscientious, dominant, and agreeable, while also being less neurotic and less sociable.[159] Parental socioeconomic status and participant gender had much larger correlations with personality.

In 2002, the Journal of Psychology published a study exploring the relationship between the five-factor model and Universal-Diverse Orientation (UDO) in counselor trainees (Thompson, R., Brossart, D., and Mivielle, A., 2002). UDO is a social attitude marked by a strong awareness and/or acceptance of the similarities and differences among individuals (Miville, M., Romas, J., Johnson, J., and Lon, R.
2002). The study found that counselor trainees who are more open to creative expression among individuals (a facet of Openness to Experience, Openness to Aesthetics) are more likely to work with a diverse group of clients and to feel comfortable in that role.[160]

Individual differences in personality traits are widely understood to be conditioned by cultural context.[88]: 189

Research into the Big Five has been pursued in a variety of languages and cultures, such as German,[161] Chinese,[162] and South Asian.[163][164] For example, Thompson has claimed to find the Big Five structure across several cultures using an international English-language scale.[165] Cheung, van de Vijver, and Leong (2011) suggest, however, that the Openness factor is particularly unsupported in Asian countries and that a different fifth factor is identified.[166]

Sopagna Eap et al. (2008) found that European-American men scored higher than Asian-American men on extroversion, conscientiousness, and openness, while Asian-American men scored higher than European-American men on neuroticism.[167] Benet-Martínez and Karakitapoglu-Aygün (2003) arrived at similar results.[168]

Recent work has found relationships between Geert Hofstede's cultural factors (Individualism, Power Distance, Masculinity, and Uncertainty Avoidance) and the average Big Five scores in a country.[169] For instance, the degree to which a country values individualism correlates with its average extraversion, whereas people living in cultures that accept large inequalities in their power structures tend to score somewhat higher on conscientiousness.[170][171]

A 2017 study found that countries' average personality trait levels are correlated with their political systems: countries with higher average trait Openness tended to have more democratic institutions, an association that held even after factoring out other relevant influences such as economic development.[172]

Attempts to replicate the Big Five have succeeded in some countries but not in others. Some research suggests, for instance, that Hungarians do not have a single agreeableness factor.[173] Other researchers have found evidence for agreeableness but not for other factors.[174]

Some diseases cause changes in personality. For example, although gradual memory impairment is the hallmark feature of Alzheimer's disease, a systematic review of personality changes in Alzheimer's disease by Robins Wahlin and Byrne, published in 2011, found systematic and consistent trait changes mapped to the Big Five. The largest change observed was a decrease in conscientiousness. The next most significant changes were an increase in neuroticism and a decrease in extraversion, but openness and agreeableness also decreased. These changes in personality could assist with early diagnosis.[175]

A study published in 2023 found that the Big Five personality traits may also influence the quality of life experienced by people with Alzheimer's disease and other dementias after diagnosis. In this study, people with dementia who had lower levels of Neuroticism self-reported higher quality of life than those with higher levels of Neuroticism, while those with higher levels of the other four traits self-reported higher quality of life than those with lower levels of these traits.
This suggests that, as well as assisting with early diagnosis, the Big Five personality traits could help identify people with dementia who are potentially more vulnerable to adverse outcomes, and could inform personalized care planning and interventions.[176]

As of 2002, there were over fifty published studies relating the FFM to personality disorders.[177] Since that time, a number of additional studies have expanded on this research base and provided further empirical support for understanding the DSM personality disorders in terms of the FFM domains.[178]

In her review of the personality disorder literature published in 2007, Lee Anna Clark asserted that "the five-factor model of personality is widely accepted as representing the higher-order structure of both normal and abnormal personality traits".[179] However, other researchers disagree that this model is widely accepted (see the section Critique below) and suggest that it simply replicates early temperament research.[110][180] Notably, FFM publications never compare their findings to temperament models, even though temperament and mental disorders (especially personality disorders) are thought to be based on the same neurotransmitter imbalances, just to varying degrees.[110][181][182][183]

The five-factor model was claimed to significantly predict all ten personality disorder symptoms and to outperform the Minnesota Multiphasic Personality Inventory (MMPI) in the prediction of borderline, avoidant, and dependent personality disorder symptoms.[184] However, most predictions related to an increase in Neuroticism and a decrease in Agreeableness, and therefore did not differentiate between the disorders very well.[185]

Converging evidence from several nationally representative studies has established three classes of mental disorders which are especially common in the general population: depressive disorders (e.g., major depressive disorder (MDD), dysthymic disorder),[187] anxiety disorders (e.g., generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), panic disorder, agoraphobia, specific phobia, and social phobia),[187] and substance use disorders (SUDs).[188][189] The Five Factor personality profiles of users of different drugs may differ.[190] For example, the typical profile for heroin users is N⇑, O⇑, A⇓, C⇓, whereas for ecstasy users the high level of N is not expected but E is higher: E⇑, O⇑, A⇓, C⇓.[190]

These common mental disorders (CMDs) have been empirically linked to the Big Five personality traits, neuroticism in particular.
Numerous studies have found that scoring high on neuroticism significantly increases one's risk of developing a common mental disorder.[191][192] A large-scale meta-analysis (n > 75,000) examining the relationship between all of the Big Five personality traits and common mental disorders found that low conscientiousness yielded consistently strong effects for each common mental disorder examined (i.e., MDD, dysthymic disorder, GAD, PTSD, panic disorder, agoraphobia, social phobia, specific phobia, and SUD).[193] This finding parallels research on physical health, which has established that conscientiousness is the strongest personality predictor of reduced mortality and is highly negatively correlated with making poor health choices.[194][195] With regard to the other personality domains, the meta-analysis found that all common mental disorders examined were defined by high neuroticism, most exhibited low extraversion, only SUD was linked to agreeableness (negatively), and no disorders were associated with Openness.[193] A meta-analysis of 59 longitudinal studies showed that high neuroticism predicted the development of anxiety, depression, substance abuse, psychosis, schizophrenia, and non-specific mental distress, even after adjustment for baseline symptoms and psychiatric history.[196]

Five major models have been proposed to explain the nature of the relationship between personality and mental illness. There is currently no single "best model", as each of them has received at least some empirical support. These models are not mutually exclusive: more than one may be operating for a particular individual, and various mental disorders may be explained by different models.[196][197]

To examine how the Big Five personality traits are related to subjective health outcomes (positive and negative mood, physical symptoms, and general health concern) and objective health conditions (chronic illness, serious illness, and physical injuries), Jasna Hudek-Knezevic and Igor Kardum conducted a study with a sample of 822 healthy volunteers (438 women and 384 men).[201] Of the Big Five personality traits, they found neuroticism most related to worse subjective health outcomes and optimistic control most related to better subjective health outcomes. For objective health conditions, the associations found were weak, except that neuroticism significantly predicted chronic illness, whereas optimistic control was more closely related to physical injuries caused by accident.[201]

Being highly conscientious may add as much as five years to one's life.[195] The Big Five personality traits also predict positive health outcomes.[202] In an elderly Japanese sample, conscientiousness, extraversion, and openness were related to lower risk of mortality.[203]

Higher conscientiousness is associated with lower obesity risk. In already obese individuals, higher conscientiousness is associated with a higher likelihood of becoming non-obese over a five-year period.[204]

Personality plays an important role in academic achievement. A study of 308 undergraduates who completed the Five Factor Inventory and the Inventory of Learning Processes and reported their GPA suggested that conscientiousness and agreeableness have a positive relationship with all types of learning styles (synthesis-analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism shows an inverse relationship. Moreover, extraversion and openness were proportional to elaborative processing.
The Big Five personality traits accounted for 14% of the variance in GPA, suggesting that personality traits make some contribution to academic performance. Furthermore, reflective learning styles (synthesis-analysis and elaborative processing) were able to mediate the relationship between openness and GPA. These results indicate that intellectual curiosity significantly enhances academic performance if students combine their scholarly interest with thoughtful information processing.[205] A recent study of Israeli high-school students found that those in the gifted program systematically scored higher on openness and lower on neuroticism than those not in the gifted program. Although state anxiety is not a measure of the Big Five, gifted students also reported less of it than students not in the gifted program.[206] Specific Big Five personality traits predict learning styles in addition to academic success. Studies conducted on college students have concluded that hope, which is linked to agreeableness,[207] conscientiousness, neuroticism, and openness,[207] has a positive effect on psychological well-being. Individuals high in neurotic tendencies are less likely to display hopeful tendencies, and such tendencies are negatively associated with well-being.[208] Personality can be somewhat flexible, and measuring the Big Five traits of individuals as they enter certain stages of life may predict their educational identity. Recent studies have suggested that an individual's personality is likely to affect their educational identity.[209] Learning styles have been described as "enduring ways of thinking and processing information".[205] In 2008, the Association for Psychological Science (APS) commissioned a report concluding that no significant evidence exists that learning-style assessments should be included in the education system.[210] Thus it is premature, at best, to conclude that the evidence links the Big Five to "learning styles", or "learning styles" to learning itself. However, the APS report also suggested that not all possible learning styles have been identified and that there could exist learning styles worthy of being included in educational practices. There are studies that conclude that personality and thinking styles may be intertwined in ways that link thinking styles to the Big Five personality traits.[211] There is no general consensus on the number or specifications of particular learning styles, but there have been many different proposals. As one example, Schmeck, Ribich, and Ramanaiah (1997) defined four types of learning styles:[212] When all four facets are implicated within the classroom, they will each likely improve academic achievement.[205] By identifying learning strategies in individuals, learning and academic achievement can be improved, and a deeper understanding of information processing can be gained.[213] This model asserts that students develop either agentic/shallow processing or reflective/deep processing. Deep processors are more often found to be more conscientious, intellectually open, and extraverted than shallow processors.
Deep processing is associated with appropriate study methods (methodical study) and a stronger ability to analyze information (synthesis analysis), whereas shallow processors prefer structured fact retention learning styles and are better suited for elaborative processing.[205] The main functions of these four specific learning styles are as follows: Openness has been linked to learning styles that often lead to academic success and higher grades, like synthesis analysis and methodical study. Because conscientiousness and openness have been shown to predict all four learning styles, this suggests that individuals who possess characteristics like discipline, determination, and curiosity are more likely to engage in all of the above learning styles.[205] According to the research carried out by Komarraju, Karau, Schmeck & Avdic (2011), conscientiousness and agreeableness are positively related to all four learning styles, whereas neuroticism was negatively related to those four. Furthermore, extraversion and openness were only positively related to elaborative processing, and openness itself correlated with higher academic achievement.[205] In addition, a previous study by psychologist Mikael Jensen has shown relationships between the Big Five personality traits, learning, and academic achievement. According to Jensen, all personality traits, except neuroticism, are associated with learning goals and motivation. Openness and conscientiousness influence individuals to learn to a high degree unrecognized, while extraversion and agreeableness have similar effects.[214] Conscientiousness and neuroticism also influence individuals to perform well in front of others for a sense of credit and reward, while agreeableness forces individuals to avoid this strategy of learning.[214] Jensen's study concludes that individuals who score high on the agreeableness trait will likely learn just to perform well in front of others.[214] Besides openness, all Big Five personality traits helped predict the educational identity of students. Based on these findings, scientists are beginning to see that the Big Five traits might have a large influence on academic motivation, which in turn helps predict a student's academic performance.[209] Some authors have suggested that the Big Five personality traits combined with learning styles can help predict some of the variation in the academic performance and academic motivation of an individual, which can then influence their academic achievements.[215] This may be because individual differences in personality represent stable approaches to information processing. For instance, conscientiousness has consistently emerged as a stable predictor of success in exam performance, largely because conscientious students experience fewer study delays.[209] Conscientiousness shows a positive association with the four learning styles because students with high levels of conscientiousness develop focused learning strategies and appear to be more disciplined and achievement-oriented. Personality and learning styles are both likely to play significant roles in influencing academic achievement. College students (308 undergraduates) completed the Five Factor Inventory and the Inventory of Learning Processes and reported their grade point average.
Two of the Big Five traits, conscientiousness and agreeableness, were positively related to all four learning styles (synthesis analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism was negatively related to all four learning styles. In addition, extraversion and openness were positively related to elaborative processing. The Big Five together explained 14% of the variance in grade point average (GPA), and learning styles explained an additional 3%, suggesting that both personality traits and learning styles contribute to academic performance. Further, the relationship between openness and GPA was mediated by reflective learning styles (synthesis-analysis and elaborative processing). These latter results suggest that being intellectually curious fully enhances academic performance when students combine this scholarly interest with thoughtful information processing. Implications of these results are discussed in the context of teaching techniques and curriculum design. When the relationship between the five-factor personality traits and academic achievement in distance education settings was examined, in brief, the openness personality trait was found to be the most important variable, having a positive relationship with academic achievement in distance education environments. In addition, it was found that the self-discipline, extraversion, and adaptability personality traits are generally positively related to academic achievement. Neuroticism emerged as the most important personality trait with a negative relationship to academic achievement. The results generally show that individuals who are organized, methodical, and determined, and who are oriented toward new ideas and independent thinking, have increased success in distance education environments. On the other hand, it can be said that individuals with tendencies toward anxiety and stress generally have lower academic success.[216][217][218] Researchers have long suggested that work is more likely to be fulfilling to the individual and beneficial to society when there is alignment between the person and their occupation.[219] For instance, software programmers and scientists often rank high on Openness to experience and tend to be intellectually curious, think in symbols and abstractions, and find repetition boring.[220] Psychologists and sociologists rank higher on Agreeableness and Openness than economists and jurists.[221] It is believed that the Big Five traits are predictors of future performance outcomes to varying degrees. Specific facets of the Big Five traits are also thought to be indicators of success in the workplace, and each individual facet can give a more precise indication as to the nature of a person. Different facets are needed for different occupations, and various facets of the Big Five traits can predict the success of people in different environments. An individual's estimated success in jobs that require public speaking versus one-on-one interactions will differ according to that person's particular facet levels.[36] Job outcome measures include job and training proficiency and personnel data.[222] However, research demonstrating such prediction has been criticized, in part because of the apparently low correlation coefficients characterizing the relationship between personality and job performance. A 2007 article states: "The problem with personality tests is ...
that the validity of personality measures as predictors of job performance is often disappointingly low. The argument for using personality tests to predict performance does not strike me as convincing in the first place."[223] Such criticisms were put forward by Walter Mischel,[224] whose publication caused a two-decade-long crisis in personality psychometrics. However, later work demonstrated that the correlations obtained by psychometric personality researchers were actually very respectable by comparative standards,[225] and that the economic value of even incremental increases in prediction accuracy was exceptionally large, given the vast differences in performance among those who occupy complex job positions.[226] Research has suggested that individuals who are considered leaders typically exhibit lower levels of neurotic traits, maintain higher levels of openness, balanced levels of conscientiousness, and balanced levels of extraversion.[227][228][229] Further studies have linked professional burnout to neuroticism, and extraversion to enduring positive work experience.[230] Studies have linked national innovation, leadership, and ideation to openness to experience and conscientiousness.[231] Occupational self-efficacy has also been shown to be positively correlated with conscientiousness and negatively correlated with neuroticism.[228] Some research has also suggested that the conscientiousness of a supervisor is positively associated with an employee's perception of abusive supervision.[232] Others have suggested that low agreeableness and high neuroticism are traits more related to abusive supervision.[233] Openness is positively related to proactivity at the individual and the organizational levels and is negatively related to team and organizational proficiency. These effects were found to be completely independent of one another. Openness also runs counter to Conscientiousness, with which it correlates negatively.[234] Agreeableness is negatively related to individual task proactivity. Typically this is associated with lower career success and being less able to cope with conflict. However, there are benefits to the Agreeableness personality trait, including higher subjective well-being, more positive interpersonal interactions and helping behavior, lower conflict, and lower deviance and turnover.[234] Furthermore, attributes related to Agreeableness are important for workforce readiness for a variety of occupations and performance criteria.[94] Research has suggested that those who are high in agreeableness are not as successful in accumulating income.[235] Extraversion results in greater leadership emergence and effectiveness, as well as higher job and life satisfaction. However, extraversion can lead to more impulsive behaviors, more accidents, and lower performance in certain jobs.[234] Conscientiousness is highly predictive of job performance in general,[94] and is positively related to all forms of work role performance, including job performance and job satisfaction, greater leadership effectiveness, and lower turnover and deviant behaviors. However, this personality trait is associated with reduced adaptability, lower learning in the initial stages of skill acquisition, and greater interpersonal abrasiveness when agreeableness is also low.[234] Nor is more extreme conscientiousness always necessarily better, as there does appear to be a link between conscientiousness and obsessive-compulsive personality disorder (OCPD).
Selecting employees for a moderate level of conscientiousness may actually provide the best occupational outcome.[236] Neuroticism is negatively related to all forms of work role performance. It also increases the chance of engaging in risky behaviors.[237][234] Two theories have been integrated in an attempt to account for these differences in work role performance. Trait activation theory posits that within a person trait levels predict future behavior, that trait levels differ between people, and that work-related cues activate traits, which leads to work-relevant behaviors. Role theory suggests that role senders provide cues to elicit desired behaviors. In this context, role senders provide workers with cues for expected behaviors, which in turn activates personality traits and work-relevant behaviors. In essence, expectations of the role sender lead to different behavioral outcomes depending on the trait levels of individual workers, and because people differ in trait levels, responses to these cues will not be universal.[237] Since 2020, remote work has become increasingly prevalent, brought on by the COVID-19 pandemic, and research has shown that the Big Five personality traits influence remote work as well. Gavoille and Hazans found that conscientiousness (β=0.06) and openness to experience are both positively correlated with willingness to work and worker productivity within a remote setting, with openness to experience being less significant (β=0.021). This contrasts with extraversion (β=-0.038), which negatively correlates with willingness to work and openness. Another finding was that gender played no role in the relationships of conscientiousness and extraversion with willingness to work from home.[238] Similarly, Wright investigated the influence of the Big Five on soft skills in the remote workplace, such as effort and cooperation. She divided soft skills into two groups, Task Performance and Contextual Performance, each with three subgroups. Task Performance was more aligned with specific job responsibilities and the cognitive tasks associated with the job; its three subgroups were Job Knowledge, Organizational Skills, and Efficiency. Wright found that Job Knowledge did not correlate with any Big Five trait, that Organizational Skills correlated significantly only with Conscientiousness (t=7.952, p=.001), and that Efficiency correlated significantly with Conscientiousness (t=3.8, p=.001) and, negatively, with Neuroticism (t=-2.6, p=.008). Contextual Performance is concerned with requirements beyond the core job, such as perceived effort and cooperation; its subgroups were Persistent Effort, Cooperation, and Organizational Conscientiousness. Wright found that Persistent Effort was positively correlated with Openness (t=2.4, p=.014) and Conscientiousness (t=3.1, p=.002) and negatively correlated with Neuroticism (t=-3.2, p=.001); that Cooperation was positively correlated with Extraversion (t=2.6, p=.009) and Conscientiousness (t=2.82, p=.005); and that Organizational Conscientiousness was positively correlated with Agreeableness (t=4.059, p<.001) and Conscientiousness (t=4.511, p<.001).[239] Researchers have also asked whether the Big Five affect remote-worker burnout, and what effect the different traits have on worker health and engagement.
Olsen et al. found that when remote work days are increased, individuals high in extraversion start to struggle with work engagement (β=-.094, p<.03), and individuals with higher neuroticism are more likely to have poorer health (p=-.23), lower work engagement (p=-.18), and more sick leave (p=.38).[240] However, Olsen found that conscientiousness, coupled with an increase in remote work days, can lead to a decrease in general health, contrary to the benefits of this trait listed above. Similarly, Para et al. found that individuals with higher Neuroticism (β=.138, p<.05) also tend to have higher Remote Work Exhaustion (RWE). They also found that conscientiousness (β=-.336, p<.001) and agreeableness (β=-.267, p<.001) were negatively correlated with RWE, meaning that individuals high in these traits were more resilient against RWE over long spans of remote work days.[241] The authors attributed the conscientiousness finding to such individuals being hard-working and dependable, and the agreeableness finding to the circumstances under which the study was completed, the at-home quarantine due to COVID-19: individuals with high agreeableness did well with the forced contact due to quarantine, and this transferred over to their work. Various researchers have explored the association between the Big Five and romantic relationships in terms of relationship satisfaction.[242][243][244] A meta-analysis showed a higher level of marital satisfaction when one's spouse showed lower levels of neuroticism (.22) and higher levels of agreeableness (.15) and conscientiousness (.12). The correlations were only weak, but the level of satisfaction was the same for both genders. Much like the previous meta-analysis, a study on self-reported Big Five traits showed that those with higher levels of agreeableness, emotional stability, conscientiousness, and extraversion had higher levels of marital satisfaction (.20). That same study found that there was little to no difference in marital satisfaction whether the two partners had similar or different levels of trait personality.[245] O’Brien and colleagues[246] examined the association between the Big Five and romantic relationships by investigating participants’ commitment levels. The three levels of commitment are affective commitment (emotional attachment), continuance commitment (financial considerations), and normative commitment (ethical and moral responsibilities). The commitment levels were based on the taxonomy of organizational commitment[247] and the conceptual model of marital commitment of Johnson[248] and Johnson et al.[249] A total of 122 individuals currently in a committed relationship responded to a 50-item personality questionnaire from the International Personality Item Pool (IPIP, 2006) and a questionnaire on commitment modified from Allen.[247] The key findings showed that participants high in Extraversion reported high levels of affective commitment, and that participants high on Extraversion were also higher on Openness to Experience and affective commitment. Conscientiousness demonstrated a negative relationship with continuance commitment. While Extraversion and Agreeableness exhibited a positive correlation with each other, no significant relationships were found between Agreeableness and any of the commitment measures. The findings indicated gender differences in that women with lower levels of Openness to Experience were often paired with partners who scored higher in Extraversion. Men who exhibited strong affective commitment were more likely to be in relationships with women high in Conscientiousness.
Additionally, women whose partners showed high affective commitment tended to be higher in both Conscientiousness and Emotional Stability. Asselmann and Specht[250] examined the association between the Big Five (BFI-S) and romantic relationships through major life events, measured in 2005, 2009, 2013, and 2017 with a sample of 49,932 participants in Germany. Those major life events are (1) moving in with a partner, (2) getting married, (3) getting separated, and (4) getting divorced. The researchers also examined whether the Big Five personality traits play a significant role in romantic relationships. Among the components of a person’s life satisfaction, marital satisfaction (one aspect of romantic relationships) has been shown to matter more than job satisfaction, health satisfaction, and social satisfaction.[251] The key findings from Asselmann and Specht showed that more extraverted individuals were more likely to move in with a partner, as were less agreeable and less emotionally stable women. Men were more extraverted in the years before moving in and became gradually more open and more conscientious after moving in. Less agreeable men were more likely to get married. Individuals who got married became less open in the first three years after the marriage. Women became more extraverted after being separated. Men with lower emotional stability, and women who were both less emotionally stable and more extraverted, were more prone to experiencing relationship breakups. Individuals who got divorced were less agreeable in the years before the divorce. Personality may also change after specific events: for example, both men and women who experienced separation or divorce became less emotionally stable in the following years. The results indicated that high agreeableness was no guarantee of long-lasting romantic relationships, as less agreeable individuals were more likely to experience both positive and negative major romantic events.[250] Getting into a long-term romantic relationship can kick-start personality development in young adults ages 20–30 as they are faced with new social situations and expectations. For instance, high levels of trait neuroticism at the beginning of a relationship can be seen decreasing over the eight years after the relationship has begun, while other Big Five personality traits, such as Conscientiousness and Agreeableness, can be seen increasing in long-term relationships.[252] The Big Five Personality Model also has applications in the study of political psychology, and studies have found links between the Big Five personality traits and political identification.
Several studies have found that individuals who score high in Conscientiousness are more likely to possess a right-wing political identification.[253][254][255] On the opposite end of the spectrum, a strong correlation has been identified between high scores in Openness to Experience and a left-leaning ideology.[253][256][257] The traits of agreeableness, extraversion, and neuroticism have not been consistently linked to either conservative or liberal ideology, with studies producing mixed results, but such traits are promising when analyzing the strength of an individual's party identification.[256][257] However, correlations between the Big Five and political beliefs, while present, tend to be small, with one study finding that correlations ranged from 0.14 to 0.24.[258] The predictive effects of the Big Five personality traits relate mostly to social functioning and rules-driven behavior and are not very specific for the prediction of particular aspects of behavior. For example, it was noted by all temperament researchers that high neuroticism precedes the development of all common mental disorders[196] and is not associated with personality.[111] Further evidence is required to fully uncover the nature of, and differences between, personality traits, temperament, and life outcomes. Social and contextual parameters also play a role in outcomes, and the interaction between the two is not yet fully understood.[259] Though the effect sizes are small, of the Big Five personality traits, high Agreeableness, Conscientiousness, and Extraversion relate to general religiosity, while Openness relates negatively to religious fundamentalism and positively to spirituality. High Neuroticism may be related to extrinsic religiosity, whereas intrinsic religiosity and spirituality reflect Emotional Stability.[260] Several measures of the Big Five exist. The most frequently used measures comprise either items that are self-descriptive sentences[174] or, in the case of lexical measures, items that are single adjectives.[2] Due to the length of sentence-based and some lexical measures, short forms have been developed and validated for use in applied research settings where questionnaire space and respondent time are limited, such as the 40-item balanced International English Big-Five Mini-Markers[165] or a very brief (10-item) measure of the Big Five domains.[262] Research has suggested that some methodologies for administering personality tests are inadequate in length and provide insufficient detail to truly evaluate personality. Usually, longer, more detailed questions will give a more accurate portrayal of personality.[265] At the same time, shorter questionnaires may be sufficient to get a reasonable estimate of Big Five personality scores when questions are carefully selected and statistical imputation is used.[266] The five-factor structure has been replicated in peer reports.[267] However, many of the substantive findings rely on self-reports.
Much of the evidence on the measures of the Big Five relies on self-report questionnaires, which makes self-report bias and falsification of responses difficult to deal with and account for.[263] It has been argued that the Big Five tests do not create an accurate personality profile because the responses given on these tests are not true in all cases and can be falsified.[268] For example, questionnaires are answered by potential employees who might choose answers that paint them in the best light.[269] Research suggests that a relative-scored Big Five measure, in which respondents have to make repeated choices between equally desirable personality descriptors, may be a potential alternative to traditional Big Five measures for accurately assessing personality traits, especially when lying or biased responding is present.[264] When compared with a traditional Big Five measure for its ability to predict GPA and creative achievement under both normal and "fake good"-bias response conditions, the relative-scored measure significantly and consistently predicted these outcomes under both conditions; the Likert questionnaire, however, lost its predictive ability in the faking condition. Thus, the relative-scored measure proved to be less affected by biased responding than the Likert measure of the Big Five. Andrew H. Schwartz analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age.[270] The proposed Big Five model has been subjected to considerable critical scrutiny in a number of published studies.[271][272][273][274][275][276][68][277][111] One prominent critic of the model has been Jack Block at the University of California, Berkeley. In response to Block, the model was defended in a paper published by Costa and McCrae.[278] This was followed by a number of published critical replies from Block.[279][280][8] It has been argued that there are limitations to the scope of the Big Five model as an explanatory or predictive theory.[68][277] It has also been argued that measures of the Big Five account for only 56% of the normal personality trait sphere alone (not even considering the abnormal personality trait sphere).[68] Also, the static Big Five[281] is not theory-driven; it is merely a statistically driven investigation of certain descriptors that tend to cluster together, often based on less-than-optimal factor-analytic procedures.[68]: 431–33[111] Measures of the Big Five constructs appear to show some consistency across interviews, self-descriptions, and observations, and this static five-factor structure seems to be found across a wide range of participants of different ages and cultures.[282] However, while genotypic temperament trait dimensions might appear across different cultures, the phenotypic expression of personality traits differs profoundly across different cultures as a function of the different socio-cultural conditioning and experiential learning that takes place within different cultural settings.[283] Moreover, the fact that the Big Five model was based on the lexical hypothesis (i.e., on the verbal descriptors of individual differences) indicates, critics argue, strong methodological flaws in this model, especially related to its main factors, Extraversion and Neuroticism. First, there is a natural pro-social bias of language in people's verbal evaluations.
After all, language is an invention of group dynamics that was developed to facilitate socialization and the exchange of information and to synchronize group activity. This social function of language therefore creates a sociability bias in verbal descriptors of human behavior: there are more words related to social than to physical or even mental aspects of behavior. The sheer number of such descriptors will cause them to group into the largest factor in any language, and such grouping has nothing to do with the way that core systems of individual differences are set up. Second, there is also a negativity bias in emotionality (i.e., most emotions have negative affectivity), and there are more words in language to describe negative than positive emotions. Such asymmetry in emotional valence creates another bias in language. Experiments using the lexical hypothesis approach indeed demonstrated that the use of lexical material skews the resulting dimensionality according to a sociability bias of language and a negativity bias of emotionality, grouping all evaluations around these two dimensions.[275] This means that the two largest dimensions in the Big Five model might be just an artifact of the lexical approach that this model employed. One common criticism is that the Big Five does not explain all of human personality. Some psychologists have dissented from the model precisely because they feel it neglects other domains of personality, such as religiosity, manipulativeness/machiavellianism, honesty, sexiness/seductiveness, thriftiness, conservativeness, masculinity/femininity, snobbishness/egotism, sense of humour, and risk-taking/thrill-seeking.[276][284] Dan P. McAdams has called the Big Five a "psychology of the stranger", because they refer to traits that are relatively easy to observe in a stranger; other aspects of personality that are more privately held or more context-dependent are excluded from the Big Five.[285] Block has pointed to several less-recognized but successful efforts to specify aspects of character not subsumed by the model.[8] There is debate as to what counts as personality and what does not, and the nature of the questions in a survey greatly influences the outcome. Multiple particularly broad question databases have failed to produce the Big Five as the top five traits.[286] In many studies, the five factors are not fully orthogonal to one another; that is, the five factors are not independent.[287][288] Orthogonality is viewed as desirable by some researchers because it minimizes redundancy between the dimensions. This is particularly important when the goal of a study is to provide a comprehensive description of personality with as few variables as possible. The model is also inappropriate for studying early childhood, as language is not yet developed.[8] Factor analysis, the statistical method used to identify the dimensional structure of observed variables, lacks a universally recognized basis for choosing among solutions with different numbers of factors.[3] A five-factor solution depends on some degree of interpretation by the analyst, and a larger number of factors may underlie these five. This has led to disputes about the "true" number of factors.
Big Five proponents have responded that although other solutions may be viable in a single data set, only the five-factor structure consistently replicates across different studies.[289] Block argues that the use of factor analysis as the exclusive paradigm for conceptualizing personality is too limited.[8] Surveys in studies are often online surveys of college students (compare WEIRD bias). Results do not always replicate when run on other populations or in other languages.[290] It is not clear that different surveys measure the same five factors.[8] Moreover, the factor analysis that this model is based on is a linear method, incapable of capturing nonlinear, feedback, and contingent relationships between core systems of individual differences.[275]
https://en.wikipedia.org/wiki/Five_Factor_Model
The conventional wisdom or received opinion is the body of ideas or explanations generally accepted by the public and/or by experts in a field.[1] The term "conventional wisdom" dates back to at least 1838, as a synonym for "commonplace knowledge".[2][n 1] It was used in a number of works, occasionally in a benign[3] or neutral[4] sense, but more often pejoratively.[5] Despite this previous usage, the term is often credited to the economist John Kenneth Galbraith, who used it in his 1958 book The Affluent Society:[6] It will be convenient to have a name for the ideas which are esteemed at any time for their acceptability, and it should be a term that emphasizes this predictability. I shall refer to these ideas henceforth as the conventional wisdom.[7] Galbraith specifically prepended "The" to the phrase to emphasize its uniqueness, and sharpened its meaning to narrow it to those commonplace beliefs that are also acceptable and comfortable to society, thus enhancing their ability to resist facts that might diminish them. He repeatedly referred to it throughout the text of The Affluent Society, invoking it to explain the high degree of resistance in academic economics to new ideas. For these reasons, he is usually credited with the invention and popularization of the phrase in modern usage.
https://en.wikipedia.org/wiki/Conventional_wisdom
Application security (short AppSec) includes all tasks that introduce a secure software development life cycle to development teams. Its final goal is to improve security practices and, through that, to find, fix, and preferably prevent security issues within applications. It encompasses the whole application life cycle, from requirements analysis and design through implementation and verification to maintenance.[1] Web application security is a branch of information security that deals specifically with the security of websites, web applications, and web services. At a high level, web application security draws on the principles of application security but applies them specifically to the internet and web systems.[2][3] Application security also covers mobile apps and their security, including iOS and Android applications. Web application security tools are specialized tools for working with HTTP traffic, e.g., web application firewalls. Different approaches will find different subsets of the security vulnerabilities lurking in an application and are most effective at different times in the software lifecycle. They each represent different tradeoffs of time, effort, cost, and vulnerabilities found. The Open Worldwide Application Security Project (OWASP) provides free and open resources. It is led by a non-profit called The OWASP Foundation. The OWASP Top 10 - 2017 resulted from research based on comprehensive data compiled from over 40 partner organizations, which revealed approximately 2.3 million vulnerabilities across over 50,000 applications.[4] Its successor, the OWASP Top 10 - 2021, identifies the ten most critical web application security risks.[5] The OWASP Top 10 Proactive Controls 2024 is a list of security techniques every software architect and developer should know and heed. Security testing techniques scour for vulnerabilities or security holes in applications. These vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented throughout the entire software development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner. There are many kinds of automated tools for identifying vulnerabilities in applications, and several categories of such tools are in common use.
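A recurring entry on these lists is injection. As a purely illustrative sketch (not drawn from the article itself), the following shows the standard mitigation, parameterized queries, using SQLite's C API; the "users" table and its columns are hypothetical assumptions.

/* Hedged sketch: preventing SQL injection with a parameterized query
 * via SQLite's C API. The schema is invented for illustration. */
#include <sqlite3.h>

int lookup_user(sqlite3 *db, const char *username)
{
    sqlite3_stmt *stmt;
    /* The '?' placeholder keeps untrusted input out of the SQL text;
     * contrast with formatting the username directly into the query. */
    const char *sql = "SELECT id FROM users WHERE name = ?;";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);

    int id = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);
    sqlite3_finalize(stmt);
    return id;
}

A caller would open the database with sqlite3_open() and may pass untrusted input directly as username: the placeholder guarantees the input is treated as data, never as SQL.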
https://en.wikipedia.org/wiki/Application_security
printk is a printf-like function of the Linux kernel interface for formatting and writing kernel log entries.[1] Since the C standard library (which contains the ubiquitous printf-like functions) is not available in kernel mode, printk provides for general-purpose output in the kernel.[2] Due to limitations of the kernel design, the function is often used to aid debugging of kernel-mode software.[1] printk can be called from anywhere in the kernel except during the early stages of the boot process, before the system console is initialized.[3] The alternative function early_printk is implemented on some architectures and is used identically to printk but during the early stages of the boot process.[3] printk has the same syntax as printf, but somewhat different semantics. Like printf, printk accepts a format C-string argument and a list of value arguments.[1] Both format text based on the input parameters with significantly similar behavior, but there are also significant differences.[1] The printk function prototype (which matches that of printf) is int printk(const char *fmt, ...);. The features that differ from printf are described below. printk allows a caller to specify a log level – the type and importance of the message being sent. The level is specified by prepending text that identifies a log level. Typically the text is prepended via C's string literal concatenation, using one of the macros designed for this purpose; for example, a message could be logged at the informational level by prepending the KERN_INFO macro to the format string, as in the sketch below.[1] The text specifying the log level consists of the ASCII SOH character followed by a digit that identifies the log level, or the letter 'c' to indicate the message is a continuation of the previous message.[1][4] Each log level has a canonical meaning.[3] When no log level is specified, the entry is logged at the default level, which is typically KERN_WARNING,[1] but which can be set, such as via the loglevel= boot argument.[5] Log levels are defined in the header file <linux/kern_levels.h>.[4] Which log levels are printed is configured using the sysctl file /proc/sys/kernel/printk.[1] The %p format specifier, which is supported by printf, is extended with additional formatting modes. For example, requesting to print a struct sockaddr * using %pISpc formats an IPv4/v6 address and port in a human-friendly format such as 1.2.3.4:12345 or [1:2:3:4:5:6:7:8]:12345.[6] While printf supports formatting floating point numbers, printk does not,[6] since the Linux kernel does not support floating point numbers.[7] The function tries to lock the semaphore controlling access to the Linux system console.[1][8] If it succeeds, the output is logged and the console drivers are called.[1] If it is not possible to acquire the semaphore, the output is placed into the log buffer; the current holder of the console semaphore will notice the new output when they release the semaphore and will send the buffered output to the console before releasing it.[1] One effect of this deferred printing is that code which calls printk and then changes the log levels to be printed may break, because the log level to be printed is inspected when the actual printing occurs.[1]
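To make the log-level mechanism concrete, here is a minimal, hedged sketch of a kernel module using printk; it assumes a standard kernel build environment, and the module name and messages are illustrative.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>   /* printk() and the KERN_* log-level macros */

static int __init printk_demo_init(void)
{
	/* KERN_INFO is a string literal; C's string literal concatenation
	 * glues it onto the format string, producing the SOH-plus-digit
	 * level prefix described above. */
	printk(KERN_INFO "printk_demo: loaded (value=%d)\n", 42);

	/* With no prefix, the entry is logged at the default level,
	 * typically KERN_WARNING. */
	printk("printk_demo: message at the default log level\n");
	return 0;
}

static void __exit printk_demo_exit(void)
{
	printk(KERN_DEBUG "printk_demo: unloading\n");
}

module_init(printk_demo_init);
module_exit(printk_demo_exit);
MODULE_LICENSE("GPL");

Loading such a module with insmod would make the messages visible via dmesg, subject to the log-level configuration in /proc/sys/kernel/printk described above.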
https://en.wikipedia.org/wiki/Printk
Hit-and-run DDoS is a type of distributed denial-of-service (DDoS) attack that uses short bursts of high-volume attacks at random intervals, spanning a time frame of days or weeks. The purpose of a hit-and-run DDoS is to prevent a user of a service from using that service by bringing down the host server.[1] This type of attack is to be distinguished from a persistent DDoS attack, which continues until the attacker stops the attack or the host server is able to defend against it.[2] A DDoS attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service.[3] A hit-and-run DDoS is accomplished by using high-volume network or application attacks in short bursts. The attacks only last long enough to bring down the server hosting the service, normally 20 to 60 minutes. The attack is then repeated every 12 to 24 hours over a period of days or weeks, causing issues for the company hosting the service. Hit-and-run DDoS is sometimes used as a test attack: an attacker will inject a few bad packets into a network to test whether it is online and functioning, and once the network is verified as functioning, the attacker will then mount a persistent DDoS attack.[4] Hit-and-run DDoS exploits anti-DDoS software and services that are used to defend against prolonged DDoS attacks: activating such protection can take longer than the burst itself, allowing a denial of service before the DDoS protection can begin to defend against the attack.
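The defensive difficulty described above, mitigation that engages more slowly than the burst itself, can be illustrated with a toy sliding-window burst detector. This is a sketch only; the window size and threshold are arbitrary assumptions, not figures from the article.

/* Toy burst detector: flags a traffic spike quickly so that mitigation
 * can be activated before a short hit-and-run burst ends. Thresholds
 * are illustrative. */
#include <stdio.h>
#include <time.h>

#define WINDOW_SECONDS  10
#define BURST_THRESHOLD 10000UL  /* requests per window considered a burst */

struct rate_window {
    time_t window_start;
    unsigned long count;
};

/* Returns 1 when the request count in the current window crosses the
 * threshold, signalling that anti-DDoS measures should engage now. */
static int record_request(struct rate_window *w, time_t now)
{
    if (now - w->window_start >= WINDOW_SECONDS) {
        w->window_start = now;  /* start a new window */
        w->count = 0;
    }
    return ++w->count > BURST_THRESHOLD;
}

int main(void)
{
    struct rate_window w = { time(NULL), 0 };
    /* Simulate 15000 requests arriving in one burst. */
    for (int i = 0; i < 15000; i++) {
        if (record_request(&w, time(NULL))) {
            printf("burst detected after %lu requests\n", w.count);
            break;
        }
    }
    return 0;
}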
https://en.wikipedia.org/wiki/Hit-and-run_DDoS
In arithmetic dynamics, an arboreal Galois representation is a continuous group homomorphism between the absolute Galois group of a field and the automorphism group of an infinite, regular, rooted tree. The study of arboreal Galois representations goes back to the work of Odoni in the 1980s. Let K be a field and K^sep be its separable closure. The Galois group G_K of the extension K^sep/K is called the absolute Galois group of K. This is a profinite group, and it is therefore endowed with its natural Krull topology. For a positive integer d, let T^d be the infinite regular rooted tree of degree d. This is an infinite tree where one node is labeled as the root and every node has exactly d descendants. An automorphism of T^d is a bijection of the set of nodes that preserves vertex-edge connectivity. The group Aut(T^d) of all automorphisms of T^d is a profinite group as well, as it can be seen as the inverse limit of the automorphism groups of the finite sub-trees T_n^d formed by all nodes at distance at most n from the root. The group of automorphisms of T_n^d is isomorphic to S_d ≀ S_d ≀ … ≀ S_d, the iterated wreath product of n copies of the symmetric group of degree d. An arboreal Galois representation is a continuous group homomorphism G_K → Aut(T^d). The most natural source of arboreal Galois representations is the theory of iterations of rational self-maps of the projective line. Let K be a field and f: P^1_K → P^1_K a rational function of degree d. For every n ≥ 1 let f^n = f ∘ f ∘ … ∘ f be the n-fold composition of the map f with itself. Let α ∈ K and suppose that for every n ≥ 1 the set (f^n)^{-1}(α) contains d^n elements of the algebraic closure K̄. Then one can construct an infinite, regular, rooted d-ary tree T(f) in the following way: the root of the tree is α, and the nodes at distance n from α are the elements of (f^n)^{-1}(α). A node β at distance n from α is connected by an edge to a node γ at distance n+1 from α if and only if f(β) = γ. The absolute Galois group G_K acts on T(f) via automorphisms, and the induced homomorphism ρ_{f,α}: G_K → Aut(T(f)) is continuous; it is called the arboreal Galois representation attached to f with basepoint α. Arboreal representations attached to rational functions can be seen as a wide generalization of Galois representations on Tate modules of abelian varieties. The simplest non-trivial case is that of monic quadratic polynomials.
Let K be a field of characteristic not 2, let f = (x − a)^2 + b ∈ K[x] and set the basepoint α = 0. The adjusted post-critical orbit of f is the sequence defined by c_1 = −f(a) and c_n = f^n(a) for every n ≥ 2. A resultant argument[1] shows that (f^n)^{-1}(0) has d^n elements for every n if and only if c_n ≠ 0 for every n. In 1992, Stoll proved the following theorem:[2] The following are examples of polynomials that satisfy the conditions of Stoll's Theorem, and that therefore have surjective arboreal representations. In 1985, Odoni formulated the following conjecture.[4] Although in this very general form the conjecture has been shown to be false by Dittmann and Kadets,[5] there are several results when K is a number field. Benedetto and Juul proved Odoni's conjecture for K a number field and n even, and also when both [K:Q] and n are odd;[6] Looper independently proved Odoni's conjecture for n prime and K = Q.[7] When K is a global field and f ∈ K(x) is a rational function of degree 2, the image of ρ_{f,0} is expected to be "large" in most cases. The following conjecture, formulated by Jones in 2013, quantifies the previous statement.[8] Jones' conjecture is considered to be a dynamical analogue of Serre's open image theorem. One direction of Jones' conjecture is known to be true: if f satisfies one of the above conditions, then [Aut(T(f)) : Im(ρ_{f,0})] = ∞. In particular, when f is post-critically finite then Im(ρ_{f,α}) is a topologically finitely generated closed subgroup of Aut(T(f)) for every α ∈ K. In the other direction, Juul et al. proved that if the abc conjecture holds for number fields, K is a number field and f ∈ K[x] is a quadratic polynomial, then [Aut(T(f)) : Im(ρ_{f,0})] = ∞ if and only if f is post-critically finite or not eventually stable. When f ∈ K[x] is a quadratic polynomial, conditions (2) and (4) in Jones' conjecture are never satisfied. Moreover, Jones and Levy conjectured that f is eventually stable if and only if 0 is not periodic for f.[9] In 2020, Andrews and Petsche formulated the following conjecture.[10] Two pairs (f, α), (g, β), where f, g ∈ K(x) and α, β ∈ K, are conjugate over a field extension L/K if there exists a Möbius transformation m = (ax + b)/(cx + d) ∈ PGL_2(L) such that m ∘ f ∘ m^{-1} = g and m(α) = β. Conjugacy is an equivalence relation. The Chebyshev polynomials the conjecture refers to are a normalized version, conjugate by the Möbius transformation 2x to make them monic. It has been proven that Andrews and Petsche's conjecture holds true when K = Q.[11]
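Two small computations may help fix ideas; both follow directly from the definitions above rather than from any cited result. First, the size of the truncated-tree automorphism groups: an automorphism of T_n^d chooses an element of S_d independently at each of the 1 + d + … + d^{n-1} non-leaf nodes, so

\[
  \left|\operatorname{Aut}(T_n^d)\right|
  = (d!)^{\,1 + d + \cdots + d^{\,n-1}}
  = (d!)^{\frac{d^{n}-1}{d-1}},
\]

which for d = 2 equals 2^{2^n − 1}. Second, a worked adjusted post-critical orbit: taking a = 0 and b = 1, i.e. f(x) = x^2 + 1 over Q,

\[
  c_1 = -f(0) = -1,\qquad
  c_2 = f^{2}(0) = 2,\qquad
  c_3 = f^{3}(0) = 5,\qquad
  c_4 = f^{4}(0) = 26,\ \ldots
\]

Every c_n is nonzero (the orbit of 0 under f is strictly increasing from 1 onward), so by the resultant criterion above, (f^n)^{-1}(0) has the full 2^n elements for every n, and the preimage tree T(f) is the complete binary tree.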
https://en.wikipedia.org/wiki/Arboreal_Galois_representation
Ingrammar, apart of speechorpart-of-speech(abbreviatedasPOSorPoS, also known asword class[1]orgrammatical category[2][a]) is a category of words (or, more generally, oflexical items) that have similargrammaticalproperties. Words that are assigned to the same part of speech generally display similarsyntacticbehavior (they play similar roles within the grammatical structure of sentences), sometimes similarmorphologicalbehavior in that they undergoinflectionfor similar properties and even similarsemanticbehavior. Commonly listedEnglishparts of speech arenoun,verb,adjective,adverb,pronoun,preposition,conjunction,interjection,numeral,article, anddeterminer. Other terms thanpart of speech—particularly in modernlinguisticclassifications, which often make more precise distinctions than the traditional scheme does—includeword class,lexical class, andlexical category. Some authors restrict the termlexical categoryto refer only to a particular type ofsyntactic category; for them the term excludes those parts of speech that are considered to befunction words, such as pronouns. The termform classis also used, although this has various conflicting definitions.[3]Word classes may be classified asopen or closed:open classes(typically including nouns, verbs and adjectives) acquire new members constantly, whileclosed classes(such as pronouns and conjunctions) acquire new members infrequently, if at all. Almost all languages have the word classes noun and verb, but beyond these two there are significant variations among different languages.[4]For example: Because of such variation in the number of categories and their identifying properties, analysis of parts of speech must be done for each individual language. Nevertheless, the labels for each category are assigned on the basis of universal criteria.[4] The classification of words into lexical categories is found from the earliest moments in thehistory of linguistics.[5] In theNirukta, written in the 6th or 5th century BCE, theSanskritgrammarianYāskadefined four main categories of words:[6] These four were grouped into two larger classes:inflectable(nouns and verbs) and uninflectable (pre-verbs and particles). The ancient work on the grammar of theTamil language,Tolkāppiyam, argued to have been written around 2nd century CE,[7]classifies Tamil words aspeyar(பெயர்; noun),vinai(வினை; verb),idai(part of speech which modifies the relationships between verbs and nouns), anduri(word that further qualifies a noun or verb).[8] A century or two after the work of Yāska, theGreekscholarPlatowrote in hisCratylusdialogue, "sentences are, I conceive, a combination of verbs [rhêma] and nouns [ónoma]".[9]Aristotleadded another class, "conjunction" [sýndesmos], which included not only the words known today asconjunctions, but also other parts (the interpretations differ; in one interpretation it ispronouns,prepositions, and thearticle).[10] By the end of the 2nd century BCE, grammarians had expanded this classification scheme into eight categories, seen in theArt of Grammar, attributed toDionysius Thrax:[11] It can be seen that these parts of speech are defined bymorphological,syntacticandsemanticcriteria. 
TheLatingrammarianPriscian(fl.500 CE) modified the above eightfold system, excluding "article" (since theLatin language, unlike Greek, does not have articles) but adding "interjection".[13][14] The Latin names for the parts of speech, from which the corresponding modern English terms derive, werenomen,verbum,participium,pronomen,praepositio,adverbium,conjunctioandinterjectio. The categorynomenincludedsubstantives(nomen substantivum, corresponding to what are today called nouns in English),adjectives(nomen adjectivum)andnumerals(nomen numerale). This is reflected in the older English terminologynoun substantive,noun adjectiveandnoun numeral. Later[15]the adjective became a separate class, as often did the numerals, and the English wordnouncame to be applied to substantives only. Works ofEnglish grammargenerally follow the pattern of the European tradition as described above, except that participles are now usually regarded as forms of verbs rather than as a separate part of speech, and numerals are often conflated with other parts of speech: nouns (cardinal numerals, e.g., "one", andcollective numerals, e.g., "dozen"), adjectives (ordinal numerals, e.g., "first", andmultiplier numerals, e.g., "single") and adverbs (multiplicative numerals, e.g., "once", anddistributive numerals, e.g., "singly"). Eight or nine parts of speech are commonly listed: Some traditional classifications consider articles to be adjectives, yielding eight parts of speech rather than nine. And some modern classifications define further classes in addition to these. For discussion see the sections below. Additionally, there are other parts of speech includingparticles(yes,no)[b]andpostpositions(ago,notwithstanding) although many fewer words are in these categories. The classification below, or slight expansions of it, is still followed in mostdictionaries: English words are not generallymarkedas belonging to one part of speech or another; this contrasts with many other European languages, which useinflectionmore extensively, meaning that a given word form can often be identified as belonging to a particular part of speech and having certain additionalgrammatical properties. In English, most words are uninflected, while the inflected endings that exist are mostly ambiguous:-edmay mark a verbal past tense, a participle or a fully adjectival form;-smay mark a plural noun, a possessive noun, or a present-tense verb form;-ingmay mark a participle,gerund, or pure adjective or noun. Although-lyis a frequent adverb marker, some adverbs (e.g.tomorrow,fast,very) do not have that ending, while many adjectives do have it (e.g.friendly,ugly,lovely), as do occasional words in other parts of speech (e.g.jelly,fly,rely). Many English words can belong to more than one part of speech. Words likeneigh,break,outlaw,laser,microwave, andtelephonemight all be either verbs or nouns. In certain circumstances, even words with primarily grammatical functions can be used as verbs or nouns, as in, "We must look to thehowsand not just thewhys." The process whereby a word comes to be used as a different part of speech is calledconversionor zero derivation. Linguistsrecognize that the above list of eight or nine word classes is drastically simplified.[17]For example, "adverb" is to some extent a catch-all class that includes words with many different functions. 
Some have even argued that the most basic of category distinctions, that of nouns and verbs, is unfounded,[18] or not applicable to certain languages.[19][20] Modern linguists have proposed many different schemes whereby the words of English or other languages are placed into more specific categories and subcategories based on a more precise understanding of their grammatical functions. A common lexical category set defined by function may include the following (not all of them will necessarily be applicable in a given language): Within a given category, subgroups of words may be identified based on more precise grammatical properties. For example, verbs may be specified according to the number and type of objects or other complements which they take. This is called subcategorization. Many modern descriptions of grammar include not only lexical categories or word classes, but also phrasal categories, used to classify phrases, in the sense of groups of words that form units having specific grammatical functions. Phrasal categories may include noun phrases (NP), verb phrases (VP) and so on. Lexical and phrasal categories together are called syntactic categories. Word classes may be either open or closed. An open class is one that commonly accepts the addition of new words, while a closed class is one to which new items are very rarely added. Open classes normally contain large numbers of words, while closed classes are much smaller. Typical open classes found in English and many other languages are nouns, verbs (excluding auxiliary verbs, if these are regarded as a separate class), adjectives, adverbs and interjections. Ideophones are often an open class, though less familiar to English speakers,[21][22][c] and are often open to nonce words. Typical closed classes are prepositions (or postpositions), determiners, conjunctions, and pronouns.[24] The open–closed distinction is related to the distinction between lexical and functional categories, and to that between content words and function words, and some authors consider these identical, but the connection is not strict. Open classes are generally lexical categories in the stricter sense, containing words with greater semantic content,[25] while closed classes are normally functional categories, consisting of words that perform essentially grammatical functions. This is not universal: in many languages verbs and adjectives[26][27][28] are closed classes, usually consisting of few members, and in Japanese the formation of new pronouns from existing nouns is relatively common, though to what extent these form a distinct word class is debated. Words are added to open classes through such processes as compounding, derivation, coining, and borrowing. When a new word is added through some such process, it can subsequently be used grammatically in sentences in the same ways as other words in its class.[29] A closed class may obtain new items through these same processes, but such changes are much rarer and take much more time. A closed class is normally seen as part of the core language and is not expected to change. In English, for example, new nouns, verbs, etc. are being added to the language constantly (including by the common process of verbing and other types of conversion, where an existing word comes to be used in a different part of speech). However, it is very unusual for a new pronoun, for example, to become accepted in the language, even in cases where there may be felt to be a need for one, as in the case of gender-neutral pronouns.
The open or closed status of word classes varies between languages, even assuming that corresponding word classes exist. Most conspicuously, in many languages verbs and adjectives form closed classes of content words. An extreme example is found in Jingulu, which has only three verbs, while even the modern Indo-European Persian has no more than a few hundred simple verbs, many of which are archaic. (Some twenty Persian verbs are used as light verbs to form compounds; this lack of lexical verbs is shared with other Iranian languages.) Japanese is similar, having few lexical verbs.[30][failed verification] Basque verbs are also a closed class, with the vast majority of verbal senses instead expressed periphrastically.

In Japanese, verbs and adjectives are closed classes,[31] though these are quite large, with about 700 adjectives,[32][33] and verbs have opened slightly in recent years. Japanese adjectives are closely related to verbs (they can predicate a sentence, for instance). New verbal meanings are nearly always expressed periphrastically by appending suru (する, to do) to a noun, as in undō suru (運動する, to (do) exercise), and new adjectival meanings are nearly always expressed by adjectival nouns, using the suffix -na (〜な) when an adjectival noun modifies a noun phrase, as in hen-na ojisan (変なおじさん, strange man). The closedness of verbs has weakened in recent years, and in a few cases new verbs are created by appending -ru (〜る) to a noun or using it to replace the end of a word. This is mostly in casual speech for borrowed words, with the most well-established example being sabo-ru (サボる, cut class; play hooky), from sabotāju (サボタージュ, sabotage).[34] This recent innovation aside, the huge contribution of Sino-Japanese vocabulary was almost entirely borrowed as nouns (often verbal nouns or adjectival nouns). Other languages where adjectives are a closed class include Swahili,[28] Bemba, and Luganda.

By contrast, Japanese pronouns are an open class and nouns become used as pronouns with some frequency; a recent example is jibun (自分, self), now used by some as a first-person pronoun. The status of Japanese pronouns as a distinct class is disputed, however, with some considering it only a use of nouns, not a distinct class. The case is similar in languages of Southeast Asia, including Thai and Lao, in which, like Japanese, pronouns and terms of address vary significantly based on relative social standing and respect.[35]

Some word classes are universally closed, however, including demonstratives and interrogative words.[35]
https://en.wikipedia.org/wiki/Part_of_speech
ChaCha20-Poly1305 is an authenticated encryption with associated data (AEAD) algorithm that combines the ChaCha20 stream cipher with the Poly1305 message authentication code.[1] It has fast software performance and, without hardware acceleration, is usually faster than AES-GCM.[1]: §B

The two building blocks of the construction, the algorithms Poly1305 and ChaCha20, were both independently designed, in 2005 and 2008, by Daniel J. Bernstein.[2][3]

In March 2013, a proposal was made to the IETF TLS working group to include Salsa20, a winner of the eSTREAM competition,[4] to replace the aging RC4-based ciphersuites. A discussion followed in the IETF TLS mailing list with various enhancement suggestions, including using ChaCha20 instead of Salsa20 and using a universal-hashing-based MAC for performance. The outcome of this process was the adoption of Adam Langley's proposal for a variant of the original ChaCha20 algorithm (using a 32-bit counter and a 96-bit nonce) and a variant of the original Poly1305 (authenticating two strings) being combined in an IETF draft[5][6] to be used in TLS and DTLS,[7] and chosen, for security and performance reasons, as a newly supported cipher.[8] Shortly after IETF's adoption for TLS, ChaCha20, Poly1305 and the combined AEAD mode were added to OpenSSH via the chacha20-poly1305@openssh.com authenticated encryption cipher,[9][10] which kept the original 64-bit counter and 64-bit nonce for the ChaCha20 algorithm.

In 2015, the AEAD algorithm was standardized in RFC 7539[11] and in RFC 7634[12] to be used in IPsec. The same year, it was integrated by Cloudflare as an alternative ciphersuite.[13] In 2016, RFC 7905[14] described how to use it in the TLS 1.2 and DTLS 1.2 protocols. In June 2018, RFC 7539 was updated and replaced by RFC 8439.[1]

The ChaCha20-Poly1305 algorithm takes as input a 256-bit key and a 96-bit nonce to encrypt a plaintext,[1] with a ciphertext expansion of 128 bits (the tag size). In the ChaCha20-Poly1305 construction, ChaCha20 is used in counter mode to derive a key stream that is XORed with the plaintext. The ciphertext and the associated data are then authenticated using a variant of Poly1305 that first encodes the two strings into one. The way that a cipher and a one-time authenticator are combined matches the AES-GCM construction precisely, in how the first block is used to seed the authenticator and how the ciphertext is then authenticated with a 16-byte tag. The main external difference of ChaCha20 is its 64-byte (512-bit) block size, in comparison to 16 bytes (128 bits) for both AES-128 and AES-256. The larger block size enables higher performance on modern CPUs and allows for larger streams before the 32-bit counter overflows.

The XChaCha20-Poly1305 construction is an extended 192-bit-nonce variant of the ChaCha20-Poly1305 construction, using XChaCha20 instead of ChaCha20. When choosing nonces at random, the XChaCha20-Poly1305 construction allows for better security than the original construction. The draft attempting to standardize the construction expired in July 2020.[15]

Salsa20-Poly1305 and XSalsa20-Poly1305 are variants of the ChaCha20-Poly1305 and XChaCha20-Poly1305 algorithms, using Salsa20 and XSalsa20 in place of ChaCha20 and XChaCha20. They are implemented in NaCl[16] and libsodium[17] but not standardized. The variants using ChaCha are preferred in practice as they provide better diffusion per round than Salsa.[2]

ChaCha20 can be replaced with its reduced-round variants ChaCha12 and ChaCha8, yielding ChaCha12-Poly1305 and ChaCha8-Poly1305.
The same modification can be applied to XChaCha20-Poly1305. These are implemented by the RustCrypto team and not standardized.[18]

ChaCha20-Poly1305 is used in IPsec,[1] SSH,[19] TLS 1.2, DTLS 1.2, TLS 1.3,[14][19] WireGuard,[20] S/MIME 4.0,[21] OTRv4[22] and multiple other protocols, and is implemented in OpenSSL and libsodium. Additionally, the algorithm is used in the backup software Borg[23] in order to provide standard data encryption and in the copy-on-write filesystem Bcachefs for the purpose of optional whole-filesystem encryption.[24]

ChaCha20-Poly1305 usually offers better performance than the more prevalent AES-GCM algorithm, except on systems where the CPU(s) have the AES-NI instruction set extension.[1] As a result, ChaCha20-Poly1305 is sometimes preferred over AES-GCM due to its similar levels of security, and in certain use cases involving mobile devices, which mostly use ARM-based CPUs. Because ChaCha20-Poly1305 has less overhead than AES-GCM, ChaCha20-Poly1305 on mobile devices may consume less power than AES-GCM.

The ChaCha20-Poly1305 construction is generally secure in the standard model and the ideal permutation model, for the single- and multi-user setting.[25] However, similarly to GCM, the security relies on choosing a unique nonce for every message encrypted. Compared to AES-GCM, implementations of ChaCha20-Poly1305 are less vulnerable to timing attacks. Note that when the SSH protocol uses ChaCha20-Poly1305 as the underlying primitive, it is vulnerable to the Terrapin attack.
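As a concrete illustration of the AEAD interface described above (a 256-bit key, a 96-bit nonce, optional associated data, and a 16-byte Poly1305 tag appended to the ciphertext), here is a minimal sketch using the ChaCha20Poly1305 class from the Python cryptography package; the message and associated-data values are illustrative placeholders.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()   # 256-bit key
nonce = os.urandom(12)                  # 96-bit nonce; must be unique per message
aad = b"header"                         # associated data: authenticated, not encrypted

aead = ChaCha20Poly1305(key)
ciphertext = aead.encrypt(nonce, b"secret message", aad)
# Ciphertext expansion is exactly 16 bytes: the appended Poly1305 tag.
assert len(ciphertext) == len(b"secret message") + 16

plaintext = aead.decrypt(nonce, ciphertext, aad)  # raises InvalidTag on tampering
assert plaintext == b"secret message"
```

Note the nonce handling: as the security discussion below stresses, reusing a (key, nonce) pair breaks the construction, which is exactly the problem the random-nonce-friendly XChaCha20-Poly1305 variant addresses.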
https://en.wikipedia.org/wiki/ChaCha20-Poly1305
This is a list of complexity classes in computational complexity theory. For other computational and complexity subjects, see list of computability and complexity topics.

Many of these classes have a 'co' partner which consists of the complements of all languages in the original class. For example, if a language L is in NP then the complement of L is in co-NP. (This does not mean that the complement of NP is co-NP—there are languages which are known to be in both, and other languages which are known to be in neither.)

"The hardest problems" of a class refer to problems which belong to the class and to which every other problem of that class can be reduced.
https://en.wikipedia.org/wiki/List_of_complexity_classes
Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series,[1] where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.

The fundamental building block of RNNs is the recurrent unit, which maintains a hidden state—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition,[2] speech recognition,[3][4] natural language processing, and neural machine translation.[5][6]

However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, gated recurrent units (GRUs) were introduced as a more computationally efficient alternative. In recent years, transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.

One origin of RNNs was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed by parallel fibers, Purkinje cells, and granule cells.[7][8] In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex.[9][10] During the 1940s, multiple people proposed the existence of feedback in the brain, which was a contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory.[11] The McCulloch and Pitts paper (1943), which proposed the McCulloch–Pitts neuron model, considered networks that contain cycles. The current activity of such networks can be affected by activity indefinitely far in the past.[12] They were both interested in closed loops as possible explanations for, e.g., epilepsy and causalgia.[13][14] Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences.[15] See [16] for an extensive review of recurrent neural network models in neuroscience.
Frank Rosenblatt in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule.[18]: 73–75 Later, in Principles of Neurodynamics (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, and made theoretical and experimental studies for Hebbian learning in these networks,[17]: Chapter 19, 21 and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.[17]: Section 19.11

Similar networks were published by Kaoru Nakano in 1971,[19][20] Shun'ichi Amari in 1972,[21] and William A. Little in 1974,[22] who was acknowledged by Hopfield in his 1982 paper.

Another origin of RNNs was statistical mechanics. The Ising model was developed by Wilhelm Lenz[23] and Ernst Ising[24] in the 1920s[25] as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time.[26]

The Sherrington–Kirkpatrick model of spin glass, published in 1975,[27] is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.[28] In a 1984 paper he extended this to continuous activation functions.[29] It became a standard model for the study of neural networks through statistical mechanics.[30][31]

Modern RNN networks are mainly based on two architectures: LSTM and BRNN.[32]

At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets".[33] Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNNs to the study of cognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[34]

Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple application domains.[35][36] LSTM became the default choice for RNN architecture.

Bidirectional recurrent neural networks (BRNNs) use two RNNs that process the same input in opposite directions.[37] These two are often combined, giving the bidirectional LSTM architecture.

Around 2006, bidirectional LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications.[38][39] It also improved large-vocabulary speech recognition[3][4] and text-to-speech synthesis,[40] and was used in Google voice search and dictation on Android devices.[41] It broke records for improved machine translation,[42] language modeling[43] and multilingual language processing.[44] Also, LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.[45]

The idea of encoder-decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the originators that produced seq2seq are two papers from 2014.[46][47] A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers.
An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.

RNNs come in many variants. Abstractly speaking, an RNN is a function $f_\theta$ of type $(x_t, h_t) \mapsto (y_t, h_{t+1})$, where $x_t$ is the input vector, $h_t$ is the hidden vector, $y_t$ is the output vector, and $\theta$ is the vector of parameters. In words, it is a neural network that maps an input $x_t$ into an output $y_t$, with the hidden vector $h_t$ playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms input to an output, and modifies its "memory" to help it to better perform future processing.

The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to be layers are, in fact, different steps in time, "unfolded" to produce the appearance of layers.

A stacked RNN, or deep RNN, is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows: each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of a stacked RNN.

A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence $(x_0, x_1, \dots, x_N)$ in one direction to produce $(y_0, y_1, \dots, y_N)$, and another processing it in the opposite direction to produce $(y'_0, y'_1, \dots, y'_N)$. The two output sequences are then concatenated to give the total output: $((y_0, y'_0), (y_1, y'_1), \dots, (y_N, y'_N))$. Bidirectional RNN allows the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can process a token increasingly contextually. The ELMo model (2018)[48] is a stacked bidirectional LSTM which takes character-level inputs and produces word-level embeddings.

Two RNNs can be run front-to-back in an encoder-decoder configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors to an output sequence, with an optional attention mechanism. This was used to construct state-of-the-art neural machine translators during the 2014–2017 period. This was an instrumental step towards the development of transformers.[49]

An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions.[50] For example, the row-by-row direction processes an $n \times n$ grid of vectors $x_{i,j}$ in the following order: $x_{1,1}, x_{1,2}, \dots, x_{1,n}, x_{2,1}, x_{2,2}, \dots, x_{2,n}, \dots, x_{n,n}$. The diagonal BiLSTM uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes $x_{i,j}$ depending on its hidden state and cell state on the top and the left side: $h_{i-1,j}, c_{i-1,j}$ and $h_{i,j-1}, c_{i,j-1}$. The other processes it from the top-right corner to the bottom-left.

Fully recurrent neural networks (FRNNs) connect the outputs of all neurons to the inputs of all neurons. In other words, they are fully connected networks. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.
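To make the abstract map $(x_t, h_t) \mapsto (y_t, h_{t+1})$ concrete, here is a minimal NumPy sketch of a vanilla recurrent cell; the layer sizes and the tanh/identity choices are illustrative assumptions, not a prescribed design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 3   # illustrative sizes

# Parameters theta: input-to-hidden, hidden-to-hidden, hidden-to-output.
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))
b_h = np.zeros(n_hid)
b_y = np.zeros(n_out)

def rnn_step(x_t, h_t):
    """One application of f_theta: (x_t, h_t) -> (y_t, h_next)."""
    h_next = np.tanh(W_xh @ x_t + W_hh @ h_t + b_h)  # updated "memory"
    y_t = W_hy @ h_next + b_y                        # output at this step
    return y_t, h_next

# Unroll over a sequence: the same weights are reused at every time step.
h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # a toy length-5 input sequence
    y, h = rnn_step(x, h)
```

The explicit loop is the "unfolding in time" mentioned above: what looks like a stack of layers in diagrams is the same cell applied repeatedly.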
The Hopfield network is an RNN in which all connections across layers are equally sized. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it guarantees that it will converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.

An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to these context units with fixed weights of one.[51] At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron.

Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves.[51] Elman and Jordan networks are also known as "simple recurrent networks" (SRN).

Long short-term memory (LSTM) is the most widely used RNN architecture. It was designed to solve the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates".[54] LSTM prevents backpropagated errors from vanishing or exploding.[55] Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved.[56] LSTM works even given long delays between significant events and can handle signals that mix low- and high-frequency components. Many applications use stacks of LSTMs,[57] in which case the model is called a "deep LSTM". LSTM can learn to recognize context-sensitive languages, unlike previous models based on hidden Markov models (HMM) and similar concepts.[58]

Gated recurrent units (GRUs), introduced in 2014, were designed as a simplification of LSTM. They are used in the full form and several further simplified variants.[59][60] They have fewer parameters than LSTM, as they lack an output gate.[61] Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory.[62] There does not appear to be a particular performance difference between LSTM and GRU.[62][63]
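For illustration, here is a minimal NumPy sketch of one GRU update using one common gating convention; the weight names, shapes, and the parameter-dict packaging are assumptions made for brevity. Note the absence of an output gate, which is what gives the GRU fewer parameters than the LSTM.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU update; p is a dict of weight matrices and bias vectors."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])             # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])             # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])  # candidate state
    return (1.0 - z) * h + z * h_cand  # blend old state with candidate
```

The update gate z interpolates between keeping the old state and accepting the candidate, which is the mechanism that lets gradients pass across many time steps.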
Introduced by Bart Kosko,[64] a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications.[65] A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.[66]

Echo state networks (ESNs) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series.[67] A variant for spiking neurons is known as a liquid state machine.[68]

A recursive neural network[69] is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation.[70][71] They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing.[72] The recursive neural tensor network uses a tensor-based composition function for all nodes in the tree.[73]

Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they interact. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.[74]

Differentiable neural computers (DNCs) are an extension of neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.[75]

Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of context-free grammars (CFGs).[76]

Recurrent neural networks are Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.[77]

An RNN can be trained into a conditionally generative model of sequences, a.k.a. autoregression. Concretely, let us consider the problem of machine translation, that is, given a sequence $(x_1, x_2, \dots, x_n)$ of English words, the model is to produce a sequence $(y_1, \dots, y_m)$ of French words. It is to be solved by a seq2seq model. Now, during training, the encoder half of the model would first ingest $(x_1, x_2, \dots, x_n)$, then the decoder half would start generating a sequence $(\hat{y}_1, \hat{y}_2, \dots, \hat{y}_l)$. The problem is that if the model makes a mistake early on, say at $\hat{y}_2$, then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift $\hat{y}_2$ towards $y_2$, but not the others. Teacher forcing makes it so that the decoder uses the correct output sequence for generating the next entry in the sequence. So, for example, it would see $(y_1, \dots, y_k)$ in order to generate $\hat{y}_{k+1}$.
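A minimal sketch of the difference between free-running decoding and teacher forcing during training; decoder_step, START_TOKEN, and the token representation are hypothetical placeholders standing in for one step of any decoder RNN.

```python
START_TOKEN = 0  # assumed sentinel marking the start of a sequence

def decode(decoder_step, h, targets, teacher_forcing):
    """Run a decoder over len(targets) steps and collect its predictions."""
    predictions = []
    prev = START_TOKEN
    for y_true in targets:
        y_hat, h = decoder_step(prev, h)  # one RNN step: (input, state) -> (pred, state)
        predictions.append(y_hat)
        # Teacher forcing: condition the next step on the ground-truth token,
        # so an early mistake does not derail the rest of the sequence.
        prev = y_true if teacher_forcing else y_hat
    return predictions
```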
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable.

The standard method for training RNNs by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm of backpropagation. A more computationally expensive online variant is called "real-time recurrent learning" or RTRL,[78][79] which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space. In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself, such that the update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.[80][81]

For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon.[82] An online hybrid between BPTT and RTRL with intermediate complexity exists,[83][84] along with variants for continuous time.[85]

A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[55][86] LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems.[36] This problem is also solved in the independently recurrent neural network (IndRNN)[87] by reducing the context of a neuron to its own past state; the cross-neuron information can then be explored in the following layers. Memories of different ranges, including long-term memory, can be learned without the gradient vanishing and exploding problem.

The on-line algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks.[88] It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.

One approach to gradient information computation in RNNs with arbitrary architectures is based on signal-flow-graph diagrammatic derivation.[89] It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations.[90] It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.[90]

Connectionist temporal classification (CTC)[91] is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.[92]
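BPTT is ordinary backpropagation applied to the network unrolled over the sequence. A minimal sketch using PyTorch autograd (sizes and data are illustrative): the explicit Python loop is the unrolling, and loss.backward() accumulates gradient contributions from every time step.

```python
import torch

torch.manual_seed(0)
n_in, n_hid, T = 3, 5, 7
W_xh = torch.randn(n_hid, n_in, requires_grad=True)
W_hh = torch.randn(n_hid, n_hid, requires_grad=True)
w_out = torch.randn(n_hid, requires_grad=True)

xs = torch.randn(T, n_in)      # toy input sequence
target = torch.tensor(1.0)     # toy scalar target at the last step

h = torch.zeros(n_hid)
for x in xs:                   # unroll the recurrence over T steps
    h = torch.tanh(W_xh @ x + W_hh @ h)
loss = (w_out @ h - target) ** 2
loss.backward()                # BPTT: gradients flow back through every step
# W_hh.grad now holds the summed contributions from all T time steps.
```

This also makes the storage trade-off above visible: reverse-mode BPTT must retain the forward activations of all T steps, which is exactly what RTRL avoids at higher per-step cost.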
Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: first, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.

The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.[93][94][95] Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner, where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied, for instance when the network has learned a set fraction of the training data or when a maximum number of generations has been reached. The fitness function evaluates the stopping criterion as it receives the mean-squared-error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error. Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.

The independently recurrent neural network (IndRNN)[87] addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer), and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long- or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections. (A minimal sketch of the IndRNN update appears after this section.)

The neural history compressor is an unsupervised stack of RNNs.[96] At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher-level RNN, which therefore recomputes its internal state only rarely. Each higher-level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level. The system effectively minimizes the description length or the negative logarithm of the probability of the data.[97] Given a lot of learnable predictability in the incoming data sequence, the highest-level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.

It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level).[96] Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, the automatizer can be forced in the next learning phase to predict or imitate, through additional units, the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.[96]
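The IndRNN sketch promised above: in an IndRNN the recurrent weight is a vector rather than a matrix, so each hidden unit sees only its own previous activation. Shapes are assumed, with ReLU as the non-saturating nonlinearity mentioned earlier.

```python
import numpy as np

def indrnn_step(x, h, W, u, b):
    """IndRNN update: u is a vector, so unit i depends only on h[i]."""
    return np.maximum(0.0, W @ x + u * h + b)  # element-wise recurrence + ReLU
```

Compare the element-wise product u * h with the full matrix product W_hh @ h of the vanilla cell shown earlier; constraining the magnitude of each entry of u is what regulates gradient growth along the time dimension.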
A generative model partially overcame the vanishing gradient problem[55] of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[34]

Second-order RNNs use higher-order weights $w_{ijk}$ instead of the standard $w_{ij}$ weights, and states can be a product. This allows a direct mapping to a finite-state machine both in training, stability, and representation.[98][99] Long short-term memory is an example of this but has no such formal mappings or proof of stability.

Hierarchical recurrent neural networks (HRNNs) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms.[96][100] Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models.[101] Hierarchical recurrent neural networks are useful in forecasting, helping to predict disaggregated inflation components of the consumer price index (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established inflation prediction methods.[102]

Generally, a recurrent multilayer perceptron (RMLP) network consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.[103]

A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization, depending on the spatial connections between neurons and on distinct types of neuron activities, each with distinct time properties.[104][105] With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological approval of such a type of hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence.[citation needed] Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model.[101][106]

Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices.[107] The memristors (memory resistors) are implemented by thin-film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems. Memristive networks are a particular type of physical neural network that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity and natural relaxation via the minimization of a function which is asymptotic to the Ising model.
In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor-capacitor network, of a more interesting non-linear behavior. From this point of view, engineering analog memristive networks accounts for a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring or topology. The evolution of these networks can be studied analytically using variations of the Caravelli–Traversa–Di Ventra equation.[108]

A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs. They are typically analyzed by dynamical systems theory. Many RNN models in neuroscience are continuous-time.[16] For a neuron $i$ in the network with activation $y_i$, the rate of change of activation is given by (the standard CTRNN form, reconstructed here):

$\tau_i \dot{y}_i = -y_i + \sum_{j=1}^{n} w_{ji}\,\sigma(y_j - \Theta_j) + I_i(t),$

where $\tau_i$ is the time constant of the postsynaptic node, $w_{ji}$ is the weight of the connection from node $j$ to node $i$, $\sigma$ is a sigmoid function, $\Theta_j$ is the bias of the presynaptic node, and $I_i(t)$ is the external input to node $i$.

CTRNNs have been applied to evolutionary robotics, where they have been used to address vision,[109] co-operation,[110] and minimal cognitive behaviour.[111]

Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have been transformed into equivalent difference equations.[112] This transformation can be thought of as occurring after the post-synaptic node activation functions $y_i(t)$ have been low-pass filtered but prior to sampling.

Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

From a time-series perspective, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters and also as a nonlinear autoregressive exogenous model (NARX).[113] RNNs have infinite impulse response whereas convolutional neural networks have finite impulse response. Both classes of networks exhibit temporal dynamic behavior.[114] A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.

The effect of memory-based learning for the recognition of sequences can also be implemented by a more biologically based model which uses the silencing mechanism exhibited in neurons with relatively high-frequency spiking activity.[115]

Additional stored states and the storage under direct control by the network can be added to both infinite-impulse and finite-impulse networks. Another network or graph can also replace the storage if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of long short-term memory networks (LSTMs) and gated recurrent units. This is also called a feedback neural network (FNN). Modern libraries provide runtime-optimized implementations of the above functionality or allow speeding up the slow loop via just-in-time compilation.

Applications of recurrent neural networks include:
https://en.wikipedia.org/wiki/Recurrent_neural_network
The Simple English Wikipedia is a modified English-language edition of Wikipedia written primarily in Basic English and Learning English.[3] It is one of seven Wikipedias written in an Anglic language or English-based pidgin or creole. The site has the stated aim of providing an encyclopedia for "people with different needs, such as students, children, adults with learning difficulties, and people who are trying to learn English."[4]

Simple English Wikipedia's basic presentation style makes it helpful for beginners learning English.[5] Its simpler word structure and syntax, while missing some nuances, can make information easier to understand when compared with the regular English Wikipedia. The Simple English Wikipedia was launched on September 18, 2001.[1][2]

In 2012, Andrew Lih, a Wikipedian and author, told NBC News' Helen A.S. Popkin that the Simple English Wikipedia does not "have a high standing in the Wikipedia community", and added that it never had a clear purpose: "Is it for people under the age 14, or just a simpler version of complex articles?", wrote Popkin.[6]

Material from the Simple English Wikipedia formed the basis for One Encyclopedia per Child,[7] a project in One Laptop per Child[8] that ended in 2014.[9] In 2018, it was proposed for closure due to a claim that there was no proof the target audience was catered to, but the proposal was rejected due to unjustified policies and lack of approval.[10]

As of May 2025, the site contains over 269,000 content pages. It has more than 1,597,000 registered users, of whom 1,737 have made an edit in the past month.[11]

The articles on the Simple English Wikipedia are usually shorter than their English Wikipedia counterparts, typically presenting only basic information. Tim Dowling of The Guardian newspaper explained that "the Simple English version tends to stick to commonly accepted facts".[12] The interface is also more simply labeled; for instance, the "Random article" link on the English Wikipedia is replaced with a "Show any page" link; users are invited to "change" rather than "edit" pages; and clicking on a red link shows a "page not created" message rather than the usual "page does not exist".[13] The project encourages, but does not enforce, the use of a vocabulary of around 1,500 commonly used English words[3] that is based on Basic English, an 850-word controlled natural language created by Charles Kay Ogden in the 1920s.[12]
https://en.wikipedia.org/wiki/Simple_English_Wikipedia
In computing, firmware is software that provides low-level control of computing device hardware. For a relatively simple device, firmware may perform all control, monitoring and data manipulation functionality. For a more complex device, firmware may provide relatively low-level control as well as hardware abstraction services to higher-level software such as an operating system.

Firmware is found in a wide range of computing devices including personal computers, smartphones, home appliances, vehicles, computer peripherals and in many of the integrated circuits inside each of these larger systems. Firmware is stored in non-volatile memory – either read-only memory (ROM) or programmable memory such as EPROM, EEPROM, or flash. Changing a device's firmware stored in ROM requires physically replacing the memory chip – although some chips are not designed to be removed after manufacture. Programmable firmware memory can be reprogrammed via a procedure sometimes called flashing.[2] Common reasons for changing firmware include fixing bugs and adding features.

Ascher Opler used the term firmware in a 1967 Datamation article, as an intermediary term between hardware and software. Opler projected that fourth-generation computer systems would have a writable control store (a small specialized high-speed memory) into which microcode firmware would be loaded. Many software functions would be moved to microcode, and instruction sets could be customized, with different firmware loaded for different instruction sets.[3]

As computers began to increase in complexity, it became clear that various programs needed to first be initiated and run to provide a consistent environment necessary for running more complex programs at the user's discretion. This required programming the computer to run those programs automatically. Furthermore, as companies, universities, and marketers wanted to sell computers to laypeople with little technical knowledge, greater automation became necessary to allow a lay user to easily run programs for practical purposes. This gave rise to a kind of software that a user would not consciously run, and it led to software that a lay user wouldn't even know about.[4]

As originally used, firmware contrasted with hardware (the CPU itself) and software (normal instructions executing on a CPU). It was not composed of CPU machine instructions, but of lower-level microcode involved in the implementation of machine instructions. It existed on the boundary between hardware and software; thus the name firmware. Over time, popular usage extended the word firmware to denote any computer program that is tightly linked to hardware, including the BIOS on PCs, boot firmware on smartphones, computer peripherals, or the control systems on simple consumer electronic devices such as microwave ovens and remote controls.

In some respects, the various firmware components are as important as the operating system in a working computer. However, unlike most modern operating systems, firmware rarely has a well-evolved automatic mechanism for updating itself to fix functionality issues detected after the unit is shipped. A computer's firmware may be manually updated by a user via a small utility program. In contrast, firmware in mass storage devices (hard-disk drives, optical disc drives, flash memory storage, e.g. solid-state drives) is less frequently updated, even when flash memory (rather than ROM or EEPROM) storage is used for the firmware.

Most computer peripherals are themselves special-purpose computers.
Devices such as printers, scanners, webcams, and USB flash drives have internally stored firmware; some devices may also permit field upgrading of their firmware. For modern, simpler devices, such as USB keyboards, USB mice and USB sound cards, the trend is to store the firmware in on-chip memory in the device's microcontroller, as opposed to storing it in a separate EEPROM chip.

Examples of computer firmware include:

Consumer appliances like gaming consoles, digital cameras and portable music players support firmware upgrades. Some companies use firmware updates to add new playable file formats (codecs). Other features that may change with firmware updates include the GUI or even the battery life. Smartphones have a firmware-over-the-air upgrade capability for adding new features and patching security issues.

Since 1996, most automobiles have employed an on-board computer and various sensors to detect mechanical problems. As of 2010, modern vehicles also employ computer-controlled anti-lock braking systems (ABS) and computer-operated transmission control units (TCUs). The driver can also get in-dash information while driving in this manner, such as real-time fuel economy and tire pressure readings. Local dealers can update most vehicle firmware.

Other firmware applications include:

Flashing is the process of overwriting the existing firmware or data, contained in the EEPROM or flash memory module present in an electronic device, with new data.[6] This can be done to upgrade a device[7] or to change the provider of a service associated with the function of the device, such as changing from one mobile phone service provider to another or installing a new operating system. If firmware is upgradable, the upgrade is often done via a program from the provider, which will often allow the old firmware to be saved before upgrading, so it can be reverted to if the process fails or if the newer version performs worse. Free software replacements for vendor flashing tools have been developed, such as Flashrom.

Sometimes, third parties develop an unofficial new or modified ("aftermarket") version of firmware to provide new features or to unlock hidden functionality; this is referred to as custom firmware. An example is Rockbox as a firmware replacement for portable media players. There are many homebrew projects for various devices, which often unlock general-purpose computing functionality in previously limited devices (e.g., running Doom on iPods). Firmware hacks usually take advantage of the firmware update facility on many devices to install or run themselves. Some, however, must resort to exploits to run, because the manufacturer has attempted to lock the hardware to stop it from running unlicensed code. Most firmware hacks are free software.

The Moscow-based Kaspersky Lab discovered that a group of developers it refers to as the Equation Group has developed hard disk drive firmware modifications for various drive models, containing a trojan horse that allows data to be stored on the drive in locations that will not be erased even if the drive is formatted or wiped.[8] Although the Kaspersky Lab report did not explicitly claim that this group is part of the United States National Security Agency (NSA), evidence obtained from the code of various Equation Group software suggests that they are part of the NSA.[9][10] Researchers from Kaspersky Lab categorized the undertakings of the Equation Group as the most advanced hacking operation ever uncovered, also documenting around 500 infections caused by the Equation Group in at least 42 countries.
Mark Shuttleworth, the founder of the company Canonical, which created the Ubuntu Linux distribution, has described proprietary firmware as a security risk, saying that "firmware on your device is the NSA's best friend" and calling firmware "a trojan horse of monumental proportions". He has asserted that low-quality, closed-source firmware is a major threat to system security:[11] "Your biggest mistake is to assume that the NSA is the only institution abusing this position of trust – in fact, it's reasonable to assume that all firmware is a cesspool of insecurity, courtesy of incompetence of the highest degree from manufacturers, and competence of the highest degree from a very wide range of such agencies". As a potential solution to this problem, he has called for declarative firmware, which would describe "hardware linkage and dependencies" and "should not include executable code".[12] Firmware should be open-source so that the code can be checked and verified.

Custom firmware hacks have also focused on injecting malware into devices such as smartphones or USB devices. One such smartphone injection was demonstrated on the Symbian OS at MalCon,[13][14] a hacker convention. A USB device firmware hack called BadUSB was presented at the Black Hat USA 2014 conference,[15] demonstrating how a USB flash drive microcontroller can be reprogrammed to spoof various other device types to take control of a computer, exfiltrate data, or spy on the user.[16][17] Other security researchers have worked further on how to exploit the principles behind BadUSB,[18] releasing at the same time the source code of hacking tools that can be used to modify the behavior of different USB devices.[19]
https://en.wikipedia.org/wiki/Firmware
In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule,[1] since most problems involve several variables.

Fundamentally, if a function $F$ is defined such that $F = f(x)$, then the derivative of the function $F$ can be taken with respect to another variable. We assume $x$ is a function of $t$, i.e. $x = g(t)$. Then $F = f(g(t))$, so

$F'(t) = f'(g(t)) \cdot g'(t).$

Written in Leibniz notation, this is:

$\frac{dF}{dt} = \frac{dF}{dx} \cdot \frac{dx}{dt}.$

Thus, if it is known how $x$ changes with respect to $t$, then we can determine how $F$ changes with respect to $t$ and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc. For example, if $F(x) = G(y) + H(z)$ then

$\frac{dF}{dx} = \frac{dG}{dy} \cdot \frac{dy}{dx} + \frac{dH}{dz} \cdot \frac{dz}{dx}.$

The most common way to approach related rates problems is the following:[2]

1. Identify the known variables, including rates of change and the rate of change that is to be found.
2. Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found.
3. Differentiate both sides of the equation with respect to time (or the relevant variable).
4. Substitute the known rates of change and the known quantities into the equation.
5. Solve for the wanted rate of change.

Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so will yield an incorrect result, since if those values are substituted for the variables before differentiation, those variables will become constants; and when the equation is differentiated, zeroes appear in places of all variables for which the values were plugged in.

A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall?

The distance between the base of the ladder and the wall, $x$, and the height of the ladder on the wall, $y$, represent the sides of a right triangle with the ladder as the hypotenuse, $h$. The objective is to find $dy/dt$, the rate of change of $y$ with respect to time, $t$, when $h$, $x$ and $dx/dt$, the rate of change of $x$, are known.

Step 1: $x = 6$, $h = 10$, $dx/dt = 3$, and $dh/dt = 0$ (the ladder's length is constant); find $dy/dt$.

Step 2: From the Pythagorean theorem, the equation

$x^2 + y^2 = h^2$

describes the relationship between $x$, $y$ and $h$ for a right triangle. Differentiating both sides of this equation with respect to time, $t$, yields

$2x \frac{dx}{dt} + 2y \frac{dy}{dt} = 2h \frac{dh}{dt}.$

Step 3: When solved for the wanted rate of change, $dy/dt$, this gives us

$\frac{dy}{dt} = \frac{h \frac{dh}{dt} - x \frac{dx}{dt}}{y} = -\frac{x}{y} \cdot \frac{dx}{dt}.$

Steps 4 and 5: Using the variables from step 1, and solving for $y$ using the Pythagorean theorem,

$y = \sqrt{10^2 - 6^2} = 8.$

Plugging in $y = 8$:

$\frac{dy}{dt} = -\frac{6}{8} \cdot 3 = -\frac{9}{4}.$

It is generally assumed that negative values represent the downward direction. Thus, the top of the ladder is sliding down the wall at a rate of 9/4 meters per second.
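As a quick symbolic check of the ladder example, the following SymPy sketch promotes x to a function of time so that differentiation applies the chain rule automatically (the variable names are ours):

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")(t)          # base-to-wall distance, a function of time
y = sp.sqrt(10**2 - x**2)        # wall height, from x^2 + y^2 = 10^2
dy_dt = sp.diff(y, t)            # chain rule introduces dx/dt automatically
rate = dy_dt.subs(sp.Derivative(x, t), 3).subs(x, 6)
print(rate)                      # -9/4
```

Substituting the derivative before the variable mirrors the warning above: the values must go in after differentiation, not before.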
Because one physical quantity often depends on another, which in turn depends on others, such as time, related-rates methods have broad applications in physics. This section presents an example from kinematics and one from electromagnetic induction.

For example, one can consider the kinematics problem where one vehicle is heading west toward an intersection at 80 miles per hour while another is heading north away from the intersection at 60 miles per hour. One can ask whether the vehicles are getting closer or further apart, and at what rate, at the moment when the northbound vehicle is 3 miles north of the intersection and the westbound vehicle is 4 miles east of the intersection.

Big idea: use the chain rule to compute the rate of change of the distance $c$ between the two vehicles.

Plan:

Choose coordinate system: let the $y$-axis point north and the $x$-axis point east.

Identify variables: define $y(t)$ to be the distance of the vehicle heading north from the origin and $x(t)$ to be the distance of the vehicle heading west from the origin.

Express $c$ in terms of $x$ and $y$ via the Pythagorean theorem:

$c = \sqrt{x^2 + y^2}.$

Express $dc/dt$ using the chain rule in terms of $dx/dt$ and $dy/dt$:

$\frac{dc}{dt} = \frac{x \frac{dx}{dt} + y \frac{dy}{dt}}{\sqrt{x^2 + y^2}}.$

Substitute in $x = 4$ mi, $y = 3$ mi, $dx/dt = -80$ mi/hr, $dy/dt = 60$ mi/hr and simplify:

$\frac{dc}{dt} = \frac{4 \cdot (-80) + 3 \cdot 60}{\sqrt{4^2 + 3^2}} = \frac{-320 + 180}{5} = -28 \text{ mi/hr}.$

Consequently, the two vehicles are getting closer together at a rate of 28 mi/hr (a symbolic check of this computation appears after this section).

The magnetic flux through a loop of area $A$ whose normal is at an angle $\theta$ to a magnetic field of strength $B$ is

$\Phi_B = B A \cos \theta.$

Faraday's law of electromagnetic induction states that the induced electromotive force $\mathcal{E}$ is the negative rate of change of magnetic flux $\Phi_B$ through a conducting loop. If the loop area $A$ and magnetic field $B$ are held constant, but the loop is rotated so that the angle $\theta$ is a known function of time, the rate of change of $\theta$ can be related to the rate of change of $\Phi_B$ (and therefore the electromotive force) by taking the time derivative of the flux relation:

$\mathcal{E} = -\frac{d\Phi_B}{dt} = B A \sin \theta \, \frac{d\theta}{dt}.$

If, for example, the loop is rotating at a constant angular velocity $\omega$, so that $\theta = \omega t$, then

$\mathcal{E} = \omega A B \sin \omega t.$
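The SymPy check promised above for the two-vehicle computation (names are ours):

```python
import sympy as sp

t = sp.symbols("t")
x, y = sp.Function("x")(t), sp.Function("y")(t)
c = sp.sqrt(x**2 + y**2)                 # distance between the vehicles
dc_dt = sp.diff(c, t)                    # chain rule yields x*x' + y*y' over c
rate = (dc_dt.subs(sp.Derivative(x, t), -80)
             .subs(sp.Derivative(y, t), 60)
             .subs({x: 4, y: 3}))
print(rate)                              # -28: closing at 28 mi/hr
```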
https://en.wikipedia.org/wiki/Related_rates
Pell's equation, also called the Pell–Fermat equation, is any Diophantine equation of the form

$x^2 - ny^2 = 1,$

where $n$ is a given positive nonsquare integer, and integer solutions are sought for $x$ and $y$. In Cartesian coordinates, the equation is represented by a hyperbola; solutions occur wherever the curve passes through a point whose $x$ and $y$ coordinates are both integers, such as the trivial solution with $x = 1$ and $y = 0$. Joseph Louis Lagrange proved that, as long as $n$ is not a perfect square, Pell's equation has infinitely many distinct integer solutions. These solutions may be used to accurately approximate the square root of $n$ by rational numbers of the form $x/y$.

This equation was first studied extensively in India starting with Brahmagupta,[1] who found an integer solution to $92x^2 + 1 = y^2$ in his Brāhmasphuṭasiddhānta circa 628.[2] Bhaskara II in the 12th century and Narayana Pandit in the 14th century both found general solutions to Pell's equation and other quadratic indeterminate equations. Bhaskara II is generally credited with developing the chakravala method, building on the work of Jayadeva and Brahmagupta. Solutions to specific examples of Pell's equation, such as the Pell numbers arising from the equation with $n = 2$, had been known for much longer, since the time of Pythagoras in Greece and a similar date in India. William Brouncker was the first European to solve Pell's equation. The name of Pell's equation arose from Leonhard Euler mistakenly attributing Brouncker's solution of the equation to John Pell.[3][4][note 1]

As early as 400 BC in India and Greece, mathematicians studied the numbers arising from the $n = 2$ case of Pell's equation, $x^2 - 2y^2 = 1,$ and from the closely related equation $x^2 - 2y^2 = -1,$ because of the connection of these equations to the square root of 2.[5] Indeed, if $x$ and $y$ are positive integers satisfying this equation, then $x/y$ is an approximation of $\sqrt{2}$. The numbers $x$ and $y$ appearing in these approximations, called side and diameter numbers, were known to the Pythagoreans, and Proclus observed that in the opposite direction these numbers obeyed one of these two equations.[5] Similarly, Baudhayana discovered that $x = 17$, $y = 12$ and $x = 577$, $y = 408$ are two solutions to the Pell equation, and that 17/12 and 577/408 are very close approximations to the square root of 2.[6]

Later, Archimedes approximated the square root of 3 by the rational number 1351/780. Although he did not explain his methods, this approximation may be obtained in the same way, as a solution to Pell's equation.[5] Likewise, Archimedes's cattle problem — an ancient word problem about finding the number of cattle belonging to the sun god Helios — can be solved by reformulating it as a Pell's equation. The manuscript containing the problem states that it was devised by Archimedes and recorded in a letter to Eratosthenes,[7] and the attribution to Archimedes is generally accepted today.[8][9]

Around AD 250, Diophantus considered the equation

$a^2x^2 + c = y^2,$

where $a$ and $c$ are fixed numbers, and $x$ and $y$ are the variables to be solved for. This equation is different in form from Pell's equation but equivalent to it. Diophantus solved the equation for $(a, c)$ equal to (1, 1), (1, −1), (1, 12), and (3, 9). Al-Karaji, a 10th-century Persian mathematician, worked on similar problems to Diophantus.[10]
Diophantus solved the equation for (a, c) equal to (1, 1), (1, −1), (1, 12), and (3, 9). Al-Karaji, a 10th-century Persian mathematician, worked on similar problems to Diophantus.[10]

In Indian mathematics, Brahmagupta discovered that

$(x_1^2 - Ny_1^2)(x_2^2 - Ny_2^2) = (x_1x_2 + Ny_1y_2)^2 - N(x_1y_2 + x_2y_1)^2,$

a form of what is now known as Brahmagupta's identity. Using this, he was able to "compose" triples $(x_1, y_1, k_1)$ and $(x_2, y_2, k_2)$ that were solutions of $x^2 - Ny^2 = k$, to generate the new triple

$(x_1x_2 + Ny_1y_2,\; x_1y_2 + x_2y_1,\; k_1k_2).$

Not only did this give a way to generate infinitely many solutions to $x^2 - Ny^2 = 1$ starting with one solution, but also, by dividing such a composition by $k_1k_2$, integer or "nearly integer" solutions could often be obtained. For instance, for N = 92, Brahmagupta composed the triple (10, 1, 8) (since $10^2 - 92\cdot 1^2 = 8$) with itself to get the new triple (192, 20, 64). Dividing throughout (by 8 in the x and y entries and by $8^2 = 64$ in the k entry) gave the triple (24, 5/2, 1), which when composed with itself gave the desired integer solution (1151, 120, 1); this composition is reproduced in the code sketch below. Brahmagupta solved many Pell's equations with this method, proving that it gives solutions starting from an integer solution of $x^2 - Ny^2 = k$ for k = ±1, ±2, or ±4.[11]

The first general method for solving the Pell's equation (for all N) was given by Bhāskara II in 1150, extending the methods of Brahmagupta. Called the chakravala (cyclic) method, it starts by choosing two relatively prime integers a and b, then composing the triple (a, b, k) (that is, one which satisfies $a^2 - Nb^2 = k$) with the trivial triple $(m, 1, m^2 - N)$ to get the triple

$\big(am + Nb,\; a + bm,\; k(m^2 - N)\big),$

which can be scaled down to

$\left(\frac{am + Nb}{k},\; \frac{a + bm}{k},\; \frac{m^2 - N}{k}\right).$

When m is chosen so that (a + bm)/k is an integer, so are the other two numbers in the triple. Among such m, the method chooses one that minimizes the magnitude of $(m^2 - N)/k$ and repeats the process. This method always terminates with a solution. Bhaskara used it to give the solution x = 1766319049, y = 226153980 to the N = 61 case.[11]

Several European mathematicians rediscovered how to solve Pell's equation in the 17th century. Pierre de Fermat found how to solve the equation and in a 1657 letter issued it as a challenge to English mathematicians.[12] In a letter to Kenelm Digby, Bernard Frénicle de Bessy said that Fermat found the smallest solution for N up to 150 and challenged John Wallis to solve the cases N = 151 or 313.
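Brahmagupta's composition rule is easy to experiment with in code. The following Python sketch (the helper names are ours, not from the source) reproduces the N = 92 derivation above, from the seed triple (10, 1, 8) to the integer solution (1151, 120, 1):

```python
from fractions import Fraction

# Triples (x, y, k) satisfy x^2 - N*y^2 = k; Brahmagupta's identity shows
# that composing two such triples yields another one.

N = 92

def compose(t1, t2):
    x1, y1, k1 = t1
    x2, y2, k2 = t2
    return (x1 * x2 + N * y1 * y2, x1 * y2 + x2 * y1, k1 * k2)

t = (10, 1, 8)                       # 10^2 - 92*1^2 = 8
t = compose(t, t)                    # -> (192, 20, 64)
t = (Fraction(t[0], 8), Fraction(t[1], 8), t[2] // 64)   # -> (24, 5/2, 1)
t = compose(t, t)                    # -> (1151, 120, 1)
x, y, k = t
assert x * x - N * y * y == k == 1
print(int(x), int(y))                # 1151 120
```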
Both Wallis and William Brouncker gave solutions to these challenge problems, though Wallis suggests in a letter that the solution was due to Brouncker.[13]

John Pell's connection with the equation is that he revised Thomas Branker's translation[14] of Johann Rahn's 1659 book Teutsche Algebra[note 2] into English, with a discussion of Brouncker's solution of the equation. Leonhard Euler mistakenly thought that this solution was due to Pell, as a result of which he named the equation after Pell.[4]

The general theory of Pell's equation, based on continued fractions and algebraic manipulations with numbers of the form $P + Q\sqrt{a}$, was developed by Lagrange in 1766–1769.[15] In particular, Lagrange gave a proof that the Brouncker–Wallis algorithm always terminates.

Let $h_i/k_i$ denote the unique sequence of convergents of the regular continued fraction for $\sqrt{n}$. Then the pair of positive integers $(x_1, y_1)$ solving Pell's equation and minimizing x satisfies $x_1 = h_i$ and $y_1 = k_i$ for some i. This pair is called the fundamental solution.

The sequence of integers $[a_0; a_1, a_2, \ldots]$ in the regular continued fraction of $\sqrt{n}$ is always eventually periodic. It can be written in the form

$\left[\lfloor\sqrt{n}\rfloor;\; \overline{a_1, a_2, \ldots, a_{r-1}, 2\lfloor\sqrt{n}\rfloor}\right],$

where $\lfloor\cdot\rfloor$ denotes the integer floor, and the sequence $a_1, a_2, \ldots, a_{r-1}, 2\lfloor\sqrt{n}\rfloor$ repeats indefinitely. Moreover, the tuple $(a_1, a_2, \ldots, a_{r-1})$ is palindromic: it reads the same left-to-right and right-to-left.[16]

The fundamental solution is

$(x_1, y_1) = \begin{cases} (h_{r-1}, k_{r-1}), & \text{for } r \text{ even} \\ (h_{2r-1}, k_{2r-1}), & \text{for } r \text{ odd.} \end{cases}$

The computation time for finding the fundamental solution using the continued fraction method, with the aid of the Schönhage–Strassen algorithm for fast integer multiplication, is within a logarithmic factor of the solution size, the number of digits in the pair $(x_1, y_1)$. However, this is not a polynomial-time algorithm, because the number of digits in the solution may be as large as √n, far larger than a polynomial in the number of digits in the input value n.[17]

Once the fundamental solution is found, all remaining solutions may be calculated algebraically from[17]

$x_k + y_k\sqrt{n} = (x_1 + y_1\sqrt{n})^k,$

expanding the right side, equating coefficients of $\sqrt{n}$ on both sides, and equating the other terms on both sides. This yields the recurrence relations

$x_{k+1} = x_1 x_k + n y_1 y_k,$
$y_{k+1} = x_1 y_k + y_1 x_k.$

Although writing out the fundamental solution $(x_1, y_1)$ as a pair of binary numbers may require a large number of bits, it may in many cases be represented more compactly in the form

$x_1 + y_1\sqrt{n} = \prod_{i=1}^{t}\left(a_i + b_i\sqrt{n}\right)^{c_i}$

using much smaller integers $a_i$, $b_i$, and $c_i$.
For instance, Archimedes' cattle problem is equivalent to the Pell equation $x^2 - 410\,286\,423\,278\,424\,y^2 = 1$, the fundamental solution of which has 206,545 digits if written out explicitly. However, the solution is also equal to

$x_1 + y_1\sqrt{n} = u^{2329},$

where

$u = x'_1 + y'_1\sqrt{4\,729\,494} = \left(300\,426\,607\,914\,281\,713\,365\,\sqrt{609} + 84\,129\,507\,677\,858\,393\,258\,\sqrt{7766}\right)^2$

and $x'_1$ and $y'_1$ have only 45 and 41 decimal digits, respectively.[17]

Methods related to the quadratic sieve approach for integer factorization may be used to collect relations between prime numbers in the number field generated by √n, and to combine these relations to find a product representation of this type. The resulting algorithm for solving Pell's equation is more efficient than the continued fraction method, though it still takes more than polynomial time. Under the assumption of the generalized Riemann hypothesis, it can be shown to take time

$\exp O\left(\sqrt{\log N \cdot \log\log N}\right),$

where N = log n is the input size, similarly to the quadratic sieve.[17]

Hallgren showed that a quantum computer can find a product representation, as described above, for the solution to Pell's equation in polynomial time.[18] Hallgren's algorithm, which can be interpreted as an algorithm for finding the group of units of a real quadratic number field, was extended to more general fields by Schmidt and Völlmer.[19]

As an example, consider the instance of Pell's equation for n = 7; that is, $x^2 - 7y^2 = 1$. The continued fraction of $\sqrt{7}$ has the form $[2;\ \overline{1,1,1,4}]$. Since the period has length 4, an even number, the convergent producing the fundamental solution is obtained by truncating the continued fraction right before the end of the first occurrence of the period: $[2;\ 1,1,1] = \frac{8}{3}$. The sequence of convergents for the square root of seven begins

$\frac{2}{1},\ \frac{3}{1},\ \frac{5}{2},\ \frac{8}{3},\ \frac{37}{14},\ \frac{45}{17},\ \frac{82}{31},\ \frac{127}{48},\ \ldots$

Applying the recurrence formula to the fundamental solution (8, 3) generates the infinite sequence of solutions

(8, 3), (127, 48), (2024, 765), (32257, 12192), …

For the Pell's equation $x^2 - 13y^2 = 1$, the continued fraction $\sqrt{13} = [3;\ \overline{1,1,1,1,6}]$ has a period of odd length. In this case the fundamental solution is obtained by truncating the continued fraction right before the second occurrence of the period: $[3;\ 1,1,1,1,6,1,1,1,1] = \frac{649}{180}$. Thus, the fundamental solution is $(x_1, y_1) = (649, 180)$.

The smallest solution can be very large. For example, the smallest solution to $x^2 - 313y^2 = 1$ is (32188120829134849, 1819380158564160), and this is the equation that Frenicle challenged Wallis to solve.[20] Values of n such that the smallest solution of $x^2 - ny^2 = 1$ is greater than the smallest solution for any smaller value of n are 1, 2, 5, 10, 13, 29, 46, 53, 61, 109, … (For these records, see OEIS: A033315 for x and OEIS: A033319 for y.)

When n is an integer square, there is no solution except for the trivial solution (1, 0). For n ≤ 128, the values of x in the fundamental solutions form sequence A002350, and those of y form sequence A002349, in OEIS.
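These worked examples are easy to reproduce programmatically. The following Python sketch (our own code, not a reference implementation) walks the convergents of √n until one solves $x^2 - ny^2 = 1$, then applies the recurrences above; it reproduces the n = 7 and n = 13 examples as well as Bhaskara's n = 61 solution:

```python
from math import isqrt

def pell_fundamental(n):
    """Fundamental solution of x^2 - n*y^2 = 1 via continued-fraction
    convergents of sqrt(n), using the standard quadratic-irrational
    recurrences m' = d*a - m, d' = (n - m'^2)/d, a' = (a0 + m')//d'."""
    a0 = isqrt(n)
    if a0 * a0 == n:
        raise ValueError("n must not be a perfect square")
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0          # convergent numerators h_i
    k_prev, k = 0, 1           # convergent denominators k_i
    while h * h - n * k * k != 1:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

print(pell_fundamental(7))    # (8, 3)
print(pell_fundamental(13))   # (649, 180)
print(pell_fundamental(61))   # (1766319049, 226153980), Bhaskara's example

# Generate further solutions from x_{k+1} = x1*x_k + n*y1*y_k and
# y_{k+1} = x1*y_k + y1*x_k:
x1, y1 = pell_fundamental(7)
x, y = x1, y1
for _ in range(3):
    x, y = x1 * x + 7 * y1 * y, x1 * y + y1 * x
    print(x, y)               # (127, 48), (2024, 765), (32257, 12192)
```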
Pell's equation has connections to several other important subjects in mathematics.

Pell's equation is closely related to the theory of algebraic numbers, as the formula

$x^2 - ny^2 = (x + y\sqrt{n})(x - y\sqrt{n})$

is the norm for the ring $\mathbb{Z}[\sqrt{n}]$ and for the closely related quadratic field $\mathbb{Q}(\sqrt{n})$. Thus, a pair of integers (x, y) solves Pell's equation if and only if $x + y\sqrt{n}$ is a unit with norm 1 in $\mathbb{Z}[\sqrt{n}]$.[21] Dirichlet's unit theorem, that all units of $\mathbb{Z}[\sqrt{n}]$ can be expressed as powers of a single fundamental unit (and multiplication by a sign), is an algebraic restatement of the fact that all solutions to the Pell's equation can be generated from the fundamental solution.[22] The fundamental unit can in general be found by solving a Pell-like equation, but it does not always correspond directly to the fundamental solution of Pell's equation itself, because the fundamental unit may have norm −1 rather than 1, and its coefficients may be half-integers rather than integers.

Demeyer mentions a connection between Pell's equation and the Chebyshev polynomials: if $T_i(x)$ and $U_i(x)$ are the Chebyshev polynomials of the first and second kind respectively, then these polynomials satisfy a form of Pell's equation in any polynomial ring $R[x]$, with $n = x^2 - 1$:[23]

$T_i^2 - (x^2 - 1)U_{i-1}^2 = 1.$

Thus, these polynomials can be generated by the standard technique for Pell's equations of taking powers of a fundamental solution:

$T_i + U_{i-1}\sqrt{x^2 - 1} = (x + \sqrt{x^2 - 1})^i.$

It may further be observed that if $(x_i, y_i)$ are the solutions to any integer Pell's equation, then $x_i = T_i(x_1)$ and $y_i = y_1 U_{i-1}(x_1)$ (a numerical check follows below).[24]

A general development of solutions of Pell's equation $x^2 - ny^2 = 1$ in terms of continued fractions of $\sqrt{n}$ can be presented, as the solutions x and y are approximants to the square root of n and thus are a special case of continued fraction approximations for quadratic irrationals.[16]

The relationship to the continued fractions implies that the solutions to Pell's equation form a semigroup subset of the modular group. Thus, for example, if p and q satisfy Pell's equation, then

$\begin{pmatrix} p & q \\ nq & p \end{pmatrix}$

is a matrix of unit determinant. Products of such matrices take exactly the same form, and thus all such products yield solutions to Pell's equation. This can be understood in part to arise from the fact that successive convergents of a continued fraction share the same property: if $p_{k-1}/q_{k-1}$ and $p_k/q_k$ are two successive convergents of a continued fraction, then the matrix

$\begin{pmatrix} p_{k-1} & p_k \\ q_{k-1} & q_k \end{pmatrix}$

has determinant $(-1)^k$.
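The Chebyshev identity above can be spot-checked numerically; for x > 1 one may use the hyperbolic representations $T_i(x) = \cosh(i\,\operatorname{arccosh} x)$ and $U_{i-1}(x) = \sinh(i\,\operatorname{arccosh} x)/\sinh(\operatorname{arccosh} x)$. A small Python sketch (ours, purely illustrative):

```python
import math

# Check T_i(x)^2 - (x^2 - 1)*U_{i-1}(x)^2 = 1 at a few sample points, using
# the hyperbolic forms valid for x > 1 (cosh^2 - sinh^2 = 1 does the work).
for x in (1.5, 2.0, 3.0):
    t = math.acosh(x)
    for i in range(1, 6):
        T = math.cosh(i * t)
        U = math.sinh(i * t) / math.sinh(t)
        residual = T * T - (x * x - 1) * U * U
        assert abs(residual - 1.0) < 1e-6
print("identity verified numerically")
```

The integer connection can be seen in the same spirit: for n = 7 with fundamental solution $x_1 = 8$, one has $T_2(8) = 2\cdot 8^2 - 1 = 127$, matching the second solution (127, 48) listed earlier.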
Størmer's theorem applies Pell equations to find pairs of consecutive smooth numbers, positive integers whose prime factors are all smaller than a given value.[25][26] As part of this theory, Størmer also investigated divisibility relations among solutions to Pell's equation; in particular, he showed that each solution other than the fundamental solution has a prime factor that does not divide n.[25]

The negative Pell's equation is given by

$x^2 - ny^2 = -1$

and has also been extensively studied. It can be solved by the same method of continued fractions, and has solutions if and only if the period of the continued fraction has odd length. A necessary (but not sufficient) condition for solvability is that n is not divisible by 4 or by a prime of the form 4k + 3.[note 3] Thus, for example, x² − 3y² = −1 is never solvable, but x² − 5y² = −1 may be.[27]

The first few numbers n for which x² − ny² = −1 is solvable are 1 (with only one trivial solution) and 2, 5, 10, 13, 17, 26, 29, …, each with infinitely many solutions.

Let $\alpha = \prod_{j \ge 1,\ j\text{ odd}} (1 - 2^{-j})$. The proportion of square-free n divisible by k primes of the form 4m + 1 for which the negative Pell's equation is solvable is at least α.[28] When the number of prime divisors is not fixed, the proportion is given by 1 − α.[29][30]

If the negative Pell's equation does have a solution for a particular n, its fundamental solution leads to the fundamental one for the positive case by squaring both sides of the defining equation:

$(x^2 - ny^2)^2 = (-1)^2$

implies

$(x^2 + ny^2)^2 - n(2xy)^2 = 1.$

As stated above, if the negative Pell's equation is solvable, a solution can be found using the method of continued fractions as in the positive Pell's equation. The recursion relation works slightly differently, however. Since $(x + y\sqrt{n})(x - y\sqrt{n}) = -1$, the next solution is determined in terms of $i(x_k + y_k\sqrt{n}) = (i(x + y\sqrt{n}))^k$ whenever there is a match, that is, whenever k is odd. The resulting recursion relation is (modulo a minus sign, which is immaterial due to the quadratic nature of the equation)

$x_k = x_{k-2}x_1^2 + n x_{k-2} y_1^2 + 2n y_{k-2} y_1 x_1,$
$y_k = y_{k-2}x_1^2 + n y_{k-2} y_1^2 + 2 x_{k-2} y_1 x_1,$

which gives an infinite tower of solutions to the negative Pell's equation (except for n = 1).
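The same convergent walk used for the positive equation finds negative-Pell solutions when they exist, and squaring demonstrates the passage to the positive case. A brief Python sketch (our own illustration; the iteration cap is an arbitrary safety limit, since an even period means no solution will ever appear):

```python
from math import isqrt

def negative_pell(n, limit=1000):
    """Search the convergents of sqrt(n) for a solution of
    x^2 - n*y^2 = -1; returns None if none is found within `limit` steps."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    h_prev, h, k_prev, k = 1, a0, 0, 1
    for _ in range(limit):
        if h * h - n * k * k == -1:
            return h, k
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return None

x, y = negative_pell(13)               # (18, 5): 18^2 - 13*5^2 = -1
print(x, y)
# Squaring gives the positive fundamental solution (649, 180):
print(x * x + 13 * y * y, 2 * x * y)
```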
The equation $x^2 - ny^2 = N$ is called the generalized[31][32] (or general[16]) Pell's equation. The equation $u^2 - nv^2 = 1$ is the corresponding Pell's resolvent.[16] A recursive algorithm was given by Lagrange in 1768 for solving the equation, reducing the problem to the case $|N| < \sqrt{n}$.[33][34] Such solutions can be derived using the continued-fractions method as outlined above.

If $(x_0, y_0)$ is a solution to $x^2 - ny^2 = N$, and $(u_k, v_k)$ is a solution to $u^2 - nv^2 = 1$, then $(x_k, y_k)$ such that

$x_k + y_k\sqrt{n} = \big(x_0 + y_0\sqrt{n}\big)\big(u_k + v_k\sqrt{n}\big)$

is a solution to $x^2 - ny^2 = N$, a principle named the multiplicative principle (a code sketch follows at the end of this section).[16] The solution $(x_k, y_k)$ is called a Pell multiple of the solution $(x_0, y_0)$.

There exists a finite set of solutions to $x^2 - ny^2 = N$ such that every solution is a Pell multiple of a solution from that set. In particular, if $(u, v)$ is the fundamental solution to $u^2 - nv^2 = 1$, then each solution to the equation is a Pell multiple of a solution (x, y) with

$|x| \le \tfrac{1}{2}\sqrt{|N|}\left(\sqrt{|U|} + 1\right)$ and $|y| \le \tfrac{1}{2\sqrt{n}}\sqrt{|N|}\left(\sqrt{|U|} + 1\right),$

where $U = u + v\sqrt{n}$.[35]

If x and y are positive integer solutions to the Pell's equation with $|N| < \sqrt{n}$, then x/y is a convergent to the continued fraction of $\sqrt{n}$.[35]

Solutions to the generalized Pell's equation are used for solving certain Diophantine equations and units of certain rings,[36][37] and they arise in the study of SIC-POVMs in quantum information theory.[38]

The equation $x^2 - ny^2 = 4$ is similar to the resolvent $x^2 - ny^2 = 1$ in that if a minimal solution to $x^2 - ny^2 = 4$ can be found, then all solutions of the equation can be generated in a similar manner to the case N = 1. For certain n, solutions to $x^2 - ny^2 = 1$ can be generated from those of $x^2 - ny^2 = 4$: if $n \equiv 5 \pmod 8$, then every third solution to $x^2 - ny^2 = 4$ has x and y even, generating a solution to $x^2 - ny^2 = 1$.[16]
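The multiplicative principle is straightforward to demonstrate in code. In the sketch below (the example values are our own), one solution of $x^2 - 7y^2 = 2$ is combined repeatedly with the resolvent solution (8, 3) to produce further Pell multiples:

```python
# Combining one solution of x^2 - n*y^2 = N with solutions of the resolvent
# u^2 - n*v^2 = 1 yields further solutions (Pell multiples).

n, N = 7, 2
x0, y0 = 3, 1        # 3^2 - 7*1^2 = 2
u, v = 8, 3          # fundamental solution of u^2 - 7*v^2 = 1

x, y = x0, y0
for _ in range(4):
    # (x + y*sqrt(n)) * (u + v*sqrt(n)) expanded into rational and
    # irrational parts:
    x, y = x * u + n * y * v, x * v + y * u
    assert x * x - n * y * y == N
    print(x, y)      # (45, 17), (717, 271), (11427, 4319), ...
```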
https://en.wikipedia.org/wiki/Pell%27s_equation
The mathematical disciplines of combinatorics and dynamical systems interact in a number of ways. The ergodic theory of dynamical systems has recently been used to prove combinatorial theorems about number theory, which has given rise to the field of arithmetic combinatorics. Dynamical systems theory is also heavily involved in the relatively recent field of combinatorics on words. The combinatorial aspects of dynamical systems are studied as well: dynamical systems can be defined on combinatorial objects; see for example graph dynamical system.
https://en.wikipedia.org/wiki/Combinatorics_and_dynamical_systems
A display device is an output device for the presentation of information in visual[1] or tactile form (the latter used, for example, in tactile electronic displays for blind people).[2] When the input information is supplied as an electrical signal, the display is called an electronic display.

Common applications for electronic visual displays are television sets and computer monitors. A variety of technologies are used to create the displays in use today.

Some displays can show only digits or alphanumeric characters. These are called segment displays, because they are composed of several segments that switch on and off to give the appearance of the desired glyph. The segments are usually single LEDs or liquid crystals. They are mostly used in digital watches and pocket calculators. Common types are seven-segment displays, which are used for numerals only, and alphanumeric fourteen-segment displays and sixteen-segment displays, which can display numerals and Roman alphabet letters. Cathode-ray tubes were also formerly widely used.

Two-dimensional displays that cover a full area (usually a rectangle) are also called video displays, since presenting video is their main modality. Full-area two-dimensional displays are used in a wide range of devices, built on several underlying technologies. The multiplexed display technique is used to drive most display devices.
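As an illustration of how a segment display forms glyphs, the following Python sketch (the segment labels and lookup table are our own, purely illustrative) maps each decimal digit to the segments that must be lit and renders it as ASCII art:

```python
# Segments labeled a-g: a=top, b=upper right, c=lower right, d=bottom,
# e=lower left, f=upper left, g=middle. Each digit lights a subset.
SEGMENTS = {
    "0": "abcdef", "1": "bc",     "2": "abdeg",  "3": "abcdg",   "4": "bcfg",
    "5": "acdfg",  "6": "acdefg", "7": "abc",    "8": "abcdefg", "9": "abcdfg",
}

def render(digit):
    """Render one digit as three rows of ASCII art."""
    on = set(SEGMENTS[digit])
    seg = lambda name, ch: ch if name in on else " "
    return "\n".join([
        " " + seg("a", "_") + " ",
        seg("f", "|") + seg("g", "_") + seg("b", "|"),
        seg("e", "|") + seg("d", "_") + seg("c", "|"),
    ])

print(render("3"))
```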
https://en.wikipedia.org/wiki/Display_device
Glitch removal is the elimination of glitches, unnecessary signal transitions without functionality, from electronic circuits. Power dissipation of a gate occurs in two ways: static power dissipation and dynamic power dissipation. Glitch power falls under dynamic dissipation in the circuit and is directly proportional to switching activity. Glitch power dissipation is 20%–70% of total power dissipation, and hence glitching should be eliminated for low-power design.

Switching activity occurs due to signal transitions, which are of two types: functional transitions and glitches. Switching power dissipation is directly proportional to the switching activity (α), load capacitance (C), supply voltage (V), and clock frequency (f):

$P_{\text{switching}} = \alpha\, C\, V^2 f$

Switching activity means transitions between logic levels. Glitches are unnecessary transitions, and more glitches result in higher power dissipation. As per the above equation, switching power dissipation can be controlled by reducing the switching activity (α), by voltage scaling, etc.

As discussed, more transitions result in more glitches and hence more power dissipation. To minimize glitch occurrence, switching activity should be minimized. For example, Gray code could be used in counters instead of binary code, since every increment in Gray code flips only one bit (see the sketch below).

Gate freezing minimizes power dissipation by eliminating glitching. It relies on the availability of modified standard library cells such as the so-called F-Gate. This method consists of transforming high-glitch gates into modified devices which filter out the glitches when a control signal is applied. When the control signal is high, the F-Gate operates as normal, but when the control signal is low, the gate output is disconnected from the ground. As a result, it can never be discharged to logic 0, and glitches are prevented.

Hazards in digital circuits are unnecessary transitions due to varying path delays in the circuit. Balanced path delay techniques can be used to resolve differing path delays. To make path delays equal, buffers are inserted on the faster paths. Balanced path delays avoid glitches at the output.

Hazard filtering is another way to remove glitching. In hazard filtering, gate propagation delays are adjusted. This results in balancing all path delays at the output. Hazard filtering is preferred over path balancing, as path balancing consumes more power due to the insertion of additional buffers.

Gate upsizing and gate downsizing techniques are used for path balancing. A gate is replaced by a logically equivalent but differently-sized cell, so that the delay of the gate is changed. Because increasing the gate size also increases power dissipation, gate upsizing is only used when the power saved by glitch removal exceeds the power dissipated due to the increase in size. Gate sizing affects glitching transitions but does not affect functional transitions.

The delay of a gate is a function of its threshold voltage. Non-critical paths are selected, and the threshold voltage of the gates in these paths is increased. This results in balanced propagation delays along the different paths converging at the receiving gate. Performance is maintained, since it is determined by the time required by the critical path. A higher threshold voltage also reduces the leakage current of a path.
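The Gray-code remark can be made concrete. The sketch below (our own illustration) compares the number of bit toggles per increment in a plain binary counter and a Gray-coded one, since dynamic power scales with the activity factor α:

```python
# Count bit flips per increment for binary versus Gray-coded counters;
# fewer toggles means a lower activity factor in P = alpha * C * V^2 * f.

def gray(n):
    """Standard reflected-binary Gray code of n."""
    return n ^ (n >> 1)

def flips(a, b):
    """Number of bits that toggle between two codewords."""
    return bin(a ^ b).count("1")

WIDTH = 4
binary_flips = sum(flips(i, i + 1) for i in range(2**WIDTH - 1))
gray_flips = sum(flips(gray(i), gray(i + 1)) for i in range(2**WIDTH - 1))
print(binary_flips, gray_flips)   # 26 vs 15: Gray code toggles one bit per step
```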
https://en.wikipedia.org/wiki/Glitch_removal
Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies.[1] It allows long-distance patient and clinician contact, care, advice, reminders, education, intervention, monitoring, and remote admissions.[2][3]

Telemedicine is sometimes used as a synonym, or in a more limited sense to describe remote clinical services, such as diagnosis and monitoring. When rural settings, lack of transport, a lack of mobility, conditions due to outbreaks, epidemics or pandemics, decreased funding, or a lack of staff restrict access to care, telehealth may bridge the gap[4] and can even improve retention in treatment,[5] as well as provide distance learning; meetings, supervision, and presentations between practitioners; online information and health data management; and healthcare system integration.[6] Telehealth could include two clinicians discussing a case over video conference; a robotic surgery occurring through remote access; physical therapy done via digital monitoring instruments, live feed and application combinations; tests being forwarded between facilities for interpretation by a higher specialist; home monitoring through continuous sending of patient health data; a client-to-practitioner online conference; or even videophone interpretation during a consult.[1][2][6]

Telehealth is sometimes discussed interchangeably with telemedicine, the latter being more common than the former. The Health Resources and Services Administration distinguishes telehealth from telemedicine in its scope, defining telemedicine only as describing remote clinical services, such as diagnosis and monitoring, while telehealth includes preventative, promotive, and curative care delivery.[1] This includes the above-mentioned non-clinical applications, like administration and provider education.[2][3]

The United States Department of Health and Human Services states that the term telehealth includes "non-clinical services, such as provider training, administrative meetings, and continuing medical education", and that the term telemedicine means "remote clinical services".[7] The World Health Organization uses telemedicine to describe all aspects of health care, including preventive care.[8] The American Telemedicine Association uses the terms telemedicine and telehealth interchangeably, although it acknowledges that telehealth is sometimes used more broadly for remote health not involving active clinical treatments.[9]

eHealth is another related term, used particularly in the U.K.
and Europe, as an umbrella term that includes telehealth, electronic medical records, and other components of health information technology.[10]

Telehealth requires good Internet access by participants, usually in the form of a strong, reliable broadband connection, and broadband mobile communication technology of at least the fourth-generation (4G) or long-term evolution (LTE) standard, to overcome issues with video stability and bandwidth restrictions.[11][12][13] As broadband infrastructure has improved, telehealth usage has become more widely feasible.[1][2]

Healthcare providers often begin telehealth with a needs assessment which assesses hardships that can be improved by telehealth, such as travel time, costs, or time off work.[1][2] Collaborators such as technology companies can ease the transition.[1]

Delivery can come within four distinct domains: live video (synchronous), store-and-forward (asynchronous), remote patient monitoring, and mobile health.[14] Audio-based telemedicine, primarily through telephone consultations, has been studied as a tool for managing chronic conditions. A systematic review of 40 randomized controlled trials found that audio-based care was generally comparable to in-person or video care, though with low to very low certainty of evidence.[15]

Store-and-forward telemedicine involves acquiring medical data (like medical images, biosignals, etc.) and then transmitting this data to a doctor or medical specialist at a convenient time for assessment offline.[9] It does not require the presence of both parties at the same time.[16] Dermatology (cf. teledermatology), radiology, and pathology are common specialties that are conducive to asynchronous telemedicine. A properly structured medical record, preferably in electronic form, should be a component of this transfer. The 'store-and-forward' process requires the clinician to rely on a history report and audio/video information in lieu of a physical examination.[9]

Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma.
These services can provide comparable health outcomes to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective.[17] Examples include home-based nocturnal dialysis[18] and improved joint management.[19]

Electronic consultations are possible through interactive telemedicine services which provide real-time interactions between patient and provider.[16] Videoconferencing has been used in a wide range of clinical disciplines and settings for various purposes, including management, diagnosis, counseling, and monitoring of patients.[20]

Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations, for communication between people in real time.[21]

At the dawn of the technology, videotelephony also included image phones, which would exchange still images between units every few seconds over conventional POTS-type telephone lines, essentially the same as slow-scan TV systems.

Currently, videotelephony is particularly useful to the deaf and speech-impaired, who can use it with sign language and also with a video relay service, as well as to those with mobility issues or those who are located in distant places and are in need of telemedical or tele-educational services.[22][23]

Common daily emergency telemedicine is performed by SAMU Regulator Physicians in France, Spain, Chile, and Brazil. Aircraft and maritime emergencies are also handled by SAMU centres in Paris, Lisbon, and Toulouse.[24]

A recent study identified three major barriers to the adoption of telemedicine in emergency and critical care units.

Emergency telehealth is also gaining acceptance in the United States. There are several modalities currently being practiced, including but not limited to TeleTriage, TeleMSE, and ePPE. An example of telehealth in the field is when EMS arrives on the scene of an incident and is able to take an EKG that is then sent directly to a physician at the hospital to be read, allowing for instant care and management.[26]

Telenursing refers to the use of telecommunications and information technology to provide nursing services in health care whenever a large physical distance exists between patient and nurse, or between any number of nurses. As a field, it is part of telehealth and has many points of contact with other medical and non-medical applications, such as telediagnosis, teleconsultation, and telemonitoring.

Telenursing is achieving significant growth rates in many countries due to several factors: the preoccupation with reducing the costs of health care, an increase in the aging and chronically ill population, and the increase in coverage of health care to distant, rural, small, or sparsely populated regions. Among its benefits, telenursing may help solve increasing shortages of nurses, reduce distances and travel time, and keep patients out of hospital. A greater degree of job satisfaction has been registered among telenurses.[27]

In Australia, during January 2014, Melbourne tech startup Small World Social collaborated with the Australian Breastfeeding Association to create the first hands-free breastfeeding Google Glass application for new mothers.[28] The application, named Google Glass Breastfeeding app trial, allows mothers to nurse their baby while viewing instructions about common breastfeeding issues (latching on, posture, etc.)
or call a lactation consultant via a secure Google Hangout,[29] who can view the issue through the mother's Google Glass camera.[30] The trial was successfully concluded in Melbourne in April 2014, and 100% of participants were breastfeeding confidently.[31][32][33][34]

Palliative care is an interdisciplinary medical caregiving approach aimed at optimizing quality of life and mitigating suffering among people with serious, complex, and often terminal illnesses. In the past, palliative care was a disease-specific approach, but today the World Health Organization (WHO) takes a broader approach, suggesting that palliative care should be applied as early as possible to any chronic and fatal illness.

As in many aspects of health care, telehealth is increasingly being used in palliative care[35] and is often referred to as telepalliative care.[36] The types of technology applied in telepalliative care are typically telecommunication technologies, such as video conferencing or messaging for follow-up, or digital symptom assessments through digital questionnaires generating alerts to health care professionals.[37] Telepalliative care has been shown to be a feasible approach to deliver palliative care among patients, caregivers, and health care professionals.[38][37][39] Telepalliative care can provide an added support system that enables patients to remain at home, through self-reporting of symptoms and tailoring of care to specific patients.[39] Studies have shown that the use of telehealth in palliative care is mostly well received by patients, and that telepalliative care may improve access to health care professionals at home and enhance feelings of security and safety among patients receiving palliative care.[38] Further, telepalliative care may enable more efficient utilization of healthcare resources, promote collaboration between different levels of healthcare, and make healthcare professionals more responsive to changes in patients' condition.[37]

Challenging aspects of the use of telehealth in palliative care have also been described. Generally, palliative care is a diverse medical specialty, involving interdisciplinary professionals from different professional traditions and cultures, delivering care to a heterogeneous cohort of patients with diverse diseases, conditions, and symptoms. This makes it a challenge to develop telehealth that is suitable for all patients and in all contexts of palliative care. Some of the barriers to telepalliative care relate to inflexible reporting of complex and fluctuating symptoms and circumstances using electronic questionnaires.[39] Further, palliative care emphasizes a holistic approach that should address existential, spiritual, and mental distress related to serious illness.[40] However, few studies have included the self-reporting of existential or spiritual concerns, emotions, and well-being.[39] Healthcare professionals may also be uncomfortable providing emotional or psychological care remotely.[37] Palliative care has been characterized as high-touch rather than high-tech, limiting the interest in applying technological advancements when developing interventions.[41] To optimize the advantages and minimize the challenges of telehealth in home-based palliative care, future research should include users in the design and development process.
Understanding the potential of telehealth to support therapeutic relationships between patients and health care professionals, and being aware of the possible difficulties and tensions it may create, are critical to its successful and acceptable use.[37][39]

Telepharmacy is the delivery of pharmaceutical care via telecommunications to patients in locations where they may not have direct contact with a pharmacist. It is an instance of the wider phenomenon of telemedicine, as implemented in the field of pharmacy. Telepharmacy services include drug therapy monitoring, patient counseling, prior authorization and refill authorization for prescription drugs, and monitoring of formulary compliance with the aid of teleconferencing or videoconferencing. Remote dispensing of medications by automated packaging and labeling systems can also be thought of as an instance of telepharmacy. Telepharmacy services can be delivered at retail pharmacy sites or through hospitals, nursing homes, or other medical care facilities. This approach allows patients in remote or underserved areas to receive pharmacy services that would otherwise be unavailable to them, enhancing access to care and ensuring continuity in medication management.[42] Health outcomes appear similar when pharmacy services are delivered by telepharmacy compared to traditional service delivery.[43]

The term can also refer to the use of videoconferencing in pharmacy for other purposes, such as providing education, training, and management services to pharmacists and pharmacy staff remotely.[44]

Telepsychiatry, or telemental health, refers to the use of telecommunications technology (mostly videoconferencing and phone calls) to deliver psychiatric care remotely for people with mental health conditions. It is a branch of telemedicine.[45][46]

Telepsychiatry can be effective in treating people with mental health conditions. In the short term, it can be as acceptable and effective as face-to-face care.[47] Research also suggests comparable therapeutic factors, such as changes in problematic thinking or behaviour.[48]

It can improve access to mental health services for some, but might also represent a barrier for those lacking access to a suitable device, the internet, or the necessary digital skills. Factors such as poverty that are associated with lack of internet access are also associated with greater risk of mental health problems, making digital exclusion an important problem of telemental health services.[47]

Teledentistry is the use of information technology and telecommunications for dental care, consultation, education, and public awareness, in the same manner as telehealth and telemedicine.

Tele-audiology (or teleaudiology) is the utilization of telehealth to provide audiological services and may include the full scope of audiological practice. The term was first used by Gregg Givens in 1999, in reference to a system being developed at East Carolina University in North Carolina, US.[50]

Teleneurology describes the use of mobile technology to provide neurological care remotely, including care for stroke, movement disorders such as Parkinson's disease, and seizure disorders (e.g., epilepsy). Teleneurology offers the opportunity to improve health care access for billions around the globe, from those living in urban locations to those in remote, rural locations. Evidence shows that individuals with Parkinson's disease prefer a personal connection with a remote specialist to their local clinician.
Such home care is convenient but requires access to and familiarity with the Internet.[51][52] A 2017 randomized controlled trial of "virtual house calls", or video visits, with individuals diagnosed with Parkinson's disease evidences patient preference for the remote specialist over the local clinician after one year.[52] Teleneurology for patients with Parkinson's disease has been found to be cheaper than in-person visits, by reducing transportation costs and travel time.[53][54] A recent systematic review by Ray Dorsey et al.[51] describes both the limitations and the potential benefits of teleneurology in improving care for patients with chronic neurological conditions, especially in low-income countries. White, well-educated, and technologically savvy people are the biggest consumers of telehealth services for Parkinson's disease, as compared to ethnic minorities in the US.[53][54]

Telemedicine in neurosurgery was historically used primarily for follow-up visits by patients who had to travel far to undergo surgery.[55] In the last decade, telemedicine was also used for remote ICU rounding, as well as for prompt evaluation of acute ischemic stroke and administration of IV alteplase in conjunction with neurology.[56][57] From the onset of the COVID-19 pandemic, there was a rapid surge in the use of telemedicine across all divisions of neurosurgery: vascular, oncology, spine, and functional neurosurgery. It has gained popularity not only for follow-up visits but also for seeing new patients or following established patients, regardless of whether they underwent surgery.[58][59] Telemedicine is not limited to direct patient care only; there are a number of new research groups and companies focused on using telemedicine for clinical trials involving patients with neurosurgical diagnoses.

Teleneuropsychology is the use of telehealth/videoconference technology for the remote administration of neuropsychological tests. Neuropsychological tests are used to evaluate the cognitive status of individuals with known or suspected brain disorders and to provide a profile of cognitive strengths and weaknesses. Through a series of studies, there is growing support in the literature showing that remote videoconference-based administration of many standard neuropsychological tests results in test findings similar to traditional in-person evaluations, thereby establishing the basis for the reliability and validity of teleneuropsychological assessment.[60][61][62][63][64][65][66]

Telenutrition refers to the use of video conferencing or telephony to provide online consultation by a nutritionist or dietician. Patients or clients upload their vital statistics, diet logs, food pictures, etc., to a telenutrition portal, which is then used by the nutritionist or dietician to analyze their current health condition. The nutritionist or dietician can then set goals for their respective clients or patients and monitor their progress regularly through follow-up consultations.

Telenutrition portals can help people seek remote consultation for themselves and/or their family. This can be extremely helpful for elderly or bedridden patients, who can consult their dietician from the comfort of their homes. Telenutrition was shown to be feasible, and the majority of patients trusted the nutritional televisits that took the place of scheduled but unprovided follow-up visits during the lockdown of the COVID-19 pandemic.[67]

Telerehabilitation (or e-rehabilitation[68][69]) is the delivery of rehabilitation services over telecommunication networks and the Internet.
Most types of services fall into two categories: clinical assessment (of the patient's functional abilities in his or her environment) and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic, because the patient has a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in clinical consultation at a distance.

Most telerehabilitation is highly visual. As of 2014, the most commonly used mediums were webcams, videoconferencing, phone lines, videophones, and webpages containing rich web applications. The visual nature of telerehabilitation technology limits the types of rehabilitation services that can be provided. It is most widely used for neuropsychological rehabilitation; fitting of rehabilitation equipment such as wheelchairs, braces, or artificial limbs; and speech-language pathology. Rich web applications for neuropsychological rehabilitation (cognitive rehabilitation) of cognitive impairment (from many etiologies) were first introduced in 2001. This endeavor has expanded as a teletherapy application for cognitive skills enhancement programs for school children. Tele-audiology (hearing assessments) is a growing application. Physical therapy and psychology interventions delivered via telehealth may result in outcomes similar to those delivered in person for a range of health conditions.[70]

Two important areas of telerehabilitation research are (1) demonstrating equivalence of assessment and therapy to in-person assessment and therapy, and (2) building new data collection systems to digitize information that a therapist can use in practice. Ground-breaking research in telehaptics (the sense of touch) and virtual reality may broaden the scope of telerehabilitation practice in the future.

In the United States, the National Institute on Disability and Rehabilitation Research (NIDRR)[71] supports research on and the development of telerehabilitation. NIDRR's grantees include the "Rehabilitation Engineering and Research Center" (RERC) at the University of Pittsburgh, the Rehabilitation Institute of Chicago, the State University of New York at Buffalo, and the National Rehabilitation Hospital in Washington, D.C. Other federal funders of research are the Veterans Health Administration, the Health Services Research Administration in the US Department of Health and Human Services, and the Department of Defense.[72] Outside the United States, excellent research is conducted in Australia and Europe.

Only a few health insurers in the United States, and about half of Medicaid programs,[73] reimburse for telerehabilitation services. If research shows that teleassessments and teletherapy are equivalent to clinical encounters, it is more likely that insurers and Medicare will cover telerehabilitation services. In India, the Indian Association of Chartered Physiotherapists (IACP) provides telerehabilitation facilities. With the support and collaboration of local clinics, private practitioners, and the members of IACP, IACP runs the facility, named Telemedicine. IACP maintains an internet-based list of its members on its website, through which patients can make online appointments.

Telemedicine can be utilized to improve the efficiency and effectiveness of care delivery in a trauma environment.
Examples include:

Telemedicine for trauma triage: using telemedicine, trauma specialists can interact with personnel on the scene of a mass casualty or disaster situation via the internet, using mobile devices to determine the severity of injuries. They can provide clinical assessments and determine whether those injured must be evacuated for necessary care. Remote trauma specialists can provide the same quality of clinical assessment and plan of care as a trauma specialist located physically with the patient.[74]

Telemedicine for intensive care unit (ICU) rounds: telemedicine is also being used in some trauma ICUs to reduce the spread of infections. Rounds are usually conducted at hospitals across the country by a team of approximately ten or more people, including attending physicians, fellows, residents, and other clinicians. This group usually moves from bed to bed in a unit, discussing each patient. This aids the transition of care for patients from the night shift to the morning shift, but it also serves as an educational experience for residents new to the team. A new approach features the team conducting rounds from a conference room using a video-conferencing system. The trauma attending, residents, fellows, nurses, nurse practitioners, and pharmacists are able to watch a live video stream from the patient's bedside. They can see the vital signs on the monitor, view the settings on the respiratory ventilator, and/or view the patient's wounds. Video-conferencing allows remote viewers to conduct two-way communication with clinicians at the bedside.[75]

Telemedicine for trauma education: some trauma centers deliver trauma education lectures to hospitals and health care providers worldwide using video conferencing technology. Each lecture provides fundamental principles, first-hand knowledge, and evidence-based methods for critical analysis of established clinical practice standards, and comparisons to newer advanced alternatives. The various sites collaborate and share their perspectives based on location, available staff, and available resources.[76]

Telemedicine in the trauma operating room: trauma surgeons are able to observe and consult on cases from a remote location using video conferencing. This capability allows the attending to view the residents in real time. The remote surgeon has the capability to control the camera (pan, tilt, and zoom) to get the best angle of the procedure, while at the same time providing expertise in order to provide the best possible care to the patient.[77]

ECGs, or electrocardiographs, can be transmitted using telephone and wireless links. Willem Einthoven, the inventor of the ECG, actually did tests with the transmission of ECG via telephone lines. This was because the hospital did not allow him to move patients outside the hospital to his laboratory for testing of his new device. In 1906, Einthoven came up with a way to transmit the data from the hospital directly to his lab.[78][79]

One of the oldest known telecardiology systems for teletransmission of ECGs was established in Gwalior, India, in 1975 at GR Medical College by Ajai Shanker, S. Makhija, and P.K. Mantri, using an indigenous technique for the first time in India. This system enabled wireless transmission of the ECG from a moving ICU van or the patient's home to the central station in the ICU of the department of medicine. Wireless transmission was done using frequency modulation, which eliminated noise. Transmission was also done through telephone lines.
The ECG output was connected to the telephone input using a modulator that converted the ECG into high-frequency sound. At the other end, a demodulator reconverted the sound into the ECG with good gain accuracy. The ECG was converted to sound waves with a frequency varying from 500 Hz to 2500 Hz, with 1500 Hz at baseline. This system was also used to monitor patients with pacemakers in remote areas. The central control unit at the ICU was able to correctly interpret arrhythmia. This technique helped medical aid reach remote areas.[80]

In addition, electronic stethoscopes can be used as recording devices, which is helpful for purposes of telecardiology. There are many examples of successful telecardiology services worldwide.

In Pakistan, three pilot projects in telemedicine were initiated by the Ministry of IT & Telecom, Government of Pakistan (MoIT), through the Electronic Government Directorate, in collaboration with Oratier Technologies (a pioneer company within Pakistan dealing with healthcare and HMIS) and PakDataCom (a bandwidth provider). Three hub stations were linked via the Pak Sat-I communications satellite, and four districts were linked with another hub. A 312 kb link was also established with remote sites, and 1 Mbit/s bandwidth was provided at each hub. Three hubs were established: the Mayo Hospital (the largest hospital in Asia), JPMC Karachi, and Holy Family Rawalpindi. Twelve remote sites were connected, and an average of 1,500 patients were treated per month per hub. The project was still running smoothly after two years.[81]

Wireless ambulatory ECG technology, moving beyond previous ambulatory ECG technology such as the Holter monitor, now includes smartphones and Apple Watches, which can perform at-home cardiac monitoring and send the data to a physician via the Internet.[82]

Teleradiology is the ability to send radiographic images (X-ray, CT, MR, PET/CT, SPECT/CT, MG, US, ...) from one location to another.[83] For this process to be implemented, three essential components are required: an image-sending station, a transmission network, and a receiving image-review station. The most typical implementation is two computers connected via the Internet. The computer at the receiving end will need a high-quality display screen that has been tested and cleared for clinical purposes. Sometimes the receiving computer will have a printer for convenience.

The teleradiology process begins at the image-sending station. The radiographic image and a modem or other connection are required for this first step. The image is scanned and then sent via the network connection to the receiving computer. Today's high-speed broadband Internet enables the use of new technologies for teleradiology: the image reviewer can now have access to distant servers in order to view an exam. Therefore, they do not need particular workstations to view the images; a standard personal computer (PC) and digital subscriber line (DSL) connection is enough to reach Keosys' central server. No particular software is necessary on the PC, and the images can be reached from anywhere in the world.

Teleradiology is the most popular use of telemedicine and accounts for at least 50% of all telemedicine usage.

Telepathology is the practice of pathology at a distance.
It uses telecommunications technology to facilitate the transfer of image-rich pathology data between distant locations for the purposes of diagnosis, education, and research.[84][85] The performance of telepathology requires that a pathologist select the video images for analysis and the rendering of diagnoses. The use of "television microscopy", the forerunner of telepathology, did not require that a pathologist have physical or virtual "hands-on" involvement in the selection of microscopic fields of view for analysis and diagnosis.

A pathologist, Ronald S. Weinstein, M.D., coined the term "telepathology" in 1986. In an editorial in a medical journal, Weinstein outlined the actions that would be needed to create remote pathology diagnostic services.[86] He and his collaborators published the first scientific paper on robotic telepathology.[87] Weinstein was also granted the first U.S. patents for robotic telepathology systems and telepathology diagnostic networks.[88] Weinstein is known to many as the "father of telepathology".[89] In Norway, Eide and Nordrum implemented the first sustainable clinical telepathology service in 1989.[90] This is still in operation, decades later. A number of clinical telepathology services have benefited many thousands of patients in North America, Europe, and Asia.

Telepathology has been successfully used for many applications, including the rendering of histopathology tissue diagnoses at a distance, education, and research. Although digital pathology imaging, including virtual microscopy, is the mode of choice for telepathology services in developed countries, analog telepathology imaging is still used for patient services in some developing countries.

Teledermatology allows dermatology consultations over a distance using audio, visual, and data communication, and has been found to improve efficiency, access to specialty care, and patient satisfaction.[91][92] Applications comprise health care management, such as diagnosis, consultation, and treatment, as well as (continuing medical) education.[93][94][95] The dermatologists Perednia and Brown were the first to coin the term "teledermatology", in 1995, when they described the value of a teledermatologic service in a rural area underserved by dermatologists.[96]

Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Today, applications of teleophthalmology encompass access to eye specialists for patients in remote areas, ophthalmic disease screening, diagnosis, and monitoring, as well as distance learning. Teleophthalmology may help reduce disparities by providing remote, low-cost screening tests, such as diabetic retinopathy screening, to low-income and uninsured patients.[97][98] In Mizoram, India, a hilly area with poor roads, teleophthalmology provided care to over 10,000 patients between 2011 and 2015. These patients were examined by ophthalmic assistants locally, but surgery was done on appointment after the patient images were viewed online by eye surgeons in the hospital 6–12 hours away. Instead of an average of five trips for, say, a cataract procedure, only one was required for the surgery itself, as even post-op care, like the removal of stitches and appointments for glasses, was done locally. There were large cost savings in travel as well.[99]

In the United States, some companies allow patients to complete an online visual exam and, within 24 hours, receive a prescription from an optometrist valid for eyeglasses, contact lenses, or both.
Some US states, such as Indiana, have attempted to ban these companies from doing business.[100]

Remote surgery (also known as telesurgery) is the ability for a doctor to perform surgery on a patient even though they are not physically in the same location. It is a form of telepresence. Remote surgery combines elements of robotics, cutting-edge telecommunications such as high-speed data connections, telehaptics, and elements of management information systems. While the field of robotic surgery is fairly well established, most of these robots are controlled by surgeons at the location of the surgery. Remote surgery is remote work for surgeons, where the physical distance between the surgeon and the patient is immaterial. It promises to allow the expertise of specialized surgeons to be available to patients worldwide, without the need for patients to travel beyond their local hospital.[101]

Remote surgery, or telesurgery, is the performance of surgical procedures where the surgeon is not physically in the same location as the patient, using a robotic teleoperator system controlled by the surgeon. The remote operator may be given tactile feedback by the system. Remote surgery combines elements of robotics and high-speed data connections. A critical limiting factor is the speed, latency, and reliability of the communication system between the surgeon and the patient, though trans-Atlantic surgeries have been demonstrated.

Telemedicine has been used globally to increase access to abortion care, specifically medical abortion, in environments where few abortion care providers exist or abortion is legally restricted. Clinicians are able to virtually provide counseling, review screening tests, observe the administration of an abortion medication, and directly mail abortion pills to people.[102] In 2004, Women on Web (WoW), Amsterdam, started offering online consultations, mostly to people living in areas where abortion was legally restricted, informing them how to safely use medical abortion drugs to end a pregnancy.[102] People contact the Women on Web service online; physicians review any necessary lab results or ultrasounds, mail mifepristone and misoprostol pills to people, then follow up through online communication.[103]

In the United States, medical abortion was introduced as a telehealth service in Iowa by Planned Parenthood of the Heartland in 2008, to allow a patient at one health facility to communicate via secure video with a health provider at another facility.[104] In this model, a person seeking abortion care must come to a health facility. An abortion care provider communicates with the person located at another site using clinic-to-clinic videoconferencing, to provide medical abortion after screening tests and consultation with clinic staff.

In 2018, the website Aid Access was launched by the founder of Women on Web, Rebecca Gomperts. It offers a similar service as Women on Web in the United States, but the medications are prescribed to an Indian pharmacy, then mailed to the United States.

The TelAbortion study conducted by Gynuity Health Projects, with special approval from the U.S. Food and Drug Administration (FDA), aims to increase access to medical abortion care without requiring an in-person visit to a clinic.[105][106][104] This model was expanded during the COVID-19 pandemic and, as of March 2020, exists in 13 U.S. states and has enrolled over 730 people in the study.[107][106] The person receives counseling and instruction from an abortion care provider via videoconference from a location of their choice.
The medications necessary for the abortion, mifepristone and misoprostol, are mailed directly to the person, who then has a follow-up video consultation in 7–14 days. A systematic review of telemedicine abortion has found the practice to be safe, effective, efficient, and satisfactory.[102]

In the United States, eighteen states require the clinician to be physically present during the administration of medications for abortion, which effectively bans telehealth provision of medication abortion: five states explicitly ban telemedicine for medication abortion, while thirteen states require the prescriber (usually required to be a physician) to be physically present with the patient.[108][109] In the UK, the Royal College of Obstetricians and Gynaecologists approved a no-test protocol for medication abortion, with mifepristone available through a minimal-contact pick-up or by mail.[110]

Telemedicine can facilitate specialty care delivered by primary care physicians, according to a controlled study of the treatment of hepatitis C.[111] Various specialties are contributing to telemedicine, to varying degrees. Other specialist conditions for which telemedicine has been used include perinatal mental health.[112]

In light of the COVID-19 pandemic, primary care physicians have relied on telehealth to continue to provide care in outpatient settings.[113] The transition to virtual health has been beneficial in providing patients access to care (especially care that does not require a physical exam, e.g. medication changes and minor health updates) while avoiding putting patients at risk of COVID-19. This included providing services to pediatric patients during the pandemic, where issues of last-minute cancellation and rescheduling were frequently related to a lack of technical proficiency and engagement, two factors often understudied in the literature.[114]

Telemedicine has also been beneficial in facilitating medical education for students while still allowing for adequate social distancing during the COVID-19 pandemic. Many medical schools have shifted to alternate forms of virtual curriculum and are still able to engage in meaningful telehealth encounters with patients.[115][116]

Medication-assisted treatment (MAT) is the treatment of opioid use disorder (OUD) with medications, often in combination with behavioral therapy.[117] As a response to the COVID-19 pandemic, the Drug Enforcement Administration has permitted the use of telemedicine to start or maintain people with OUD on buprenorphine (trade name Suboxone) without the need for an initial in-person examination.[118] On March 31, 2020, QuickMD became the first national TeleMAT service in the United States to provide medication-assisted treatment with Suboxone online, without the need for an in-person visit, with others announcing plans to follow soon.[119]

Telehealth is a modern form of health care delivery. Telehealth breaks away from traditional health care delivery by using modern telecommunication systems, including wireless communication methods.[120][121] Traditional health care is legislated through policy to ensure the safety of medical practitioners and patients. Consequently, since telehealth is a new form of health care delivery that is now gathering momentum in the health sector, many organizations have started to legislate the use of telehealth into policy.[121] In New Zealand, the Medical Council has a statement about telehealth on its website.
This illustrates that the Medical Council has foreseen the importance that telehealth will have for the health system and has started, along with the government, to introduce telehealth legislation to practitioners.[122]

Traditional use of telehealth services has been for specialist treatment. However, there has been a paradigm shift, and telehealth is no longer considered a specialist service.[123] This development has ensured that many access barriers are eliminated, as medical professionals and patients are able to use wireless communication technologies to deliver health care.[124] This is evident in rural communities. Rural residents typically have to travel longer distances to access healthcare than their urban counterparts due to physician shortages and healthcare facility closures in these areas.[125][126] Telehealth reduces this barrier, as health professionals are able to conduct medical consultations through the use of wireless communication technologies. However, this process is dependent on both parties having internet access and a degree of comfort with technology, which poses barriers for many low-income and rural communities.[124][127][128][129]

Telehealth allows the patient to be monitored between physician office visits, which can improve patient health. Telehealth also allows patients to access expertise which is not available in their local area. This remote patient monitoring ability enables patients to stay at home longer and helps avoid unnecessary hospital time. In the long term, this could potentially result in less burdening of the healthcare system and consumption of resources.[1][130]

During the COVID-19 pandemic, there were large increases in the use of telemedicine for primary care visits within the United States, increasing from an average of 1.4 million visits in the second quarters of 2018 and 2019 to 35 million visits in Q2 2020, according to data from IQVIA.[131] The telehealth market was expected to grow at 40% a year in 2021. Use of telemedicine by general practitioners in the UK rose from 20–30% before COVID-19 to almost 80% by the beginning of 2021. More than 70% of practitioners and patients were satisfied with this.[132] Boris Johnson was said to have "piled pressure on GPs to offer more in-person consultations", supporting a campaign largely orchestrated by the Daily Mail. The Royal College of General Practitioners said that a patient "right" to have face-to-face appointments if they wished was "undeliverable".[133]

The technological advancement of wireless communication devices is a major development in telehealth.[134] This allows patients to self-monitor their health conditions and to rely less heavily on health care professionals. Furthermore, patients are more willing to stay on their treatment plans, as they are more invested and included in the process through shared decision-making.[135][136] Technological advancement also means that health care professionals are able to use better technologies to treat patients, for example in maternal care[137] and surgery.
A 2023 study published in the Journal of the American College of Surgeons showed telemedicine as making a positive impact, with expectations exceeded for those physicians and patients who had consulted online for surgeries.[138] Technological developments in telehealth are essential to improve health care, especially the delivery of healthcare services, as resources are finite and an ageing population is living longer.[134][135][136]

Restrictive licensure laws in the United States require a practitioner to obtain a full license to deliver telemedicine care across state lines. Typically, states with restrictive licensure laws also have several exceptions (varying from state to state) that may release an out-of-state practitioner from the additional burden of obtaining such a license. A number of states require practitioners who seek compensation to frequently deliver interstate care to acquire a full license. If a practitioner serves several states, obtaining this license in each state could be an expensive and time-consuming proposition. Even if the practitioner never practices medicine face-to-face with a patient in another state, he or she still must meet a variety of other individual state requirements, including paying substantial licensure fees, passing additional oral and written examinations, and traveling for interviews. In 2008, the U.S. passed the Ryan Haight Act, which required face-to-face or valid telemedicine consultations prior to receiving a prescription.[139]

State medical licensing boards have sometimes opposed telemedicine; for example, in 2012 electronic consultations were illegal in Idaho, and an Idaho-licensed general practitioner was punished by the board for prescribing an antibiotic, triggering reviews of her licensure and board certifications across the country.[140] Subsequently, in 2015 the state legislature legalized electronic consultations.[140]

In 2015, Teladoc filed suit against the Texas Medical Board over a rule that required in-person consultations initially; the judge refused to dismiss the case, noting that antitrust laws apply to state medical boards.[141]

Telehealth allows multiple, varying disciplines to merge and deliver a potentially more uniform level of care, using technology. As telehealth spreads into mainstream healthcare, it challenges notions of traditional healthcare delivery. Some populations experience better quality, access and more personalized health care.[142][143]

Telehealth can also increase health promotion efforts. These efforts can now be more personalised to the target population, and professionals can extend their help into homes or private and safe environments in which patients or individuals can practise, ask questions and gain health information.[130][136][144] Health promotion using telehealth has become increasingly popular in underdeveloped countries where physical resources are very scarce. There has been a particular push toward mHealth applications, as many areas, even underdeveloped ones, have mobile phone and smartphone coverage.[145][146][147]

In a 2015 article reviewing research on the use of a mobile health application in the United Kingdom,[148] the authors describe how a home-based application helped patients manage and monitor their health and symptoms independently.
The mobile health application allows people to rapidly self-report their symptoms: 95% of patients were able to report their daily symptoms in less than 100 seconds, compared with the roughly five minutes (plus travel time) it takes nurses to measure vital signs in hospitals.[149] Online applications allow patients to remain at home and keep track of the progression of their chronic illnesses. The downside of using mHealth applications is that not everyone, especially in developing countries, has daily access to the internet or to electronic devices.[150]

In developed countries, health promotion efforts using telehealth have been met with some success. The Australian hands-free breastfeeding Google Glass application reported promising results in 2014. This application, made in collaboration with the Australian Breastfeeding Association and a tech startup called Small World Social, helped new mothers learn how to breastfeed.[151][152][153] Breastfeeding is beneficial to infant health and maternal health and is recommended by the World Health Organisation and health organisations all over the world.[154][155] Widespread breastfeeding can prevent 820,000 infant deaths globally, but the practice is often stopped prematurely, or intentions to breastfeed are disrupted, owing to a lack of social support, know-how or other factors.[155] This application gave mothers hands-free information on breastfeeding and instructions on how to breastfeed, and also had an option to call a lactation consultant over Google Hangouts. When the trial ended, all participants were reported to be confident in breastfeeding.[153]

A scientific review indicates that, in general, outcomes of telemedicine are or can be as good as those of in-person care, with health care use staying similar.[156]

Advantages of the nonexclusive adoption of already existing telemedicine technologies such as smartphone videotelephony may include reduced infection risks,[158] increased control of disease during epidemic conditions,[159] improved access to care,[160] reduced stress and exposure to other pathogens[161][162] during illness for better recovery, reduced time[163] and labor costs, more efficient and accessible matching of patients who have particular symptoms with clinicians who are experts in them, and reduced travel. Disadvantages may include privacy breaches (e.g. due to software backdoors and vulnerabilities or the sale of data), dependence on Internet access,[158] and, depending on various factors, increased health care use.[additional citation(s) needed]

Theoretically, the whole health system could benefit from telehealth. There are indications that telehealth consumes fewer resources and requires fewer people to operate it, with shorter training periods needed to implement initiatives.[14] Commenters have suggested that lawmakers may fear that making telehealth widely accessible, without any other measures, would lead to patients using unnecessary health care services.[160] Telemedicine could also be used for connected networks between health care professionals.[164]

Telemedicine also can eliminate the possible transmission of infectious diseases or parasites between patients and medical staff. This is particularly an issue where MRSA is a concern. Additionally, some patients who feel uncomfortable in a doctor's office may do better remotely. For example, white coat syndrome may be avoided. Patients who are home-bound and would otherwise require an ambulance to move them to a clinic are also a consideration.
However, whether or not the standard of health care quality is increasing is debatable, with some literature refuting such claims.[143][165][166] Research has reported that clinicians find the process difficult and complex to deal with.[165][167] Furthermore, there are concerns around informed consent and legality as well as legislative issues. A recent study also highlighted that the swift and large-scale implementation of telehealth across the United Kingdom NHS Allied Health Professional (AHP) services might increase disparities in health care access for vulnerable populations with limited digital literacy.[168] Although health care may become affordable with the help of technology, whether or not this care will be "good" is the issue.[143] Many studies indicate high satisfaction with telemedicine among patients.[169] Among the factors associated with good trust in telemedicine, the use of known and user-friendly video services and confidence in the data protection policies were the two variables contributing most to trust in telemedicine.[170]

Major problems with increasing adoption include technically challenged staff, resistance to change or habit,[161] and the age of patients. Focused policy could eliminate several barriers.[171]

A review lists a number of potentially good practices and pitfalls, recommending the use of "virtual handshakes" for confirming identity, taking consent for conducting a remote consultation rather than a conventional meeting, and professional standardized norms for protecting patient privacy and confidentiality.[172] It also found that the COVID-19 pandemic substantially increased the voluntary adoption of telephone or video consultation, and suggests that telemedicine technology "is a key factor in delivery of health care in the future".[172]

The growing involvement of technology in health care has led to continuous improvement in access, efficiency and quality of care, but numerous challenges lie in addressing the barriers that keep the geriatric population from benefiting from this new technology.[173] With the COVID-19 pandemic, rapid implementation of telehealth in geriatric outpatient clinics occurred. Although time efficiency improved greatly and access increased for geriatric patients who lack transportation to the clinic, complications arose during and after implementation, with many appointments requiring rescheduling due to language barriers, poor connections, hearing impairment, or the inability to perform assessments.[174] Studies also show that patients and their family members often prefer in-person visits. Although benefits were seen in being able to see a provider sooner, in high-quality audio and video, and in functionality allowing family participation during visits, patients and families still noted a preference for in-person visits because of difficulty in using the service.[175] New improvements are currently being made to ease the complications of telehealth for geriatric patients. These include integrated captions on video calls for the hearing impaired, virtual interpreters who attend calls where there are language differences, government-assisted internet services, and increased training for medical providers and patients on telehealth use.

Due to its digital nature, it is often assumed that telehealth saves the health system money. However, the evidence to support this is varied.
When conducting economic evaluations of telehealth services, the individuals evaluating them need to be aware of potential outcomes and extraclinical benefits of the telehealth service.[176] Economic viability relies on the funding model within the country being examined (public vs private), consumers' willingness to pay, and the expected remuneration of the clinicians or commercial entities providing the services (examples of research on these topics come from teledermoscopy in Australia).[177][178][179]

In a UK telehealth trial done in 2011, it was reported that the cost of health care could be dramatically reduced with the use of telehealth monitoring. The usual cost of in vitro fertilisation (IVF) per cycle would be around $15,000; with telehealth it was reduced to $800 per patient.[180] In Alaska, the Federal Health Care Access Network, which connects 3,000 healthcare providers to communities, has engaged in 160,000 telehealth consultations since 2001 and saved the state $8.5 million in travel costs for Medicaid patients alone.[181]

Digital interventions for mental health conditions seem to be cost-effective compared to no intervention or non-therapeutic responses such as monitoring. However, when compared to in-person therapy or medication, their added value is currently uncertain.[182]

Telemedicine can be beneficial to patients in isolated communities and remote regions, who can receive care from doctors or specialists far away without having to travel to visit them.[183] Recent developments in mobile collaboration technology can allow healthcare professionals in multiple locations to share information and discuss patient issues as if they were in the same place.[184] Remote patient monitoring through mobile technology can reduce the need for outpatient visits and enable remote prescription verification and drug administration oversight, potentially significantly reducing the overall cost of medical care.[185] It may also be preferable for patients with limited mobility, for example, patients with Parkinson's disease.[51] Telemedicine can also facilitate medical education by allowing workers to observe experts in their fields and share best practices more easily.[186]

Remote surgery and types of videoconferencing for sharing expertise (e.g. ad hoc assistance) have been and could be used to support doctors in Ukraine during the 2022 Russian invasion of Ukraine.[187]

While many branches of medicine have long wanted to fully embrace telehealth, there are certain risks and barriers which bar the full amalgamation of telehealth into best practice. For a start, it is dubious whether a practitioner can fully leave the "hands-on" experience behind.[143] Although it is predicted that telehealth will replace many consultations and other health interactions, it cannot yet fully replace a physical examination; this is particularly so in diagnostics, rehabilitation or mental health.[143] To minimise safety issues, researchers have suggested not offering remote consultations for some conditions (breathing problems, new psychosis, or acute chest pain, for example), when a parent is very concerned about a child, when a condition has not resolved as expected or has worsened, or to people who might struggle to understand or be understood (such as those with limited English or learning difficulties).[191][192]

The benefits posed by telehealth challenge the normative means of healthcare delivery set in both legislation and practice.
Therefore, the growing prominence of telehealth is starting to underscore the need for updated regulations, guidelines and legislation which reflect the current and future trends of healthcare practices.[2][143] Telehealth enables timely and flexible care to patients wherever they may be; although this is a benefit, it also poses threats to privacy, safety, medical licensing and reimbursement. When a clinician and patient are in different locations, it is difficult to determine which laws apply to the context.[193] Once healthcare crosses borders, different state bodies are involved in regulating and maintaining the level of care that is warranted to the patient or telehealth consumer. As it stands, telehealth is complex, with many grey areas when put into practice, especially as it crosses borders. This effectively limits the potential benefits of telehealth.[2][143]

An example of these limitations is the current American reimbursement infrastructure, where Medicare will reimburse for telehealth services only when a patient is living in an area where specialists are in shortage, or in particular rural counties. The area is defined by whether the originating site is a medical facility as opposed to a patient's home. The site that the practitioner is in, however, is unrestricted. Medicare will only reimburse live video (synchronous) services, not store-and-forward, mHealth or remote patient monitoring (if it does not involve live video). Some insurers currently will reimburse telehealth, but not all yet, so providers and patients must go to the extra effort of finding the correct insurers before continuing. Again in America, states generally tend to require that clinicians are licensed to practice in the state where the patient is located; therefore they can only provide their service there if licensed in an area in which they do not themselves live.[140]

More specific and widely reaching laws, legislation and regulations will have to evolve with the technology. They will have to be fully agreed upon: for example, will all clinicians need full licensing in every community they provide telehealth services to, or could there be a limited-use telehealth licence? Would the limited-use licence cover all potential telehealth interventions, or only some? Who would be responsible if an emergency occurred and the practitioner could not provide immediate help; would someone else have to be in the room with the patient at all consult times?
Which state, city or country would the law apply in when a breach or malpractice occurred?[143][194]

A major prompt for legal action in telehealth thus far has been issues surrounding online prescribing and whether an appropriate clinician–patient relationship can be established online to make prescribing safe, making this an area that requires particular scrutiny.[142] It may be required that the practitioner and patient involved meet in person at least once before online prescribing can occur, or that at least a live video conference occurs, rather than just impersonal questionnaires or surveys to determine need.[195]

Telehealth has some potential for facilitating self-management techniques in health care, but for patients to benefit from it, the appropriate contact with, and relationship between, doctor and patient must be established first.[196] This would start with an online consultation, providing patients with techniques and tools that help them participate in healthy behaviors, and initiating a collaborative partnership between health care professionals and patient.[197] Self-management strategies fall into a broader category called patient activation, which is defined as a "patients' willingness and ability to take independent actions to manage their health."[198] It can be achieved by increasing patients' knowledge of, and confidence in, coping with and managing their own disease through a "regular assessment of progress [...] and problem-solving support."[197] Teaching patients about their conditions and ways to cope with chronic illnesses will allow them to be knowledgeable about their disease and willing to manage it, improving their everyday life. Without a focus on the doctor–patient relationship and on the patient's understanding, telehealth cannot improve the quality of life of patients, despite the benefit of allowing them to do their medical check-ups from the comfort of their home.

The downsides of telemedicine include the cost of telecommunication and data management equipment and of technical training for the medical personnel who will employ it. Virtual medical treatment also entails potentially decreased human interaction between medical professionals and patients, an increased risk of error when medical services are delivered in the absence of a registered professional, and an increased risk that protected health information may be compromised through electronic storage and transmission.[199] There is also a concern that telemedicine may actually decrease time efficiency due to the difficulties of assessing and treating patients through virtual interactions; for example, it has been estimated that a teledermatology consultation can take up to thirty minutes, whereas fifteen minutes is typical for a traditional consultation.[200] Additionally, potentially poor quality of transmitted records, such as images or patient progress reports, and decreased access to relevant clinical information are quality assurance risks that can compromise the quality and continuity of patient care for the reporting doctor.[201] Other obstacles to the implementation of telemedicine include unclear legal regulation for some telemedical practices and difficulty claiming reimbursement from insurers or government programs in some fields.[44] Some medical organizations have delivered position statements on the correct use of telemedicine in their field.[202][203][204][205]

Another disadvantage of telemedicine is the inability to start treatment immediately.
For example, a patient with a bacterial infection might be given an antibiotic hypodermic injection in the clinic, and observed for any reaction, before that antibiotic is prescribed in pill form. Equitability is also a concern. Many families and individuals in the United States, and in other countries, do not have internet access in their homes or the proper electronic devices, such as a laptop or smartphone, to access services.[citation needed]

Informed consent is another issue. When telehealth includes the possibility of technical problems such as transmission errors, security breaches, or storage issues, these can impact the system's ability to communicate. It may be wise to obtain informed consent in person first, as well as to have backup options for when technical issues occur. In person, a patient can see who is involved in their care (namely themselves and their clinician in a consult), but online there will be others involved, such as the technology providers; therefore consent may need to involve disclosure of everyone involved in the transmission of the information and of the security that will keep their information private, and any legal malpractice cases may need to involve all of those parties, as opposed to what would usually be just the practitioner.[142][194][195]

The rate of adoption of telehealth services in any jurisdiction is frequently influenced by factors such as the adequacy and cost of existing conventional health services in meeting patient needs; the policies of governments and/or insurers with respect to coverage and payment for telehealth services; and medical licensing requirements that may inhibit or deter the provision of telehealth second opinions or primary consultations by physicians.

Projections for the growth of the telehealth market are optimistic, and much of this optimism is predicated upon the increasing demand for remote medical care. According to a recent survey, nearly three-quarters of U.S. consumers say they would use telehealth.[206] At present, several major companies, along with a bevy of startups, are working to develop a leading presence in the field.

In the UK, the Government's Care Services minister, Paul Burstow, stated that telehealth and telecare would be extended over the following five years (2012–2017) to reach three million people. In the United States, telemedicine companies are collaborating with health insurers and other telemedicine providers to expand market share and patient access to telemedicine consultations. As of 2019, 95% of employers believe their organizations will continue to provide health care benefits over the next five years.[207]

The COVID-19 pandemic drove increased usage of telehealth services in the U.S. The U.S. Centers for Disease Control and Prevention reported a 154% increase in telehealth visits during the last week of March 2020, compared to the same dates in 2019.[208]

From 1999 to 2018, the University Hospital of Zurich (USZ) offered clinical telemedicine and online medical advice on the Internet. A team of doctors answered around 2,500 anonymous inquiries annually, usually within 24 to 48 hours. The team consisted of up to six physicians who are specialists in clinical telemedicine at the USZ and have many years of experience, particularly in internal and general medicine. Over the entire period, 59,360 inquiries were sent and answered.[209] The majority of the users were female and on average 38 years old. However, over time, considerably more men and older people began to use the service.
The diversity of medical queries covered all categories of the International Statistical Classification of Diseases and Related Health Problems (ICD) and correlated with the statistical frequency of diseases in hospitals in Switzerland. Most of the inquiries concerned unclassified symptoms and signs, services related to reproduction, respiratory diseases, skin diseases, health services, diseases of the eye and nervous systems, injuries, and disorders of the female genital tract. As with the Swedish online medical advice service,[210] one-sixth of the requests related to often shameful and stigmatised diseases of the genitals and gastrointestinal tract, sexually transmitted infections, obesity and mental disorders. By providing an anonymous space where users can talk about (shameful) diseases, online telemedical services empower patients, and their health literacy is enhanced through individualized health information. The Clinical Telemedicine and Online Counselling service of the University Hospital of Zurich is currently being revised and will be offered in a new form in the future.[211]

For developing countries, telemedicine and eHealth can be the only means of healthcare provision in remote areas. For example, the difficult financial situation in many African states and the lack of trained health professionals have meant that the majority of the people in sub-Saharan Africa are badly disadvantaged in medical care, and in remote areas with low population density, direct healthcare provision is often very poor.[212] However, provision of telemedicine and eHealth from urban centers or from other countries is hampered by the lack of communications infrastructure, with no landline phone or broadband internet connection, little or no mobile connectivity, and often not even a reliable electricity supply.[213]

India has a broad rural–urban divide, and rural India is deprived of medical facilities, giving telemedicine space for growth in India. The shortage of education and of medical professionals in rural areas is the reason behind the government's drive to use technology to bridge this gap. Remote areas present a number of challenges not only for service providers but also for the families who are accessing these services. Since 2018, telemedicine has expanded in India, opening a new avenue for doctor consultations. On 25 March 2020, in the wake of the COVID-19 pandemic, the Ministry of Health and Family Welfare issued India's Telemedicine Practice Guidelines.[214] The Board of Governors, entrusted by the Health Ministry, published an amendment to the Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002, that gave much-needed statutory support for the practice of telemedicine in India. This sector is at an ever-growing stage with high scope for development.[215] In April 2020, the union health ministry launched the eSanjeevani telemedicine service, which operates at two levels: a doctor-to-doctor telemedicine platform and a doctor-to-patient platform. This service crossed five million tele-consultations within a year of its launch, indicating a conducive environment for the acceptability and growth of telemedicine in India.[216]

Sub-Saharan Africa is marked by the massive introduction of new technologies and internet access.[217] Urban areas are facing rapid change and development, and access to the internet and to health care is rapidly improving. The population in remote areas, however, still lacks access to healthcare and modern technologies.
Some people in rural regions must travel between 2 and 6 hours to reach the closest healthcare facilities in their country,[218] leaving room for telehealth to grow and reach isolated people in the near future. The Satellite African eHEalth vaLidation (SAHEL) demonstration project has shown how satellite broadband technology can be used to establish telemedicine in such areas. SAHEL was started in 2010 in Kenya and Senegal, providing self-contained, solar-powered internet terminals to rural villages for use by community nurses in collaboration with distant health centers for training, diagnosis and advice on local health issues.[219] Such methods can have a major impact both on health professionals, who can receive and provide training from remote areas, and on the local population, who can receive care without traveling long distances.

Some non-profits provide internet to rural places around the world using mobile VSAT terminals. These terminals equip remote regions to alert the world when there is a medical emergency, resulting in a rapid deployment or response from developed countries.[220] Technologies such as the ones used by MAF allow health professionals in remote clinics to have internet access, making consultations much easier, both for patients and doctors.

In 2014, the government of Luxembourg, along with satellite operators and NGOs, established SATMED, a multilayer eHealth platform to improve public health in remote areas of emerging and developing countries, using the Emergency.lu disaster relief satellite platform and the Astra 2G TV satellite.[221] SATMED was first deployed in response to a 2014 report by German Doctors of poor communications in Sierra Leone hampering the fight against Ebola, and SATMED equipment arrived in the Serabu clinic in Sierra Leone in December 2014.[222][223] In June 2015, SATMED was deployed at the Maternité Hospital in Ahozonnoude, Benin, to provide remote consultation and monitoring; it is the only effective communication link between Ahozonnoude, the capital, and a third hospital in Allada, since land routes are often inaccessible due to flooding during the rainy season.[224][225]

The development and history of telehealth or telemedicine (terms used interchangeably in the literature) is deeply rooted in the history and development not only of technology but also of society itself. Humans have long sought to relay important messages through torches, optical telegraphy, electroscopes, and wireless transmission. Early forms of telemedicine achieved with telephone and radio have been supplemented with videotelephony, advanced diagnostic methods supported by distributed client/server applications, and additionally with telemedical devices to support in-home care.[16]

In the 21st century, with the advent of the internet, portable devices and other such digital devices are taking a transformative role in healthcare and its delivery.[226]

Although traditional medicine relies on in-person care, the need and desire for remote care have existed since the Roman and pre-Hippocratic periods in antiquity. The elderly and infirm who could not visit temples for medical care sent representatives to convey information on symptoms and bring home a diagnosis as well as treatment.[226] In Africa, villagers would use smoke signals to warn neighboring villages of disease outbreaks.[227] The beginnings of telehealth thus existed through primitive forms of communication and technology.[226] The exact date of origin for telehealth is unknown, but it is known to have been used during the Bubonic Plague.
That version of telehealth was far different from how we know it today. During that time, people communicated by heliograph and bonfire, which were used to notify other groups about famine and war.[228] These methods did not yet use modern technology, but they began to spread the idea of connectivity among groups of people who geographically could not be together. As technology developed and wired communication became increasingly commonplace, the ideas surrounding telehealth began emerging.

The earliest telehealth encounter can be traced to Alexander Graham Bell in 1876, when he used his early telephone as a means of getting help from his assistant Mr. Watson after he spilt acid on his trousers. Another instance of early telehealth, specifically telemedicine, was reported in The Lancet in 1879. An anonymous writer described a case in which a doctor successfully diagnosed a child over the telephone in the middle of the night.[226] This Lancet issue also discussed the potential of remote patient care in order to avoid unnecessary house visits, which were part of routine health care during the 1800s.[226][229] Other instances of telehealth during this period came from the American Civil War, during which telegraphs were used to deliver casualty and mortality lists, provide medical care to soldiers,[229] and order further medical supplies.[230]

As the 1900s started, physicians quickly found a use for the telephone, making it a prime communication channel for contacting patients and other physicians.[228] Over the next fifty-plus years, the telephone was a staple of medical communication. Radio communication, which had played a key role during World War I, was by the 1930s being used to communicate medical information to remote areas such as Alaska and Australia.[228] By the Vietnam War, radio communication had become more advanced and was used to dispatch medical teams in helicopters. The Aerial Medical Service (AMS) likewise used telegraphs, radios, and planes to help care for people who lived in remote areas.

From the late 1800s to the early 1900s the early foundations of wireless communication were laid down.[226] Radios provided an easier and near-instantaneous form of communication. The use of radio to deliver healthcare became accepted for remote areas.[226][130] The Royal Flying Doctor Service of Australia is an example of the early adoption of radios in telehealth.[227]

In 1925 the inventor Hugo Gernsback wrote an article for the magazine Science and Invention which included a prediction of a future where patients could be treated remotely by doctors through a device he called a "teledactyl". His descriptions of the device are similar to what would later become possible with new technology.[231]

When the American National Aeronautics and Space Administration (NASA) began plans to send astronauts into space, the need for telemedicine became clear. In order to monitor their astronauts in space, telemedicine capabilities were built into the spacecraft as well as the first spacesuits.[226][130] Additionally, during this period, telehealth and telemedicine were promoted in different countries, especially the United States and Canada.[226] Carrier Sekani Family Services helped pioneer telehealth in British Columbia and Canada, according to its CEO Warner Adam.[232] After the telegraph and telephone started to successfully help physicians treat patients in remote areas, telehealth became more recognized.
Technological advancements occurred when NASA sent astronauts to space. Engineers for NASA created biomedical telemetry and telecommunications systems.[228] NASA technology monitored vitals such as blood pressure, heart rate, respiration rate, and temperature. After this technology was created, it became the base of telehealth medicine for the public.

Massachusetts General Hospital and Boston's Logan International Airport had a role in the early use of telemedicine, which more or less coincided with NASA's foray into telemedicine through the use of physiologic monitors for astronauts.[233] On October 26, 1960, a plane struck a flock of birds upon takeoff, killing many passengers and leaving a number wounded. Because of the extreme complexity of trying to get all the medical personnel out from the hospital, telehealth became the practical solution.[234] This was expanded upon in 1967, when Kenneth Bird at Massachusetts General founded one of the first telemedicine clinics. The clinic addressed the fundamental problem of delivering occupational and emergency health services to employees and travellers at the airport, located three congested miles from the hospital. Clinicians at the hospital would provide consultation services to patients who were at the airport. Consultations were achieved through microwave audio as well as video links.[226][235] The airport began seeing over a hundred patients a day at its nurse-run clinic, which cared for victims of plane crashes and other accidents, taking vital signs, electrocardiograms, and video images that were sent to Massachusetts General.[236] Over 1,000 patients are documented as having received remote treatment from doctors at MGH using the clinic's two-way audiovisual microwave circuit.[237] One notable story featured a woman who got off a flight in Boston and was experiencing chest pain. Staff performed a workup at the airport and took her to the telehealth suite, where Raymond Murphy appeared on the television and had a conversation with her. While this was happening, another doctor took notes, and the nurses took vitals and performed any test that Murphy ordered.[234] At this point, telehealth was becoming more mainstream and more technologically advanced, which made it a viable option for patients.

In 1964, the Nebraska Psychiatric Institute began using television links to form two-way communication with the Norfolk State Hospital, 112 miles away, for education and consultation between clinicians in the two locations.[235]

In 1972 the Department of Health, Education and Welfare in the United States approved funding for seven telemedicine projects across different states.
This funding was renewed, and two further projects were funded the following year.[226][235]

In March 1972, the San Bernardino County Medical Society officially implemented its Tel-Med program, a system of prerecorded health-related messages, with a log of 50 tapes.[238][239] The nonprofit initiative began in 1971 as a local medical project to ease the doctor shortage in the expanding San Bernardino Valley and improve the public's access to sound medical information.[239][238] It covered subjects ranging from cannabis to vaginitis.[238] In January 1973, in response to the developing "London flu" epidemic hitting California and the country, a tape providing information on the disease was on air within a week after news broke of the flu spreading in the state.[240] That spring, programs were implemented in San Diego and Indianapolis, Indiana, signaling a national acceptance of the concept.[239] By 1979, the system offered messages on over 300 different subjects, 200 of which were available in Spanish as well as English, and served over 65 million people in 180 cities around the country.[238]

Telehealth projects underway before and during the 1980s would take off but fail to enter mainstream healthcare.[227][130] As a result, this period of telehealth history is called the "maturation" stage, and it made way for sustainable growth.[226] Although state funding in North America was beginning to run low, different hospitals began to launch their own telehealth initiatives.[226] NASA provided an ATS-3 satellite to enable medical care communications for American Red Cross and Pan American Health Organization response teams following the 1985 Mexico City earthquake. The agency then launched its SatelLife/HealthNet programme to increase health service connectivity in developing countries. In 1997, NASA sponsored Yale's Medical Informatics and Technology Applications Consortium project.[130][241]

Florida first experimented with "primitive" telehealth in its prisons during the late 1980s.[242] Working with doctors Oscar W. Boultinghouse and Michael J. Davis from the early 1990s to 2007, Glenn G. Hammack led the University of Texas Medical Branch (UTMB) development of a pioneering telehealth program in Texas state prisons. The three UTMB alumni would, in 2007, co-found telehealth provider NuPhysician.[243]

The first interactive telemedicine system operating over standard telephone lines, designed to remotely diagnose and treat patients requiring cardiac resuscitation (defibrillation), was developed and launched by an American company, MedPhone Corporation, in 1989. A year later, under the leadership of its president and CEO S. Eric Wachtel, MedPhone introduced a mobile cellular version, the MDPhone. Twelve hospitals in the U.S. served as receiving and treatment centers.[244]

As the expansion of telehealth continued, in 1990 Maritime Health Services (MHS) played a large part in the initiation of occupational health services at sea. It placed a medical officer aboard a Pacific trawler, allowing for round-the-clock communication with a physician. The system that allows for this is called the Medical Consultation Network (MedNet). MedNet is a video-chat system with live audio and video, so the physician on the other end of the call can see and hear what is happening. MedNet can be used from anywhere, not just aboard ships.[228] Being able to provide on-site visual information gives remote patients expert emergency help and medical attention, saving money as well as lives. This has created a demand for at-home monitoring.
At-home care has also become a large part of telehealth. Doctors or nurses will now make pre-op and post-op phone calls to check in. There are also companies, such as Lifeline, which give the elderly a button to press in case of an emergency; the button automatically calls for emergency help. If someone has surgery and is then sent home, telehealth allows physicians to see how the patient is progressing without the patient having to stay in the hospital. TeleDiagnostic Systems of San Francisco has created a device that monitors sleep patterns, so people with sleep disorders do not have to stay the night at the hospital.[228] Another at-home device was the Wanderer, which was attached to Alzheimer's patients or people who had dementia, so that when they wandered off it notified the staff, allowing them to go after them. All these devices allowed healthcare beyond hospitals to improve, meaning more people could be helped efficiently.

The advent of the high-speed Internet, and the increasing adoption of ICT in traditional methods of care, spurred advances in telehealth delivery.[14] Increased access to portable devices, like laptops and mobile phones, made telehealth more practicable; the industry then expanded into health promotion, prevention and education.[1][3][130]

In 2002, G. Byron Brooks, a former NASA surgeon and engineer who had also helped manage the UTMB telemedicine program, co-founded Teladoc in Dallas, Texas, which then launched in 2005 as the first national telehealth provider.[245]

In the 2010s, the integration of smart-home telehealth technologies, such as health and wellness devices, software, and integrated IoT, accelerated the industry. Healthcare organizations are increasingly adopting self-tracking and cloud-based technologies, and innovative data-analytic approaches, to accelerate telehealth delivery.[citation needed][246]

In 2015, Mercy Health system opened Mercy Virtual in Chesterfield, Missouri, the world's first medical facility dedicated solely to telemedicine.[247]

Telehealth expanded significantly during the COVID-19 pandemic, becoming a vital means of medical communication. It allows doctors to return to humanizing the patient,[248] obliging them to listen to what people have to say and to make a diagnosis from there. Studies have demonstrated high trust in telehealth expressed by patients during the COVID-19 pandemic.[249] Among patients with inflammatory bowel disease, 4 out of 5 considered telemedicine a valuable tool for their management, and 85% wanted to have a telemedicine service at their center; however, only 1 out of 4 believed that it could guarantee the same level of care as an in-person visit.[250] Some researchers claim this creates an environment that encourages greater vulnerability among patients in self-disclosure in the practice of narrative medicine.[248] Telehealth allows for video calls and chats from across the world, with clinicians checking in on patients and patients speaking to physicians. Universities are now ensuring that medical students graduate with proficient telehealth communication skills.[251] Experts suggest that telehealth has become a vital part of medical care, with more virtual options becoming available. The pandemic era also identified the "potential to significantly improve global health equity" through telehealth and other virtual care technologies.[252]

A retrospective study in 2023 examining 1,589,014 adult primary care patients within an integrated healthcare system in the U.S.
found that patients who initially had a telehealth visit were less likely to receive prescriptions, lab tests, or imaging compared to those who had an in-office visit. However, these same telehealth patients had higher rates of in-person follow-up visits. The study revealed that, out of the 2,357,598 primary care visits, 49.2% were in-office, 31.3% were telephone visits, and 19.5% were video visits. Office visits led to higher rates of prescriptions (46.8%), lab tests (41.4%), and imaging (20.5%) compared to video (38.4%, 27.4%, and 11.9%, respectively) and telephone visits (34.6%, 22.8%, and 8.7%, respectively). In contrast, patients who had telephone or video visits were more likely to have in-person follow-up visits, with 7.6% of telephone, 6.2% of video, and only 1.3% of office visit patients returning for primary care follow-up. Furthermore, rates of emergency department visits and hospitalizations were higher for those who had telemedicine visits, though these differences were minimal. The study's limitations include the inability to generalize findings to healthcare settings without telemedicine services or to patients without insurance or a primary care provider. The reasons for increased in-person healthcare utilization were also not captured, and long-term follow-up was not conducted.[253]
https://en.wikipedia.org/wiki/Telemedicine
Coding best practices or programming best practices are a set of informal, sometimes personal, rules (best practices) that many software developers in computer programming follow to improve software quality.[1] Many computer programs need to remain robust and reliable for long periods of time,[2] so any rules need to facilitate both initial development and subsequent maintenance of source code by people other than the original authors.

In the ninety–ninety rule, Tom Cargill explains why programming projects often run late: "The first 90% of the code takes the first 90% of the development time. The last 10% takes another 90% of the time."[3] Any guidance which can redress this lack of foresight is worth considering. The size of a project or program has a significant effect on error rates, programmer productivity, and the amount of management needed.[4]

As listed below, there are many attributes associated with good software. Some of these can be mutually contradictory (e.g. being very fast versus performing extensive error checking), and different customers and participants may have different priorities. Weinberg provides an example of how different goals can have a dramatic effect on both the effort required and efficiency.[5] Furthermore, he notes that programmers will generally aim to achieve any explicit goals which may be set, probably at the expense of any other quality attributes. Sommerville has identified four generalized attributes which are concerned not with what a program does, but with how well the program does it: maintainability, dependability, efficiency and usability.[6] Weinberg has identified four targets which a good program should meet:[7] Hoare has identified seventeen objectives related to software quality, including:[8]

Before coding starts, it is important to ensure that all necessary prerequisites have been completed (or have at least progressed far enough to provide a solid foundation for coding). If the various prerequisites are not satisfied, then the software is likely to be unsatisfactory, even if it is completed. From Meek & Heath: "What happens before one gets to the coding stage is often of crucial importance to the success of the project."[9] The prerequisites outlined below cover such matters as: For small simple projects it may be feasible to combine architecture with design and adopt a very simple life cycle.

A software development methodology is a framework that is used to structure, plan, and control the life cycle of a software product. Common methodologies include waterfall, prototyping, iterative and incremental development, spiral development, agile software development, rapid application development, and extreme programming. The waterfall model is a sequential development approach; in particular, it assumes that the requirements can be completely defined at the start of a project. However, McConnell cites three studies indicating that, on average, requirements change by around 25% during a project.[10] The other methodologies mentioned above all attempt to reduce the impact of such requirement changes, often by some form of step-wise, incremental, or iterative approach. Different methodologies may be appropriate for different development environments.
Since its introduction in 2001, agile software development has grown in popularity, fueled by software developers seeking a more iterative, collaborative approach to software development.[11]

McConnell states: "The first prerequisite you need to fulfill before beginning construction is a clear statement of the problem the system is supposed to solve."[12] Meek and Heath emphasise that a clear, complete, precise, and unambiguous written specification is the target to aim for.[13] Note that it may not be possible to achieve this target, and the target is likely to change anyway (as mentioned in the previous section). Sommerville distinguishes between less detailed user requirements and more detailed system requirements.[14] He also distinguishes between functional requirements (e.g. update a record) and non-functional requirements (e.g. response time must be less than 1 second).

Hoare points out: "there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies; the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."[15] (Emphasis as in the original.)

Software architecture is concerned with deciding what has to be done and which program component is going to do it (how something is done is left to the detailed design phase, below). This is particularly important when a software system contains more than one program, since it effectively defines the interface between these various programs. It should include some consideration of any user interfaces as well, without going into excessive detail. Any non-functional system requirements (response time, reliability, maintainability, etc.) need to be considered at this stage.[16] The software architecture is also of interest to various stakeholders (sponsors, end-users, etc.), since it gives them a chance to check that their requirements can be met.

The primary purpose of design is to fill in the details which have been glossed over in the architectural design. The intention is that the design should be detailed enough to provide a good guide for actual coding, including details of any particular algorithms to be used. For example, at the architectural level it may have been noted that some data has to be sorted, while at the design level it is necessary to decide which sorting algorithm is to be used (a small sketch of such a design-level decision follows at the end of this section). As a further example, if an object-oriented approach is being used, then the details of the objects must be determined (attributes and methods).

Mayer states: "No programming language is perfect. There is not even a single best language; there are only languages well suited or perhaps poorly suited for particular purposes. Understanding the problem and associated programming requirements is necessary for choosing the language best suited for the solution."[17] From Meek & Heath: "The essence of the art of choosing a language is to start with the problem, decide what its requirements are, and their relative importance since it will probably be impossible to satisfy them all equally well. The available languages should then be measured against the list of requirements, and the most suitable (or least unsatisfactory) chosen."[18] It is possible that different programming languages may be appropriate for different aspects of the problem. If the languages or their compilers permit, it may be feasible to mix routines written in different languages within the same program.
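To make the architecture-versus-design distinction above concrete, here is a minimal Java sketch; the class and variable names are hypothetical, invented only for this illustration. The architecture merely states that records must be presented in sorted order; the design commits to a specific mechanism, in this case the standard library's stable sort with an explicit ordering:

```java
import java.util.ArrayList;
import java.util.List;

public class SortDesignExample {
    public static void main(String[] args) {
        // Architectural requirement: records must be presented in sorted order.
        List<String> records = new ArrayList<>(List.of("Smith", "adams", "Brown"));

        // Design-level decision: use the standard library's stable sort with an
        // explicit, case-insensitive ordering rather than a hand-rolled algorithm.
        records.sort(String.CASE_INSENSITIVE_ORDER);

        System.out.println(records); // prints [adams, Brown, Smith]
    }
}
```

The design could equally have committed to a different comparator or a custom algorithm; the point is that the commitment is made at the design stage, not in the architecture.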
Even if there is no choice as to which programming language is to be used, McConnell provides some advice: "Every programming language has strengths and weaknesses. Be aware of the specific strengths and weaknesses of the language you're using."[19]

Coding conventions are also really a prerequisite to coding, as McConnell points out: "Establish programming conventions before you begin programming. It's nearly impossible to change code to match them later."[19] As listed near the end of coding conventions, different programming languages have different conventions, so it may be counterproductive to apply the same conventions across different languages. There is no single coding convention for any programming language; every organization has a custom coding standard for each type of software project. It is therefore imperative that the programmer chooses or draws up a particular set of coding guidelines before the software project commences. Some coding conventions are generic and may not apply to every software project written in a particular programming language. The use of coding conventions is particularly important when a project involves more than one programmer (there have been projects with thousands of programmers): it is much easier for a programmer to read code written by someone else if all code follows the same conventions. For some examples of bad coding conventions, Roedy Green provides a lengthy (tongue-in-cheek) article on how to produce unmaintainable code.[20]

Due to time restrictions, or enthusiastic programmers who want immediate results for their code, commenting of code often takes a back seat. Programmers working as a team have found it better to write comments as they go, since coding usually follows cycles, and more than one person may work on a particular module. Good commenting can decrease the cost of knowledge transfer between developers working on the same module. In the early days of computing, one commenting practice was to begin each module with a brief comment block recording the name and purpose of the module, a description of its logic, the original author and creation date, and the history of subsequent modifications with their authors. The description of the module should be as brief as possible, but without sacrificing clarity and comprehensiveness. However, the last two items, the modification history and its authorship, have largely been obsoleted by the advent of revision control systems: modifications and their authorship can be reliably tracked by using such tools rather than by using comments. Also, if complicated logic is being used, it is good practice to leave a comment block near that part so that another programmer can understand what exactly is happening. Unit testing can be another way to show how code is intended to be used.

Use of proper naming conventions is considered good practice. Sometimes programmers tend to use names like X1 and Y1 as variables and forget to replace them with meaningful ones, causing confusion. It is usually considered good practice to use descriptive names. Example: a variable holding a truck's weight, taken in as a parameter, can be named TrkWeight, TruckWeightKilograms, or Truck_Weight_Kilograms, with TruckWeightKilograms (see Pascal case naming of variables) often being preferable, since it is instantly recognizable; naming conventions are not always consistent between projects and/or companies.

The code that a programmer writes should be simple. Complicated logic for achieving a simple thing should be kept to a minimum, since the code might be modified by another programmer in the future. The logic one programmer implemented may not make perfect sense to another, so always keep the code as simple as possible.[21] A short illustration of both points follows.
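As a minimal sketch of the naming and simplicity advice above (all function and variable names here are invented for illustration, not taken from any cited source):

```javascript
// Cryptic: what do x1 and y1 measure, and in which units?
function f(x1, y1) {
  return x1 * 1000 + y1;
}

// Descriptive names and straightforward logic make the intent obvious
// and the unit conversion explicit.
function totalWeightKilograms(truckWeightTonnes, cargoWeightKilograms) {
  const truckWeightKilograms = truckWeightTonnes * 1000;
  return truckWeightKilograms + cargoWeightKilograms;
}

console.log(totalWeightKilograms(3, 250)); // 3250
```

Both functions compute the same result; only the second one can be read and safely modified by a programmer other than the original author.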
Program code should not contain "hard-coded" (literal) values referring to environmental parameters, such as absolute file paths, file names, user names, host names, IP addresses, URLs, or UDP/TCP ports. Otherwise, the application will not run on a host configured differently than anticipated. A careful programmer can parametrize such variables and configure them for the hosting environment outside of the application proper (for example, in property files, on an application server, or even in a database); compare the mantra of a "single point of definition" (SPOD).[22] A sketch of such externalized configuration appears at the end of this section. As an extension, resources such as XML files should also contain variables rather than literal values; otherwise, the application will not be portable to another environment without editing the XML files. For example, with J2EE applications running in an application server, such environmental parameters can be defined in the scope of the JVM, and the application should get the values from there.

Design code with scalability as a goal, because software projects frequently grow: new features are continually added, so the ability to extend a code base becomes invaluable when writing software.

Re-use is another important design goal in software development. Re-use cuts development costs and also reduces development time if the reused components or modules are already tested. Software projects very often start from an existing baseline that contains the prior version of the project and, depending on the project, reuse many existing software modules and components, which reduces development and testing time and thereby increases the probability of delivering the project on schedule.

Building the code brings together all of the above: a best practice involves daily builds and testing, or better still continuous integration, or even continuous delivery.

Testing is an integral part of software development that needs to be planned. It is also important that testing is done proactively, meaning that test cases are planned before coding starts and are developed while the application is being designed and coded. Programmers tend to write the complete code and then begin debugging and checking for errors. Though this approach can save time in smaller projects, bigger and more complex ones have too many variables and functions that need attention. Therefore, it is better to debug each module as soon as it is done, rather than the entire program at the end; this saves time in the long run, so that one does not end up wasting a lot of it figuring out what is wrong. Unit tests for individual modules, and/or functional tests for web services and web applications, can help with this.

Deployment is the final stage of releasing an application to users, and several best practices apply at this stage.[23][24]
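As a minimal sketch of the parametrization idea (assuming a Node.js runtime; the file name `config.json` and its keys are invented for illustration):

```javascript
// Single point of definition for environment parameters: the application
// never mentions a literal host name or port anywhere in its code.
const fs = require("fs");

// config.json might contain: { "host": "db.example.com", "port": 5432 }
const config = JSON.parse(fs.readFileSync("config.json", "utf8"));

// Moving the application to another environment only requires editing
// config.json, not the source code.
function connectionString() {
  return `postgres://${config.host}:${config.port}/app`;
}

console.log(connectionString());
```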
https://en.wikipedia.org/wiki/Best_coding_practices
In probability theory and statistics, the Weibull distribution /ˈwaɪbʊl/ is a continuous probability distribution. It models a broad range of random variables, largely in the nature of a time to failure or time between events; examples are maximum one-day rainfalls and the time a user spends on a web page. The distribution is named after Swedish mathematician Waloddi Weibull, who described it in detail in 1939,[1][2] although it was first identified by René Maurice Fréchet and first applied by Rosin & Rammler (1933) to describe a particle size distribution.

The probability density function of a Weibull random variable is[3][4]

$$f(x;k,\lambda) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^{k}}, \qquad x \ge 0,$$

and $f(x;k,\lambda)=0$ for $x<0$, where $k>0$ is the shape parameter and $\lambda>0$ is the scale parameter of the distribution. Its complementary cumulative distribution function is a stretched exponential function. The Weibull distribution is related to a number of other probability distributions; in particular, it interpolates between the exponential distribution ($k=1$) and the Rayleigh distribution ($k=2$ and $\lambda=\sqrt{2}\,\sigma$).[5]

If the quantity x is a "time-to-failure", the Weibull distribution gives a distribution for which the failure rate is proportional to a power of time. The shape parameter, k, is that power plus one, and so this parameter can be interpreted directly:[6] k < 1 corresponds to a failure rate that decreases over time, k = 1 to a constant failure rate, and k > 1 to a failure rate that increases with time. In the field of materials science, the shape parameter k of a distribution of strengths is known as the Weibull modulus. In the context of diffusion of innovations, the Weibull distribution is a "pure" imitation/rejection model.

Applications in medical statistics and econometrics often adopt a different parameterization.[8][9] The shape parameter k is the same as above, while the scale parameter is $b=\lambda^{-k}$. In this case, for $x\ge 0$, the probability density function is $f(x;k,b)=bkx^{k-1}e^{-bx^{k}}$, the cumulative distribution function is $F(x;k,b)=1-e^{-bx^{k}}$, the quantile function is $Q(p;k,b)=\left(-\ln(1-p)/b\right)^{1/k}$, the hazard function is $h(x;k,b)=bkx^{k-1}$, and the mean is $b^{-1/k}\,\Gamma(1+1/k)$.

A second alternative parameterization can also be found.[10][11] The shape parameter k is the same as in the standard case, while the scale parameter λ is replaced with a rate parameter β = 1/λ. Then, for $x\ge 0$, the probability density function is $f(x;k,\beta)=k\beta(\beta x)^{k-1}e^{-(\beta x)^{k}}$, the cumulative distribution function is $F(x;k,\beta)=1-e^{-(\beta x)^{k}}$, the quantile function is $Q(p;k,\beta)=\beta^{-1}\left(-\ln(1-p)\right)^{1/k}$, and the hazard function is $h(x;k,\beta)=k\beta(\beta x)^{k-1}$. In all three parameterizations, the hazard is decreasing for k < 1, increasing for k > 1, and constant for k = 1, in which case the Weibull distribution reduces to an exponential distribution.

The form of the density function of the Weibull distribution changes drastically with the value of k. For 0 < k < 1, the density function tends to ∞ as x approaches zero from above and is strictly decreasing. For k = 1, the density function tends to 1/λ as x approaches zero from above and is strictly decreasing. For k > 1, the density function tends to zero as x approaches zero from above, increases until its mode, and decreases after it. The density function has infinite negative slope at x = 0 if 0 < k < 1, infinite positive slope at x = 0 if 1 < k < 2, and null slope at x = 0 if k > 2. For k = 1 the density has a finite negative slope at x = 0; for k = 2 it has a finite positive slope at x = 0. As k goes to infinity, the Weibull distribution converges to a Dirac delta distribution centered at x = λ. Moreover, the skewness and coefficient of variation depend only on the shape parameter. A generalization of the Weibull distribution is the hyperbolastic distribution of type III.

The cumulative distribution function for the Weibull distribution is

$$F(x;k,\lambda) = 1 - e^{-(x/\lambda)^{k}}$$

for $x\ge 0$, and $F(x;k,\lambda)=0$ for $x<0$. If $x=\lambda$ then $F(x;k,\lambda)=1-e^{-1}\approx 0.632$ for all values of k.
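As a minimal sketch of the standard (k, λ) parameterization above (function names are illustrative; sampling uses the quantile function by inverse transform):

```javascript
function weibullPdf(x, k, lambda) {
  if (x < 0) return 0;
  const z = x / lambda;
  return (k / lambda) * Math.pow(z, k - 1) * Math.exp(-Math.pow(z, k));
}

function weibullCdf(x, k, lambda) {
  return x < 0 ? 0 : 1 - Math.exp(-Math.pow(x / lambda, k));
}

// Hazard (failure) rate: decreasing for k < 1, constant for k = 1,
// increasing for k > 1.
function weibullHazard(x, k, lambda) {
  return (k / lambda) * Math.pow(x / lambda, k - 1);
}

// Inverse-transform sampling via Q(p) = lambda * (-ln(1 - p))^(1/k).
function sampleWeibull(k, lambda) {
  return lambda * Math.pow(-Math.log(1 - Math.random()), 1 / k);
}

// F(lambda) = 1 - e^(-1) ~ 0.632 regardless of k.
console.log(weibullCdf(2, 0.5, 2), weibullCdf(2, 5, 2)); // both ~0.632
```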
Vice versa, at $F(x;k,\lambda)=0.632$ the value of $x\approx\lambda$. The quantile (inverse cumulative distribution) function for the Weibull distribution is

$$Q(p;k,\lambda) = \lambda\left(-\ln(1-p)\right)^{1/k}$$

for $0\le p<1$. The failure rate h (or hazard function) is given by

$$h(x;k,\lambda) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}.$$

The mean time between failures (MTBF) is

$$\mathrm{MTBF}(k,\lambda) = \lambda\,\Gamma(1+1/k).$$

The moment generating function of the logarithm of a Weibull distributed random variable is given by[12]

$$\operatorname{E}\!\left[e^{t\log X}\right] = \lambda^{t}\,\Gamma\!\left(1+\tfrac{t}{k}\right),$$

where Γ is the gamma function. Similarly, the characteristic function of log X is given by $\operatorname{E}\!\left[e^{it\log X}\right]=\lambda^{it}\,\Gamma(1+it/k)$. In particular, the nth raw moment of X is given by $m_n=\lambda^{n}\,\Gamma(1+n/k)$. The mean and variance of a Weibull random variable can be expressed as

$$\mu = \lambda\,\Gamma(1+1/k) \qquad\text{and}\qquad \sigma^{2} = \lambda^{2}\left[\Gamma(1+2/k) - \left(\Gamma(1+1/k)\right)^{2}\right].$$

The skewness is given by

$$\gamma_1 = \frac{\Gamma_3\lambda^{3} - 3\mu\sigma^{2} - \mu^{3}}{\sigma^{3}},$$

where $\Gamma_i=\Gamma(1+i/k)$, the mean is denoted by μ, and the standard deviation by σ. The excess kurtosis is given by

$$\gamma_2 = \frac{-6\Gamma_1^{4} + 12\Gamma_1^{2}\Gamma_2 - 3\Gamma_2^{2} - 4\Gamma_1\Gamma_3 + \Gamma_4}{\left(\Gamma_2-\Gamma_1^{2}\right)^{2}},$$

where again $\Gamma_i=\Gamma(1+i/k)$.

A variety of expressions are available for the moment generating function of X itself. As a power series, since the raw moments are already known, one has

$$\operatorname{E}\!\left[e^{tX}\right] = \sum_{n=0}^{\infty}\frac{t^{n}\lambda^{n}}{n!}\,\Gamma(1+n/k).$$

Alternatively, one can attempt to deal directly with the integral

$$\operatorname{E}\!\left[e^{tX}\right] = \int_{0}^{\infty} e^{tx}\,\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^{k}}\,dx.$$

If the parameter k is assumed to be a rational number, expressed as k = p/q where p and q are integers, then this integral can be evaluated analytically;[a] with t replaced by −t, the result can be written in terms of the Meijer G-function. The characteristic function has also been obtained by Muraleedharan et al. (2007). The characteristic function and moment generating function of the 3-parameter Weibull distribution have also been derived by Muraleedharan & Soares (2014) by a direct approach.

Let $X_1, X_2,\ldots,X_n$ be independent and identically distributed Weibull random variables with scale parameter λ and shape parameter k. If the minimum of these n random variables is $Z=\min(X_1,X_2,\ldots,X_n)$, then the cumulative probability distribution of Z is given by

$$F_Z(z) = 1 - e^{-n(z/\lambda)^{k}}.$$

That is, Z will also be Weibull distributed, with scale parameter $n^{-1/k}\lambda$ and shape parameter k. Fix some $\alpha>0$, let $(\pi_1,\ldots,\pi_n)$ be nonnegative and not all zero, and let $g_1,\ldots,g_n$ be independent samples of $\text{Weibull}(1,\alpha^{-1})$; a distributional identity for the weighted minimum of the $g_i$ then holds.[13]

The information entropy is given by[14]

$$H(X) = \gamma\left(1-\frac{1}{k}\right) + \ln\!\left(\frac{\lambda}{k}\right) + 1,$$

where γ is the Euler–Mascheroni constant. The Weibull distribution is the maximum entropy distribution for a non-negative real random variate with a fixed expected value of $x^{k}$ equal to $\lambda^{k}$ and a fixed expected value of $\ln(x^{k})$ equal to $\ln(\lambda^{k})-\gamma$. An expression for the Kullback–Leibler divergence between two Weibull distributions is given in [15].

The fit of a Weibull distribution to data can be visually assessed using a Weibull plot.[16] The Weibull plot is a plot of the empirical cumulative distribution function $\widehat F(x)$ of data on special axes in a type of Q–Q plot: the axes are $\ln(-\ln(1-\widehat F(x)))$ versus $\ln(x)$. The reason for this change of variables is that the cumulative distribution function can be linearized,

$$\ln\!\left(-\ln\left(1-F(x)\right)\right) = k\ln x - k\ln\lambda,$$

which is the standard form of a straight line. Therefore, if the data came from a Weibull distribution, a straight line is expected on a Weibull plot. There are various approaches to obtaining the empirical distribution function from data.
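A minimal sketch of a Weibull-plot fit by least squares, using the linearization above (the median-rank estimate of the empirical CDF used here is one common choice, discussed next; the function name is illustrative):

```javascript
// Fit (k, lambda) from data: regress y = ln(-ln(1 - F_hat)) on x = ln(data).
// The fitted line has slope k and intercept -k * ln(lambda).
function fitWeibull(data) {
  const xs = [...data].sort((a, b) => a - b);
  const n = xs.length;
  const pts = xs.map((v, idx) => {
    const F = (idx + 1 - 0.3) / (n + 0.4); // median-rank estimate
    return [Math.log(v), Math.log(-Math.log(1 - F))];
  });
  const mx = pts.reduce((s, p) => s + p[0], 0) / n;
  const my = pts.reduce((s, p) => s + p[1], 0) / n;
  let sxy = 0, sxx = 0;
  for (const [x, y] of pts) {
    sxy += (x - mx) * (y - my);
    sxx += (x - mx) * (x - mx);
  }
  const k = sxy / sxx;                  // slope of the Weibull plot
  const lambda = Math.exp(mx - my / k); // from intercept = -k ln(lambda)
  return { k, lambda };
}
```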
One method is to obtain the vertical coordinate for each point using the median-rank estimate

$$\widehat F = \frac{i-0.3}{n+0.4},$$

where i is the rank of the data point and n is the number of data points;[17][18] another common estimator[19] is $\widehat F = i/(n+1)$. Linear regression can then be used to numerically assess goodness of fit and estimate the parameters of the Weibull distribution: the gradient informs one directly about the shape parameter k, and the scale parameter λ can also be inferred from the intercept.

The coefficient of variation of the Weibull distribution depends only on the shape parameter:[20]

$$CV^{2} = \frac{\sigma^{2}}{\mu^{2}} = \frac{\Gamma(1+2/k)}{\left(\Gamma(1+1/k)\right)^{2}} - 1.$$

Equating the sample quantity $s^{2}/\bar x^{2}$ to $\sigma^{2}/\mu^{2}$, the moment estimate of the shape parameter k can be read off either from a look-up table or from a graph of $CV^{2}$ versus k. A more accurate estimate of $\hat k$ can be found by using a root-finding algorithm to solve

$$\frac{\Gamma(1+2/k)}{\left(\Gamma(1+1/k)\right)^{2}} - 1 = \frac{s^{2}}{\bar x^{2}}.$$

The moment estimate of the scale parameter can then be found using the first moment equation as

$$\hat\lambda = \frac{\bar x}{\Gamma(1+1/\hat k)}.$$

The maximum likelihood estimator for the λ parameter given k is[20]

$$\hat\lambda^{k} = \frac{1}{n}\sum_{i=1}^{n} x_i^{k}.$$

The maximum likelihood estimator for k is the solution for k of the following equation:[21]

$$\hat k^{-1} = \frac{\sum_{i=1}^{n} x_i^{k}\ln x_i}{\sum_{i=1}^{n} x_i^{k}} - \frac{1}{n}\sum_{i=1}^{n}\ln x_i.$$

This equation defines $\hat k$ only implicitly; one must generally solve for k by numerical means. When $x_1 > x_2 > \cdots > x_N$ are the N largest observed samples from a dataset of more than N samples, analogous (censored) maximum likelihood estimators for λ given k, and for k itself, are available;[21] these again define $\hat k$ only implicitly, so one must generally solve for k by numerical means.

The Weibull distribution is used in a wide range of applications.[citation needed] Several related forms and representations are worth recording:

- A translated (three-parameter) Weibull distribution adds a location parameter θ: $f(x;k,\lambda,\theta)={k \over \lambda}\left({x-\theta \over \lambda}\right)^{k-1}e^{-\left({x-\theta \over \lambda}\right)^{k}}$ for $x\ge\theta$.
- If W is Weibull distributed with scale λ and shape k, then $X=\left(W/\lambda\right)^{k}$ has a standard exponential distribution.
- The Fréchet distribution corresponds formally to a negated shape parameter: $f_{\rm Frechet}(x;k,\lambda)={\frac {k}{\lambda }}\left({\frac {x}{\lambda }}\right)^{-1-k}e^{-(x/\lambda )^{-k}}=f_{\rm Weibull}(x;-k,\lambda)$.
- The Rosin–Rammler form used for particle-size analysis is parameterized by the 80% passing size $P_{80}$ and exponent m: $F(x;P_{80},m)=1-e^{\ln(0.2)\left(x/P_{80}\right)^{m}}$ for $x\ge 0$, and 0 for $x<0$.
- For $0<k\le 1$ (respectively $0<k\le 2$), the Weibull CDF can be written as a scale mixture of exponential (respectively Rayleigh) distributions:
$$F(x;k,\lambda)={\begin{cases}\displaystyle\int _{0}^{\infty }{\frac {1}{\nu }}\,F(x;1,\lambda \nu )\left(\Gamma \left({\tfrac {1}{k}}+1\right){\mathfrak {N}}_{k}(\nu )\right)d\nu ,&1\geq k>0;\\[1ex]\displaystyle\int _{0}^{\infty }{\frac {1}{s}}\,F(x;2,{\sqrt {2}}\lambda s)\left({\sqrt {\tfrac {2}{\pi }}}\,\Gamma \left({\tfrac {1}{k}}+1\right)V_{k}(s)\right)ds,&2\geq k>0.\end{cases}}$$
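Returning to the maximum-likelihood equation above: since it defines $\hat k$ only implicitly, a root-finding step is needed. A minimal sketch, using bisection (function name and bracket limits are illustrative, and the data are assumed not all equal):

```javascript
// Solve 1/k = sum(x^k ln x)/sum(x^k) - mean(ln x) for k by bisection.
function mleShape(data, lo = 0.01, hi = 50) {
  const n = data.length;
  const meanLogX = data.reduce((s, x) => s + Math.log(x), 0) / n;
  const g = (k) => {
    let num = 0, den = 0;
    for (const x of data) {
      const xk = Math.pow(x, k);
      num += xk * Math.log(x);
      den += xk;
    }
    return num / den - meanLogX - 1 / k; // increasing in k, zero at the MLE
  };
  for (let i = 0; i < 100; i++) {
    const mid = (lo + hi) / 2;
    if (g(mid) > 0) hi = mid; else lo = mid;
  }
  const k = (lo + hi) / 2;
  // Plug k back into lambda^k = mean(x^k) to recover the scale.
  const lambda = Math.pow(
    data.reduce((s, x) => s + Math.pow(x, k), 0) / n, 1 / k);
  return { k, lambda };
}
```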
https://en.wikipedia.org/wiki/Weibull_distribution
Web3 (also known as Web 3.0)[1][2][3] is an idea for a new iteration of the World Wide Web which incorporates concepts such as decentralization, blockchain technologies, and token-based economics.[4] This is distinct from Tim Berners-Lee's concept of the Semantic Web. Some technologists and journalists have contrasted it with Web 2.0, in which they say user-generated content is controlled by a small group of companies referred to as Big Tech.[5] The term "web3" was coined in 2014 by Ethereum co-founder Gavin Wood, and the idea gained interest in 2021 from cryptocurrency enthusiasts, large technology companies, and venture capital firms.[5][6] The concepts of web3 were first represented in 2013.[7][8]

Critics have expressed concerns over the centralization of wealth to a small group of investors and individuals,[9] and over a loss of privacy due to more expansive data collection.[10] Billionaires like Elon Musk and Jack Dorsey have argued that web3 serves only as a buzzword or marketing term.[11][12][13]

Web 1.0 and Web 2.0 refer to eras in the history of the World Wide Web as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, when most sites consisted of static pages and the vast majority of users were consumers, not producers, of content.[14][15] Web 2.0 is based around the idea of "the web as platform"[16] and centers on user-created content uploaded to forums, social media and networking services, blogs, and wikis, among other services.[17] Web 2.0 is generally considered to have begun around 2004 and continues to the current day.[16][18][5]

Web3 is distinct from Tim Berners-Lee's 1999 concept of a Semantic Web, which was also sometimes referred to as Web 3.0.[19] While the Semantic Web envisioned a web of linked data, web3 in the blockchain context refers to a decentralized internet built upon distributed ledger technologies.[20] Some writers referring to the decentralized concept usually known as "web3" have used the term "Web 3.0", leading to some confusion between the two concepts.[21][22] Furthermore, some visions of web3 also incorporate ideas relating to the Semantic Web.[23][24]

The term "web3" was coined by Polkadot founder and Ethereum co-founder Gavin Wood in 2014, referring to a "decentralized online ecosystem based on blockchain."[1] In 2021, the idea of web3 gained popularity.[25] Particular interest spiked toward the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[5][6] Executives from venture capital firm Andreessen Horowitz traveled to Washington, DC, in October 2021 to lobby for the idea as a potential solution to questions about regulation of the web with which policymakers have been grappling.[26]

Specific visions for web3 differ, and the term has been described by Olga Kharif as "hazy", but they revolve around the idea of decentralization and often incorporate blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[5] Kharif has described web3 as an idea that "would build financial assets, in the form of tokens, into the inner workings of almost anything you do online".[27] A policy brief published by the Bennett Institute for Public Policy at the University of Cambridge defined web3 as "the putative next generation of the web's technical, legal, and payments infrastructure—including blockchain, smart contracts and cryptocurrencies."[28]

Some visions are based around the concept of decentralized autonomous organizations (DAOs).[29] Decentralized finance (DeFi) is another key
concept; in it, users exchange currency without bank or government involvement.[5] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[30]

Academic researchers, such as Tomer J. Chaffer and Justin Goldston in 2022, have described web3 as a possible solution to concerns about the over-centralization of the web in a few "Big Tech" companies.[31][5][26] Some have expressed the notion that web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[32] Bloomberg states that skeptics say the idea "is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders".[27] The New York Times reported that several investors are betting $27 billion that web3 "is the future of the internet".[33][34]

Some Web 2.0 companies, including Reddit and Discord, have explored incorporating web3 technologies into their platforms.[5][35] On November 8, 2021, CEO Jason Citron tweeted a screenshot suggesting Discord might be exploring integrating cryptocurrency wallets into its platform. Two days later, after heavy user backlash,[35][36] Discord announced it had no plans to integrate such technologies, and that the screenshot showed an internal-only concept developed in a company-wide hackathon.[36]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child pornography.[37] But the news website also states that "[the decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures".

Some other critics of web3 see the concept as part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[35] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs.[6] Cryptocurrencies vary in efficiency, with proof of stake having been designed to be less energy-intensive than the more widely used proof of work, although there is disagreement about how secure and decentralized this is in practice.[38][39][40][41] Others have expressed the belief that web3 and the associated technologies are a pyramid scheme.[6]

Jack Dorsey, co-founder and former CEO of Twitter, dismissed web3 as a "venture capitalists' plaything".[42] Dorsey opined that web3 will not democratize the internet but will shift power from players like Facebook to venture capital funds like Andreessen Horowitz.[9]

Liam Proven, writing for The Register, concludes that web3 is "a myth, a fairy story. It's what parents tell their kids about at night if they want them to grow up to become economists".[43]

In 2021, SpaceX and Tesla CEO Elon Musk expressed skepticism about web3 in a tweet, saying that web3 "seems more marketing buzzword than reality right now."[11]

In November 2021, James Grimmelmann of Cornell University referred to web3 as vaporware, calling it "a promised future internet that fixes all the things people don't like about the current internet, even when it's contradictory."
Grimmelmann also argued that moving the internet toward a blockchain-focused infrastructure would centralize the web and lead to more data collection compared to the current internet.[10]

Software engineer Stephen Diehl described web3 in a blog post as a "vapid marketing campaign that attempts to reframe the public's negative associations of crypto assets into a false narrative about disruption of legacy tech company hegemony."[44]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[45] has said that "many so-called 'Web 3.0' solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market", adding that this "may change, but it's not a given that all these limitations will be overcome".[46]

In early 2022, Moxie Marlinspike, creator of Signal, articulated how web3 is not as decentralized as it appears to be, mainly due to consolidation in the cryptocurrency field: blockchain application programming interfaces are currently mainly controlled by the companies Alchemy and Infura; cryptocurrency exchanges are mainly dominated by Binance, Coinbase, MetaMask, and OpenSea; and the stablecoin market is currently dominated by Tether. Marlinspike also remarked that the new web resembles the old web.[47][48][49]
https://en.wikipedia.org/wiki/Web3
A causal map can be defined as a network consisting of links or arcs between nodes or factors, such that a link between C and E means, in some sense, that someone believes or claims C has or had some causal influence on E.

This definition could cover diagrams representing causal connections between variables which are measured in a strictly quantitative way, and would therefore also include closely related statistical models like structural equation models[1] and directed acyclic graphs (DAGs).[2] However, the phrase "causal map" is usually reserved for qualitative or merely semi-quantitative maps. In this sense, causal maps can be seen as a type of concept map; systems diagrams and fuzzy cognitive maps[3] also fall under this definition. Causal maps have been used since the 1970s by researchers and practitioners in a range of disciplines, from management science[4] to ecology,[5] employing a variety of methods and serving many purposes. Different kinds of causal maps can be distinguished particularly by the kind of information which can be encoded by the links and nodes. One important distinction is to what extent the links are intended to encode causation or (somebody's) belief about causation.

Causal mapping is the process of constructing, summarising and drawing inferences from a causal map, and more broadly can refer to sets of techniques for doing this. While one group of such methods is actually called "causal mapping", there are many similar methods which go by a wide variety of names. The phrase "causal mapping" goes back at least to Robert Axelrod,[7] based in turn on Kelly's personal construct theory.[14] The idea of wanting to understand the behaviour of actors in terms of the internal "maps" of the world which they carry around with them goes back further, to Kurt Lewin[15] and the field theorists.[16] Causal mapping in this sense is loosely based on "concept mapping" and "cognitive mapping", and sometimes the three terms are used interchangeably, though the latter two are usually understood to be broader, including maps in which the links between factors are not necessarily causal and which are therefore not causal maps. Literature on the theory and practice of causal mapping includes a few canonical works[7] as well as book-length interdisciplinary overviews[17][18] and guides to particular approaches.[19]

In software testing, a cause–effect graph is a directed graph that maps a set of causes to a set of effects. The causes may be thought of as the input to the program, and the effects may be thought of as the output. Usually the graph shows the nodes representing the causes on the left side and the nodes representing the effects on the right side. There may be intermediate nodes in between that combine inputs using logical operators such as AND and OR.

Constraints may be added to the causes and effects. These are represented as edges labeled with the constraint symbol, drawn with a dashed line. For causes, the valid constraint symbols are E (exclusive), O (one and only one), I (at least one), and R (requires). The exclusive constraint states that at most one of causes 1 and 2 can be true, i.e. both cannot be true simultaneously. The inclusive (at least one) constraint states that at least one of causes 1, 2 or 3 must be true, i.e. all cannot be false simultaneously. The one-and-only-one (OaOO, or simply O) constraint states that exactly one of causes 1, 2 or 3 must be true.
The requires constraint states that if cause 1 is true, then cause 2 must be true; it is impossible for 1 to be true and 2 to be false. For effects, the only valid constraint symbol is M (mask). The mask constraint states that if effect 1 is true then effect 2 is false; note that, unlike the other constraints, the mask constraint relates to the effects rather than the causes.

The graph is read from the causes on the left to the effects on the right, and it can always be rearranged so there is only one intermediate node between any input and any output; see conjunctive normal form and disjunctive normal form. A cause–effect graph is useful for generating a reduced decision table. A small sketch of constraint checking follows.
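As a minimal sketch of the cause constraints described above (the data structure and names are invented for illustration; a real tool would also evaluate the AND/OR nodes to derive the effects):

```javascript
// Encode cause constraints from a cause-effect graph and check a
// boolean assignment of causes against them.
const constraints = [
  { type: "E", causes: ["c1", "c2"] },       // at most one true
  { type: "I", causes: ["c1", "c2", "c3"] }, // at least one true
  { type: "O", causes: ["c1", "c2", "c3"] }, // exactly one true
  { type: "R", causes: ["c1", "c2"] },       // c1 implies c2
];

function satisfies(assign, con) {
  const truths = con.causes.filter((c) => assign[c]).length;
  switch (con.type) {
    case "E": return truths <= 1;
    case "I": return truths >= 1;
    case "O": return truths === 1;
    case "R": return !assign[con.causes[0]] || assign[con.causes[1]];
  }
}

// Only assignments satisfying every constraint become columns of the
// reduced decision table.
const assign = { c1: false, c2: true, c3: false };
console.log(constraints.every((c) => satisfies(assign, c))); // true
```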
https://en.wikipedia.org/wiki/Cause%E2%80%93effect_graph
Personalized marketing, also known as one-to-one marketing or individual marketing,[1] is a marketing strategy by which companies use data analysis and digital technology to show adverts to individuals based on their perceived characteristics and interests. Marketers use methods from data collection, analytics, digital electronics, and digital economics to gather data about consumers, then use technology to analyze it and show personalized ads based on algorithms that attempt to deduce people's interests.

Personalized marketing is dependent on many different types of technology for data collection, data classification, data analysis, data transfer, and data scalability. Technology enables marketing professionals to collect first-party data such as gender, age group, location, and income, and to connect them with third-party data such as click-through rates of online banner ads and social media participation.

Data management platforms: a data management platform[2] (DMP) is a centralized computing system for collecting, integrating, and managing large sets of structured and unstructured data from disparate sources. Personalized marketing enabled by DMPs is sold to advertisers with the goal of having consumers receive relevant, timely, engaging, and personalized messaging and advertisements that resonate with their unique needs and wants.[2] A growing number of DMP software options are available, from Adobe Systems Audience Manager and Core Audience (Marketing Cloud) to Oracle-acquired BlueKai, Sitecore Experience Platform, and X+1.[3]

Customer relationship management platforms: customer relationship management (CRM) is used by companies to manage and analyze customer interactions and data throughout the customer lifecycle, improving relationships, boosting retention, and driving sales growth. CRM systems are designed to compile information on customers across different channels (points of contact between the customer and the company), which could include the company's website, live support, direct mail, marketing materials, and social media. CRM systems can also give customer-facing staff detailed information on customers' personal information, purchase history, buying preferences, and concerns.[4] The most popular enterprise CRM applications are Salesforce.com, Microsoft Dynamics CRM, NetSuite, Hubspot, and Oracle Eloqua.

Beacon technology: beacon technology works on Bluetooth low energy (BLE), which is used by a low-frequency chip found in devices like mobile phones. These chips communicate with multiple beacon devices to form a network, and are used by marketers to better personalize messaging and mobile ads based on the customer's proximity to their retail outlet.[5] Beacon hardware has also shrunk over time, ultimately facilitating its use.[5]

One-to-one marketing[6] refers to marketing strategies applied directly to a specific consumer. Knowledge of the consumer's preferences enables suggesting specific products and promotions to each consumer. One-to-one marketing is based on four main steps in order to fulfill its goals: identify, differentiate, interact, and customize.[7]

The goal of personalized marketing includes improving the customer experience by delivering customized interactions and offers, ultimately leading to increased customer loyalty. By understanding individualized consumer needs, a brand can create personalized ads and products that effectively target its desired consumers, fostering satisfaction.
Personalized marketing aims to create consumer satisfaction, driving brand loyalty and repeat business.[8] Personalized marketing is also used by businesses to engage in personalized pricing, which is a form of price discrimination.

Personalized marketing is being adopted in one form or another by many different companies because of the benefits it brings for both the businesses and their customers. Described below are the costs and benefits of personalized marketing for businesses and customers.

Prior to the Internet, businesses faced challenges in measuring the success of their marketing campaigns. A campaign would be launched, and even if there was a change in revenue, it was often challenging to determine what impact the campaign had on the change. Personalized marketing allows businesses to learn more about customers based on demographic, contextual, and behavioral data. This behavioral data, together with the ability to track consumers' habits, allows firms to better determine what advertising campaigns and marketing efforts are bringing customers in and what demographics they are influencing.[9] This allows firms to drop efforts that are ineffective, as well as put more money into the techniques that are bringing in customers.[10]

Some personalized marketing can also be automated, increasing the efficiency of a business's marketing strategy. For example, an automated email could be sent to a user shortly after an order is placed, giving suggestions for similar items or accessories that may help the customer better use the product ordered, or a mobile app could send a notification about relevant deals to a customer when he or she is close to a store.[11]

Consumers are presented with a wide range of products and services to choose from. A single retail website may offer a large variety of products, and few customers have the time or inclination to browse through everything retailers have to offer. At the same time, customers expect ease and convenience in their shopping experience. In a recent survey, 74% of consumers said they get frustrated when websites have content, offers, ads, and promotions that have nothing to do with them. Many even said that they would leave a site if its marketing was the opposite of their tastes, such as prompts to donate to a political party they dislike, or ads for a dating service shown to a visitor who is married. In addition, the top two reasons customers unsubscribe from marketing emailing lists are that (1) they receive too many emails and (2) the content of the emails is not relevant to them.[12]

Personalized marketing helps to bridge the gap between the vastness of what is available and customers' need for a streamlined shopping experience. By providing a customized experience, frustrations over purchase choices may be avoided, and customers may be able to find what they are looking for more efficiently, reducing the time spent searching through unrelated content and products. Consumers have become accustomed to this type of user experience, which caters to their interests, from companies that have created ultra-customized digital experiences, such as Amazon[13] and Netflix.[14]

Personalized marketing is gaining headway and has become a point of popular interest with the emergence of relevant and supportive technologies like data management platforms, geotargeting, and various forms of social media. It is now believed to be an inevitable baseline for the future of marketing strategy and for future business success in competitive markets.
Adapting to technology: companies must adopt the relevant technologies in order for personalized marketing to be implemented. They may need to familiarize themselves with forms of social media, data-gathering platforms, and other technologies. Companies have access to machine learning, big data, and AI tools that automate personalization processes.[15]

Restructuring current business models: time and resources are necessary to adopt new marketing systems tailored to the most relevant technologies. Organized planning, communication, and restructuring within businesses are essential to successfully implement personalized marketing.

Personalized marketing prompts businesses to consider customer data and relevant outside information. Company databases are filled with expansive personal information, such as individuals' geographic locations and potential buyers' past purchases, which raises concerns about how that information is gathered, circulated internally and externally, and used to increase profits.[16]

Legal liabilities: to address concerns about sensitive information being gathered and utilized without obvious consumer consent, liabilities and legalities have to be set and enforced. To prevent privacy issues, companies must clear legal hurdles before personalized marketing is adopted.[17] Specifically, the EU has passed rigid regulation, known as the GDPR, that limits what kinds of data marketers can collect on their users and provides ways in which consumers can sue companies for violation of their privacy. In the US, California has followed suit and passed the CCPA in 2018.[18]

Algorithms generate data by analyzing and associating it with user preferences, such as browsing history and personal profiles. Rather than discovering new facts or perspectives, a user will be presented with similar or adjoining concepts (a "filter bubble"). Some consider this exploitation of existing ideas rather than discovery of new ones.[19] Presenting someone with only personalized content may also exclude other, unrelated news or information that might in fact be useful to the user.[19]

Algorithms may also be flawed. In February 2015, Coca-Cola ran into trouble over an automated, algorithm-generated bot created for advertising purposes. Gawker's editorial labs director, Adam Pash, created a Twitter bot, @MeinCoke, and set it up to tweet lines from Mein Kampf and then link to them with Coca-Cola's campaign #MakeItHappy. This resulted in Coca-Cola's Twitter feed broadcasting big chunks of Adolf Hitler's text.[20] In November 2014, the New England Patriots were forced to apologize after an automated, algorithm-generated bot was tricked into tweeting a racial slur from the official team account.[21]

Personalized marketing has proven most effective in interactive media, particularly on the internet. A website has the ability to track a customer's interests and make suggestions based on the collected data. Many sites help customers make choices by organizing information and prioritizing it based on the individual's liking. In some cases, the product itself can be customized using a configuration system.[22]

The business movement during Web 1.0 leveraged database technology to target products, ads, and services to specific users with particular profile attributes.
The concept was supported by technologies such as BroadVision, ATG, and BEA. Amazon is a classic example of a company that performs "one-to-one marketing" by offering users targeted offers and related products.[23]

The term "one-to-one marketing" refers to personalized marketing behavior towards an individual based on received data. Due to its nature, one-to-one marketing is often referred to as relationship marketing, since it creates a personalized relationship with each individual consumer.[24]

McKinsey has identified four problems that prevent companies from implementing large-scale personalization.[25]
https://en.wikipedia.org/wiki/Personalized_marketing
Integrated injection logic (IIL, I2L, or I²L) is a class of digital circuits built with multiple-collector bipolar junction transistors (BJTs).[1] When introduced, it had speed comparable to TTL yet was almost as low-power as CMOS, making it ideal for use in VLSI (and larger) integrated circuits. Gates can be made smaller in this logic family than in CMOS because complementary transistors are not needed. Although the logic voltage levels are very close (high: 0.7 V, low: 0.2 V), I²L has high noise immunity because it operates on current rather than voltage. I²L was developed in 1971 by Siegfried K. Wiedmann and Horst H. Berger, who originally called it merged-transistor logic (MTL).[2] A disadvantage of this logic family is that the gates draw power when not switching, unlike in CMOS.

The I²L inverter gate is constructed from a PNP common-base current-source transistor and an NPN common-emitter open-collector inverter transistor (i.e. its emitter is connected to ground). On a wafer, these two transistors are merged. A small voltage (around 1 volt) is supplied to the emitter of the current-source transistor to control the current supplied to the inverter transistor. Transistors are used as current sources on integrated circuits because they are much smaller than resistors. Because the inverter is open-collector, a wired AND operation may be performed by connecting an output from each of two or more gates together; the fan-out of an output used in such a way is one. However, additional outputs may be produced by adding more collectors to the inverter transistor. The gates can be constructed very simply, with just a single layer of interconnect metal.

In a discrete implementation of an I²L circuit, bipolar NPN transistors with multiple collectors can be replaced with multiple discrete three-terminal NPN transistors connected in parallel, with their bases connected together and their emitters connected likewise. The current-source transistor may be replaced with a resistor from the positive supply to the base of the inverter transistor, since discrete resistors are smaller and less expensive than discrete transistors. Similarly, the merged PNP current-injector transistor and the NPN inverter transistor can be implemented as separate discrete components.

The heart of an I²L circuit is the common-emitter open-collector inverter. Typically, an inverter consists of an NPN transistor with the emitter connected to ground and the base biased with a forward current from the current source. The input is supplied to the base as either a current sink (low logic level) or a high-impedance floating condition (high logic level); the output of the inverter, at the collector, is likewise either a current sink (low logic level) or a high-impedance floating condition (high logic level). Like direct-coupled transistor logic, there is no resistor between the output (collector) of one NPN transistor and the input (base) of the following transistor.

To understand how the inverter operates, it is necessary to understand the current flow. If the bias current is shunted to ground (low logic level at the input), the transistor turns off and the collector floats (high logic level at the output). If the bias current is not shunted to ground because the input is high-impedance (high logic level), the bias current flows through the transistor to the emitter, switching the transistor on and allowing the collector to sink current (low logic level).
Because the output of the inverter can sink current but cannot source current, it is safe to connect the outputs of multiple inverters together to form a wired AND gate. When the outputs of two inverters are wired together, the result is a two-input NOR gate, because the configuration (NOT A) AND (NOT B) is equivalent to NOT (A OR B), per De Morgan's theorem. If the output of that NOR gate is then passed through a further I²L inverter, the result is a two-input OR gate. A logic-level sketch of this construction appears at the end of this section.

Due to internal parasitic capacitance in the transistors, higher currents sourced into the base of the inverter transistor result in faster switching speeds, and since the voltage difference between high and low logic levels is smaller for I²L than for other bipolar logic families (around 0.5 volts instead of around 3.3 or 5 volts), losses due to charging and discharging parasitic capacitances are minimized.

I²L is relatively simple to construct on an integrated circuit, and was commonly used before the advent of CMOS logic by companies such as Motorola (now NXP Semiconductors)[3] and Texas Instruments. In 1975, Sinclair Radionics introduced one of the first consumer-grade digital watches, the Black Watch, which used I²L technology.[4] In 1976, Texas Instruments introduced the SBP0400 CPU, which used I²L technology. In the late 1970s, RCA used I²L in its CA3162 three-digit ADC meter integrated circuit. In 1979, HP introduced a frequency-measurement instrument, the HP 5315A/B, based on an HP-made custom LSI chip that uses integrated injection logic for low power consumption and high density, enabling portable battery operation, together with some emitter function logic (EFL) circuits where high speed is needed.[5]
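The following is a purely logic-level model of the wired-AND construction above, not an electrical simulation (the function names are invented; an open-collector output is modeled as "floats" = true, "sinks current" = false):

```javascript
const HIGH = true, LOW = false;

// The I2L inverter gate.
const inverter = (input) => !input;

// Wired AND of open-collector outputs: the shared node is high only if
// every connected output floats; any output that sinks pulls it low.
const wiredAnd = (...outputs) => outputs.every((o) => o === HIGH);

// (NOT A) AND (NOT B) === NOT (A OR B): a two-input NOR, per De Morgan.
const nor = (a, b) => wiredAnd(inverter(a), inverter(b));

// One more inverter turns the NOR into an OR.
const or = (a, b) => inverter(nor(a, b));

console.log(nor(LOW, LOW)); // true  (HIGH)
console.log(or(HIGH, LOW)); // true  (HIGH)
```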
https://en.wikipedia.org/wiki/Integrated_injection_logic
Hexadecimal (also known as base-16 or simply hex) is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system, which represents numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often "0"–"9" to represent values 0 to 9 and "A"–"F" to represent values from ten to fifteen.

Software developers and system designers widely use hexadecimal numbers because they provide a convenient representation of binary-coded values. Each hexadecimal digit represents four bits (binary digits), also known as a nibble (or nybble).[1] For example, an 8-bit byte is two hexadecimal digits, and its value can be written as 00 to FF in hexadecimal. In mathematics, a subscript is typically used to specify the base: for example, the decimal value 711 would be expressed in hexadecimal as 2C7₁₆. In programming, several notations denote hexadecimal numbers, usually involving a prefix; the prefix 0x is used in C, which would denote this value as 0x2C7. Hexadecimal is used in the transfer encoding Base16, in which each byte of the plain text is broken into two 4-bit values and represented by two hexadecimal digits.

In most current use cases, the letters A–F or a–f represent the values 10–15, while the numerals 0–9 are used to represent their decimal values. There is no universal convention to use lowercase or uppercase, so each is prevalent or preferred in particular environments by community standards or convention; even mixed case is used. Some seven-segment displays use mixed-case "A b C d E F" to distinguish the digits A–F from one another and from 0–9. There is some standardization of using spaces (rather than commas or another punctuation mark) to separate hex values in a long list. For instance, in a typical hex dump, each 8-bit byte is a 2-digit hex number, with spaces between them, while the 32-bit offset at the start of each line is an 8-digit hex number.

In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript (itself written in decimal) can give the base explicitly: 159₁₀ is decimal 159; 159₁₆ is hexadecimal 159, which equals 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h. Donald Knuth introduced the use of a particular typeface to represent a particular radix in his book The TeXbook;[2] hexadecimal representations are written there in a typewriter typeface, e.g. 5A3, C1F27ED. In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen, such as the 0x prefix of C and many later languages, a trailing h common in assembly languages, and the $ prefix used in Pascal and some assemblers. Sometimes the numbers are simply known from context to be hex.

The use of the letters A through F to represent the digits above 9 was not universal in the early history of computers. Since there were no traditional numerals to represent the quantities from ten to fifteen, alphabetic letters were re-employed as a substitute; most European languages lack non-decimal-based words for some of the numerals eleven to fifteen. Some people read hexadecimal numbers digit by digit, like a phone number, or using the NATO phonetic alphabet, the Joint Army/Navy Phonetic Alphabet, or a similar ad hoc system.
In the wake of the adoption of hexadecimal among IBM System/360 programmers, Magnuson (1968)[23] suggested a pronunciation guide that gave short names to the letters of hexadecimal – for instance, "A" was pronounced "ann", B "bet", C "chris", etc.[23] Another naming system was published online by Rogers (2007)[24] that tries to make the verbal representation distinguishable in any case, even when the actual number does not contain the digits A–F. Yet another naming system was elaborated by Babb (2015), based on a joke in Silicon Valley;[25] the system proposed by Babb was further improved by Atkins-Bittner in 2015–2016.[26] Others have proposed using the verbal Morse code conventions to express four-bit hexadecimal digits, with "dit" and "dah" representing zero and one, respectively, so that "0000" is voiced as "dit-dit-dit-dit" (....), "dah-dit-dit-dah" (-..-) voices the digit with a value of nine, and "dah-dah-dah-dah" (----) voices the hexadecimal digit for decimal 15.

Systems of counting on digits have been devised for both binary and hexadecimal. Arthur C. Clarke suggested using each finger as an on/off bit, allowing finger counting from zero to 1023₁₀ on ten fingers.[27] Other systems allow counting up to FF₁₆ (255₁₀) on two hands.

The hexadecimal system can express negative numbers the same way as decimal: −2A to represent −42₁₀, −B01D9 to represent −721369₁₀, and so on. Hexadecimal can also be used to express the exact bit patterns used in the processor, so a sequence of hexadecimal digits may represent a signed or even a floating-point value. This way, the negative number −42₁₀ can be written as FFFF FFD6 in a 32-bit CPU register (in two's complement), as C228 0000 in a 32-bit FPU register, or as C045 0000 0000 0000 in a 64-bit FPU register (in the IEEE floating-point standard).

Just as decimal numbers can be represented in exponential notation, so too can hexadecimal numbers. P notation uses the letter P (or p, for "power"), whereas E (or e) serves a similar purpose in decimal E notation. The number after the P is decimal and represents the binary exponent. Increasing the exponent by 1 multiplies by 2, not 16: 20p0 = 10p1 = 8p2 = 4p3 = 2p4 = 1p5. Usually, the number is normalized so that the hexadecimal digits start with 1. (zero is usually 0 with no P). Example: 1.3DEp42 represents 1.3DE₁₆ × 2⁴². P notation is required by the IEEE 754-2008 binary floating-point standard, and can be used for floating-point literals in the C99 edition of the C programming language.[28] Using the %a or %A conversion specifiers, this notation can be produced by implementations of the printf family of functions following the C99 specification[29] and the Single Unix Specification (IEEE Std 1003.1) POSIX standard.[30] A small sketch of both ideas follows.

Most computers manipulate binary data, but it is difficult for humans to work with a large number of digits for even a relatively small binary number. Although most humans are familiar with the base-10 system, it is much easier to map binary to hexadecimal than to decimal, because each hexadecimal digit maps to a whole number of bits (4₁₀). As an example, convert 1111₂ to base ten: since each position in a binary numeral can contain either a 1 or a 0, its value may be easily determined by its position from the right; the four positions are worth 8, 4, 2 and 1, and therefore 1111₂ = 8 + 4 + 2 + 1 = 15₁₀. With little practice, mapping 1111₂ to F₁₆ in one step becomes easy (see the table in written representation). The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number; when the number becomes large, conversion to decimal is very tedious.
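Before turning to binary–hexadecimal grouping, here is a small sketch of the two's-complement and P-notation representations mentioned above (JavaScript has no hex-float literal, so the P-notation parser here is hand-rolled; the function name is invented):

```javascript
// The bit pattern of -42 in a 32-bit register: reinterpreting it as
// unsigned (>>> 0) exposes the two's-complement form FFFFFFD6.
console.log(((-42) >>> 0).toString(16).toUpperCase()); // "FFFFFFD6"

// P notation: hexadecimal significand, decimal binary exponent.
function parseHexP(s) {
  const [mantissa, exponent] = s.split(/p/i);
  const [intPart, fracPart = ""] = mantissa.split(".");
  let value = parseInt(intPart, 16);
  for (let i = 0; i < fracPart.length; i++) {
    value += parseInt(fracPart[i], 16) / Math.pow(16, i + 1);
  }
  return value * Math.pow(2, parseInt(exponent, 10));
}

console.log(parseHexP("1.8p3"));   // 1.5 * 2^3 = 12
console.log(parseHexP("1.3DEp42")); // the example from the text
```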
When mapping to hexadecimal, by contrast, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit.[31] To see the contrast, convert a binary number to decimal by mapping each digit to its decimal place value and adding the results: for example, 1101 0111₂ = 128 + 64 + 16 + 4 + 2 + 1 = 215₁₀. In the conversion to hexadecimal, each group of four digits can be considered independently and converted directly: 1101₂ = D and 0111₂ = 7, so 1101 0111₂ = D7₁₆. The conversion from hexadecimal to binary is equally direct.[31]

Although quaternary (base 4) is little used, it can easily be converted to and from hexadecimal or binary. Each hexadecimal digit corresponds to a pair of quaternary digits, and each quaternary digit corresponds to a pair of binary digits; for example, 2 5 C₁₆ = 02 11 30₄. The octal (base 8) system can also be converted with relative ease, although not quite as trivially as with bases 2 and 4. Each octal digit corresponds to three binary digits, rather than four. Therefore, one can convert between octal and hexadecimal via an intermediate conversion to binary, followed by regrouping the binary digits in groups of either three or four.

As with all bases, there is a simple algorithm for converting a representation of a number to hexadecimal by doing integer division and remainder operations in the source base. In theory, this is possible from any base, but for most humans only decimal, and for most computers only binary (which can be converted by far more efficient methods), can be easily handled with this method. Let d be the number to represent in hexadecimal, and let the series hᵢhᵢ₋₁...h₂h₁ be the hexadecimal digits representing the number: repeatedly take d mod 16 as the next digit hᵢ (working from the least significant digit upward), replace d with its integer quotient by 16, and stop when d reaches zero. "16" may be replaced with any other base that may be desired. A JavaScript implementation of this algorithm, converting a number to its hexadecimal string representation, is sketched at the end of this section; its purpose is to illustrate the algorithm, and to work with data seriously it is much more advisable to use bitwise operators.

It is also possible to make the conversion by assigning each place in the source base the hexadecimal representation of its place value, before carrying out multiplication and addition to get the final representation. For example, to convert the number B3AD to decimal, one can split the hexadecimal number into its digits, B (11₁₀), 3 (3₁₀), A (10₁₀) and D (13₁₀), and then get the final result by multiplying each decimal representation by 16ᵖ (p being the corresponding hex digit position, counting from right to left, beginning with 0). In this case, B3AD = (11 × 16³) + (3 × 16²) + (10 × 16¹) + (13 × 16⁰), which is 45997 in base 10.

Many computer systems provide a calculator utility capable of performing conversions between the various radices, frequently including hexadecimal. In Microsoft Windows, the Calculator, in its Programmer mode, allows conversions between hexadecimal and other common programming bases. Elementary operations such as division can be carried out indirectly through conversion to an alternate numeral system, such as the commonly used decimal system or the binary system, where each hex digit corresponds to four binary digits. Alternatively, one can perform elementary operations directly within the hex system itself, by relying on its addition/multiplication tables and its corresponding standard algorithms such as long division and the traditional subtraction algorithm.
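The JavaScript implementation referenced above appears to have been lost in extraction; the following is a reconstruction of the repeated-division algorithm it described (the function name is invented, and as noted, bitwise operators or the built-in Number.prototype.toString(16) are preferable in practice):

```javascript
// Convert a number to a hexadecimal string by repeated integer division:
// each remainder modulo 16 becomes the next (less significant) digit.
function toHex(d) {
  const digits = "0123456789ABCDEF";
  let hex = "";
  let n = Math.trunc(Math.abs(d));
  do {
    hex = digits[n % 16] + hex; // remainder is the next digit
    n = Math.trunc(n / 16);     // integer division by the base
  } while (n > 0);
  return (d < 0 ? "-" : "") + hex;
}

console.log(toHex(45997)); // "B3AD", matching the worked example above
```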
As with other numeral systems, the hexadecimal system can be used to represent rational numbers, although repeating expansions are common, since sixteen (10₁₆) has only a single prime factor: two. For any base, 0.1 (or "1/10") is always equivalent to one divided by the representation of that base value in its own number system: thus, whether dividing one by two for binary or dividing one by sixteen for hexadecimal, both of these fractions are written as 0.1. Because the radix 16 is a perfect square (4²), fractions expressed in hexadecimal have an odd period much more often than decimal ones, and there are no cyclic numbers (other than trivial single digits). Recurring digits are exhibited when the denominator in lowest terms has a prime factor not found in the radix; thus, when using hexadecimal notation, all fractions with denominators that are not a power of two result in an infinite string of recurring digits (such as thirds and fifths). This makes hexadecimal (and binary) less convenient than decimal for representing rational numbers, since a larger proportion lies outside its range of finite representation.

All rational numbers finitely representable in hexadecimal are also finitely representable in decimal, duodecimal and sexagesimal: that is, any hexadecimal number with a finite number of digits also has a finite number of digits when expressed in those other bases. Conversely, only a fraction of those finitely representable in the latter bases are finitely representable in hexadecimal. For example, decimal 0.1 corresponds to the infinite recurring representation 0.1999...₁₆. However, hexadecimal is more efficient than duodecimal and sexagesimal for representing fractions with powers of two in the denominator: for example, 0.0625₁₀ (one-sixteenth) is equivalent to 0.1₁₆, 0.09₁₂, and 0;3,45₆₀. Expansions of common irrational numbers can likewise be tabulated in both decimal and hexadecimal (a short sketch of computing such expansions appears at the end of this passage). Powers of two have very simple expansions in hexadecimal: each is a single digit 1, 2, 4 or 8 followed by zeros, since every fourth power of two (2⁰, 2⁴, 2⁸, ...) is a power of sixteen.

The traditional Chinese units of measurement were base-16. For example, one jīn (斤) in the old system equals sixteen taels. The suanpan (Chinese abacus) can be used to perform hexadecimal calculations such as additions and subtractions.[32] As with the duodecimal system, there have been occasional attempts to promote hexadecimal as the preferred numeral system. These attempts often propose specific pronunciation and symbols for the individual numerals.[33] Some proposals unify standard measures so that they are multiples of 16.[34][35] An early such proposal was put forward by John W. Nystrom in Project of a New System of Arithmetic, Weight, Measure and Coins: Proposed to be called the Tonal System, with Sixteen to the Base, published in 1862.[36] Nystrom among other things suggested hexadecimal time, which subdivides a day by 16, so that there are 16 "hours" (or "10 tims", pronounced "tontim") in a day.[37]
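Returning to the fractional expansions discussed above, here is a minimal sketch of generating hexadecimal digits of a fraction by repeated multiplication by 16 (the function name and digit count are illustrative; floating-point rounding limits how many digits are trustworthy):

```javascript
// Expand a fraction 0 <= x < 1 into hexadecimal digits: at each step,
// multiply by 16 and peel off the integer part as the next digit.
function fracToHex(x, digits) {
  const hex = "0123456789ABCDEF";
  let out = "0.";
  for (let i = 0; i < digits; i++) {
    x *= 16;
    const d = Math.floor(x);
    out += hex[d];
    x -= d;
  }
  return out;
}

console.log(fracToHex(1 / 16, 4)); // "0.1000" - terminates
console.log(fracToHex(0.1, 4));    // "0.1999" - the recurring 9s
```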
French hexadécimal, Italian esadecimale, Romanian hexazecimal, Serbian хексадецимални, etc.) but others have introduced terms which substitute native words for "sixteen" (e.g. Greek δεκαεξαδικός, Icelandic sextándakerfi, Russian шестнадцатеричной, etc.). Terminology and notation did not become settled until the end of the 1960s. In 1969, Donald Knuth argued that the etymologically correct term would be senidenary, or possibly sedenary, a Latinate term intended to convey "grouped by 16" modelled on binary, ternary, quaternary, etc. According to Knuth's argument, the correct terms for decimal and octal arithmetic would be denary and octonary, respectively.[41] Alfred B. Taylor used senidenary in his mid-1800s work on alternative number bases, although he rejected base 16 because of its "incommodious number of digits".[42][43] The now-current notation using the letters A to F established itself as the de facto standard beginning in 1966, in the wake of the publication of the Fortran IV manual for IBM System/360, which (unlike earlier variants of Fortran) recognized a standard for entering hexadecimal constants.[44] As noted above, alternative notations were used by NEC (1960) and the Pacific Data Systems 1020 (1964). The standard adopted by IBM seems to have become widely adopted by 1968, when Bruce Alan Martin, in his letter to the editor of the CACM, complained that: With the ridiculous choice of letters A, B, C, D, E, F as hexadecimal number symbols adding to already troublesome problems of distinguishing octal (or hex) numbers from decimal numbers (or variable names), the time is overripe for reconsideration of our number symbols. This should have been done before poor choices gelled into a de facto standard! Martin's argument was that use of numerals 0 to 9 in nondecimal numbers "imply to us a base-ten place-value scheme": "Why not use entirely new symbols (and names) for the seven or fifteen nonzero digits needed in octal or hex. Even use of the letters A through P would be an improvement, but entirely new symbols could reflect the binary nature of the system".[19] He also argued that "re-using alphabetic letters for numerical digits represents a gigantic backward step from the invention of distinct, non-alphabetic glyphs for numerals sixteen centuries ago" (as Brahmi numerals, and later in a Hindu–Arabic numeral system), and that the recent ASCII standards (ASA X3.4-1963 and USAS X3.4-1968) "should have preserved six code table positions following the ten decimal digits -- rather than needlessly filling these with punctuation characters" (":;<=>?") that might have been placed elsewhere among the 128 available positions. Base16 (as a proper name without a space) can also refer to a binary-to-text encoding belonging to the same family as Base32, Base58, and Base64. In this case, data is broken into 4-bit sequences, and each value (between 0 and 15 inclusive) is encoded using one of 16 symbols from the ASCII character set. Although any 16 symbols from the ASCII character set can be used, in practice the ASCII digits "0"–"9" and the letters "A"–"F" (or the lowercase "a"–"f") are always chosen in order to align with standard written notation for hexadecimal numbers. Base16 encoding offers several advantages over other members of this family, along with some disadvantages. Support for Base16 encoding is ubiquitous in modern computing. It is the basis for the W3C standard for URL percent encoding, where a character is replaced with a percent sign "%" and its Base16-encoded form, as sketched below.
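As a small illustration of Base16 in percent encoding, the helper below (percentEncodeByte, an invented name) builds the encoded form of a single byte; standard JavaScript already provides encodeURIComponent for whole strings.

```javascript
// Sketch: percent-encode one byte value as "%" plus its two-digit Base16 form.
function percentEncodeByte(byte) {
  return "%" + byte.toString(16).toUpperCase().padStart(2, "0");
}

console.log(percentEncodeByte(0x20));  // "%20", the encoding of a space
console.log(encodeURIComponent(" "));  // "%20" via the built-in function
```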
Most modern programming languages directly include support for formatting and parsing Base16-encoded numbers.
https://en.wikipedia.org/wiki/Hexadecimal
Computer security compromised by hardware failure is a branch of computer security applied to hardware. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users.[1] Such secret information could be retrieved in different ways. This article focuses on the retrieval of data through misused hardware or hardware failure. Hardware can be misused or exploited to obtain secret data. This article collects the main types of attack that can lead to data theft. Computer security can be compromised through devices, such as keyboards, monitors or printers (for example through electromagnetic or acoustic emanations), or through components of the computer, such as the memory, the network card or the processor (for example through timing or temperature analysis). The monitor is the main device used to access data on a computer. It has been shown that monitors radiate or reflect data onto their environment, potentially giving attackers access to information displayed on the monitor. Video display units radiate electromagnetic signals. Known as compromising emanations or TEMPEST radiation (a code word for a U.S. government programme aimed at attacking the problem), the electromagnetic broadcast of data has been a significant concern in sensitive computer applications. Eavesdroppers can reconstruct video screen content from radio frequency emanations.[3] Each (radiated) harmonic of the video signal shows a remarkable resemblance to a broadcast TV signal. It is therefore possible to reconstruct the picture displayed on the video display unit from the radiated emission by means of a normal television receiver.[2] If no preventive measures are taken, eavesdropping on a video display unit is possible at distances up to several hundreds of meters, using only a normal black-and-white TV receiver, a directional antenna and an antenna amplifier. It is even possible to pick up information from some types of video display units at a distance of over 1 kilometer. If more sophisticated receiving and decoding equipment is used, the maximum distance can be much greater.[4] What is displayed by the monitor is also reflected onto its environment. The time-varying diffuse reflections of the light emitted by a CRT monitor can be exploited to recover the original monitor image.[5] This is an eavesdropping technique for spying at a distance on data that is displayed on an arbitrary computer screen, including the currently prevalent LCD monitors. The technique exploits reflections of the screen's optical emanations in various objects that one commonly finds close to the screen and uses those reflections to recover the original screen content. Such objects include eyeglasses, tea pots, spoons, plastic bottles, and even the eye of the user. This attack can be successfully mounted to spy on even small fonts using inexpensive, off-the-shelf equipment (less than 1500 dollars) from a distance of up to 10 meters. Relying on more expensive equipment allowed this attack to be conducted from over 30 meters away, demonstrating that similar attacks are feasible from the other side of the street or from a nearby building.[6] Many objects that may be found at a usual workplace can be exploited by an outsider to retrieve information on a computer's display.[7] Particularly good results were obtained from reflections in a user's eyeglasses or a tea pot located on the desk next to the screen.
Reflections that stem from the eye of the user also provide good results. However, eyes are harder to spy on at a distance because they are fast-moving objects and require high exposure times. Using more expensive equipment with lower exposure times helps to remedy this problem.[8] The reflections gathered from curved surfaces on nearby objects indeed pose a substantial threat to the confidentiality of data displayed on the screen. Fully removing this threat without hiding the screen from the legitimate user at the same time seems difficult, short of using curtains on the windows or similar forms of strong optical shielding. Most users, however, will not be aware of this risk and may not be willing to close the curtains on a nice day.[9] The reflection of an object, such as a computer display, in a curved mirror creates a virtual image that is located behind the reflecting surface. For a flat mirror, this virtual image has the same size and is located behind the mirror at the same distance as the original object. For curved mirrors, however, the situation is more complex.[10] Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes.[11] Electromagnetic emanations have turned out to constitute a security threat to computer equipment.[9] The figure below presents how a keystroke is retrieved and what material is necessary. The approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Using this method, four different kinds of compromising electromagnetic emanations have been detected, generated by wired and wireless keyboards. These emissions lead to a full or a partial recovery of the keystrokes. The best practical attack fully recovered 95% of the keystrokes of a PS/2 keyboard at a distance of up to 20 meters, even through walls.[11] Because each keyboard has a specific fingerprint based on clock frequency inconsistencies, the source keyboard of a compromising emanation can be determined, even if multiple keyboards of the same model are used at the same time.[12] The four kinds of compromising electromagnetic emanations are described below. When a key is pressed, released or held down, the keyboard sends a packet of information known as a scan code to the computer.[13] The protocol used to transmit these scan codes is a bidirectional serial communication, based on four wires: Vcc (5 volts), ground, data and clock.[13] Clock and data signals are identically generated. Hence, the compromising emanation detected is the combination of both signals. However, the edges of the data and the clock lines are not superposed. Thus, they can easily be separated to obtain independent signals.[14] The Falling Edge Transition attack is limited to a partial recovery of the keystrokes, which is a significant limitation.[15] The GTT is an improved falling edge transition attack, which recovers almost all keystrokes. Indeed, between two traces, there is exactly one data rising edge. If attackers are able to detect this transition, they can fully recover the keystrokes.[15] The harmonics of the compromising electromagnetic emissions come from unintentional emanations such as radiations emitted by the clock, non-linear elements, crosstalk, ground pollution, etc.
Determining the causes of these compromising radiations theoretically is a very complex task.[16] These harmonics correspond to a carrier of approximately 4 MHz, which is very likely the internal clock of the micro-controller inside the keyboard. These harmonics are correlated with both clock and data signals, which describe modulated signals (in amplitude and frequency) and the full state of both clock and data signals. This means that the scan code can be completely recovered from these harmonics.[16] Keyboard manufacturers arrange the keys in a matrix. The keyboard controller, often an 8-bit processor, parses the columns one by one and recovers the state of 8 keys at once. This matrix scan process can be described as 192 keys (some keys may not be used; for instance, modern keyboards use 104/105 keys) arranged in 24 columns and 8 rows.[17] These columns are continuously pulsed one by one for at least 3 μs. Thus, these leads may act as an antenna and generate electromagnetic emanations. If an attacker is able to capture these emanations, he can easily recover the column of the pressed key. Even if this signal does not fully describe the pressed key, it still gives partial information on the transmitted scan code, i.e. the column number.[17] Note that the matrix scan routine loops continuously. When no key is pressed, we still have a signal composed of multiple equidistant peaks. These emanations may be used to remotely detect the presence of powered computers. Concerning wireless keyboards, the wireless data burst transmission can be used as an electromagnetic trigger to detect exactly when a key is pressed, while the matrix scan emanations are used to determine the column it belongs to.[17] Some techniques can only target certain keyboards. The table below sums up which techniques can be used to recover keystrokes for different kinds of keyboards. In their paper "Compromising Electromagnetic Emanations of Wired and Wireless Keyboards", Martin Vuagnoux and Sylvain Pasini tested 12 different keyboard models, with PS/2 and USB connectors and wireless communication, in different setups: a semi-anechoic chamber, a small office, an adjacent office and a flat in a building. The table below presents their results. Attacks against emanations caused by human typing have attracted interest in recent years. In particular, works have shown that keyboard acoustic emanations leak information that can be exploited to reconstruct the typed text.[18] PC and notebook keyboards are vulnerable to attacks based on differentiating the sound emanated by different keys.[19] One such attack takes as input an audio signal containing a recording of a single word typed by a single person on a keyboard, together with a dictionary of words. It is assumed that the typed word is present in the dictionary. The aim of the attack is to reconstruct the original word from the signal.[20] Another attack takes as input a 10-minute sound recording of a user typing English text on a keyboard and recovers up to 96% of the typed characters.[21] This attack is inexpensive, because the only other hardware required is a parabolic microphone, and non-invasive, because it does not require physical intrusion into the system.
The attack employs a neural network to recognize the key being pressed.[19] It combines signal processing and efficient data structures and algorithms to successfully reconstruct single words of 7–13 characters from a recording of the clicks made when typing them on a keyboard.[18] The sound of clicks can differ slightly from key to key, because the keys are positioned at different places on the keyboard plate, even though the clicks of different keys sound similar to the human ear.[19] On average, there were only 0.5 incorrect recognitions per 20 clicks, which shows how exposed keyboards are to eavesdropping via this attack.[22] The attack is very efficient, taking under 20 seconds per word on a standard PC. It achieves a success rate of 90% or better in finding the correct word for words of 10 or more characters, and a success rate of 73% over all the words tested.[18] In practice, a human attacker can typically determine whether text is random. An attacker can also identify occasions when the user types user names and passwords.[23] Short audio signals containing a single word of seven or more characters were considered. This means that the signal is only a few seconds long. Such short words are often chosen as passwords.[18] The dominant factors affecting the attack's success are the word length and, more importantly, the number of repeated characters within the word.[18] This procedure makes it possible to efficiently uncover a word from audio recordings of keyboard click sounds.[24] More recently, extracting information from another type of emanation was demonstrated: acoustic emanations from mechanical devices such as dot-matrix printers.[18] While extracting private information by watching somebody typing on a keyboard might seem to be an easy task, it becomes extremely challenging if it has to be automated. However, an automated tool is needed for long-lasting surveillance procedures or long user activity, as a human being is able to reconstruct only a few characters per minute. The paper "ClearShot: Eavesdropping on Keyboard Input from Video" presents a novel approach to automatically recovering the text being typed on a keyboard, based solely on a video of the user typing.[25] Automatically recognizing the keys being pressed by a user is a hard problem that requires sophisticated motion analysis. Experiments show that, for a human, reconstructing a few sentences requires hours of slow-motion analysis of the video.[26] The attacker might install a surveillance device in the room of the victim, might take control of an existing camera by exploiting a vulnerability in the camera's control software, or might simply point a mobile phone with an integrated camera at the laptop's keyboard when the victim is working in a public space.[26] Balzarotti's analysis is divided into two main phases (figure below). The first phase analyzes the video recorded by the camera using computer vision techniques. For each frame of the video, the computer vision analysis computes the set of keys that were likely pressed, the set of keys that were certainly not pressed, and the position of space characters. Because the results of this phase of the analysis are noisy, a second phase, called the text analysis, is required. The goal of this phase is to remove errors using both language and context-sensitive techniques.
The result of this phase is the reconstructed text, where each word is represented by a list of possible candidates, ranked by likelihood.[26] With acoustic emanations, an attack that recovers what a dot-matrix printer processing English text is printing is possible. It is based on a recording of the sound the printer makes, if the microphone is close enough to it. This attack recovers up to 72% of printed words, and up to 95% if knowledge about the text is assumed, with a microphone at a distance of 10 cm from the printer.[27] After an upfront training phase ("a" in the picture below), the attack ("b" in the picture below) is fully automated and uses a combination of machine learning, audio processing, and speech recognition techniques, including spectrum features, hidden Markov models and linear classification.[5] The fundamental reason why the reconstruction of the printed text works is that the emitted sound becomes louder if more needles strike the paper at a given time.[9] There is a correlation between the number of needles and the intensity of the acoustic emanation.[9] A training phase was conducted in which words from a dictionary are printed and the characteristic sound features of these words are extracted and stored in a database. The trained characteristic features were then used to recognize the printed English text.[9] This task is not trivial, however, and poses several major challenges. Timing attacks enable an attacker to extract secrets maintained in a security system by observing the time it takes the system to respond to various queries.[28] SSH is designed to provide a secure channel between two hosts. Despite the encryption and authentication mechanisms it uses, SSH has weaknesses. In interactive mode, every individual keystroke that a user types is sent to the remote machine in a separate IP packet immediately after the key is pressed, which leaks the inter-keystroke timing information of users' typing. Below, the picture represents the command su processed through an SSH connection. Very simple statistical techniques suffice to reveal sensitive information such as the length of users' passwords or even root passwords, as sketched below.
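As an illustrative sketch only (the function name and the sample timestamps below are hypothetical, not data from the cited study), an eavesdropper who records packet arrival times from an interactive session can count keystrokes and compute inter-keystroke gaps:

```javascript
// Each interactive keystroke travels in its own packet, so the gaps between
// packet arrival times (in seconds) approximate inter-keystroke intervals.
function interKeystrokeTimings(arrivalTimes) {
  const gaps = [];
  for (let i = 1; i < arrivalTimes.length; i++) {
    gaps.push(arrivalTimes[i] - arrivalTimes[i - 1]);
  }
  return gaps;
}

// Hypothetical capture: six packets imply six keystrokes - already enough
// to leak the length of a password, which is never echoed to the screen.
const arrivals = [0.0, 0.21, 0.35, 0.52, 0.78, 0.91];
console.log(interKeystrokeTimings(arrivals)); // ≈ [0.21, 0.14, 0.17, 0.26, 0.13]
```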
By using advanced statistical techniques on timing information collected from the network, the eavesdropper can learn significant information about what users type in SSH sessions.[29] Because the time it takes the operating system to send out the packet after the keypress is in general negligible compared to the inter-keystroke timing, this also enables an eavesdropper to learn the precise inter-keystroke timings of users' typing from the arrival times of packets.[30] Data remanence problems not only affect obvious areas such as RAM and non-volatile memory cells but can also occur in other areas of the device through hot-carrier effects (which change the characteristics of the semiconductors in the device) and various other effects which are examined alongside the more obvious memory-cell remanence problems.[31] It is possible to analyse and recover data from these cells and from semiconductor devices in general long after it should (in theory) have vanished.[32] Electromigration, in which atoms are physically moved to new locations (physically altering the device itself), is another type of attack.[31] It involves the relocation of metal atoms due to high current densities, a phenomenon in which atoms are carried along by an "electron wind" in the opposite direction to the conventional current, producing voids at the negative electrode and hillocks and whiskers at the positive electrode. Void formation leads to a local increase in current density and Joule heating (the interaction of electrons and metal ions to produce thermal energy), producing further electromigration effects. When the external stress is removed, the disturbed system tends to relax back to its original equilibrium state, resulting in a backflow which heals some of the electromigration damage. In the long term, though, this can cause device failure, but in less extreme cases it simply serves to alter a device's operating characteristics in noticeable ways. For example, the excavation of voids leads to increased wiring resistance, and the growth of whiskers leads to contact formation and current leakage.[33] Conductors may thus exhibit whisker growth due to electromigration, or void formation severe enough to lead to complete failure. Contrary to popular assumption, DRAMs used in most modern computers retain their contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard.[34] Many products do cryptographic and other security-related computations using secret keys or other variables that the equipment's operator must not be able to read out or alter. The usual solution is for the secret data to be kept in volatile memory inside a tamper-sensing enclosure. Security processors typically store secret key material in static RAM, from which power is removed if the device is tampered with. At temperatures below −20 °C, the contents of SRAM can be 'frozen'. It is interesting to know the period of time for which a static RAM device will retain data once the power has been removed. Low temperatures can increase the data retention time of SRAM to many seconds or even minutes.[35] Maximillian Dornseif presented a technique in a set of slides which let him take control of an Apple computer using an iPod. The attack needed a first generic phase where the iPod software was modified so that it behaved as master on the FireWire bus.
The iPod then had full read/write access to the Apple computer when it was plugged into a FireWire port.[36] FireWire is used by audio devices, printers, scanners, cameras, GPS units, etc. Generally, a device connected by FireWire has full access (read/write). Indeed, the OHCI standard (the FireWire standard) reads: "Physical requests, including physical read, physical write and lock requests to some CSR registers (section 5.5), are handled directly by the Host Controller without assistance by system software." So any device connected by FireWire can read and write data in the computer's memory. To increase computational power, processors are generally equipped with a cache memory, which decreases the memory access latency. Below, the figure shows the hierarchy between the processor and the memory. First the processor looks for data in the cache L1, then L2, then in main memory. When the data is not where the processor is looking, the event is called a cache miss. Below, pictures show how the processor fetches data when there are two cache levels. Unfortunately, caches contain only a small portion of the application data and can introduce additional latency to the memory transaction in the case of a miss. This also involves additional power consumption, due to the activation of memory devices further down the memory hierarchy. The miss penalty has already been used to attack symmetric encryption algorithms, like DES.[37] The basic idea proposed in this paper is to force a cache miss while the processor is executing the AES encryption algorithm on a known plain text.[38] The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization.[39] By carefully measuring the amount of time required to perform private key operations, attackers may be able to find fixed Diffie-Hellman exponents, factor RSA keys, and break other cryptosystems. Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext.[40] The attack can be treated as a signal detection problem. The signal consists of the timing variation due to the target exponent bit, and the noise results from measurement inaccuracies and timing variations due to unknown exponent bits. The properties of the signal and noise determine the number of timing measurements required for the attack. Timing attacks can potentially be used against other cryptosystems, including symmetric functions.[41] A simple and generic processor backdoor can be used by attackers as a means of privilege escalation, to obtain privileges equivalent to those of any given running operating system.[42] Also, a non-privileged process in one of the non-privileged guest domains running on top of a virtual machine monitor can obtain privileges equivalent to those of the virtual machine monitor.[42] Loïc Duflot studied Intel processors in the paper "CPU bugs, CPU backdoors and consequences on security"; he explains that the processor defines four different privilege rings numbered from 0 (most privileged) to 3 (least privileged). Kernel code usually runs in ring 0, whereas user-space code generally runs in ring 3. The use of some security-critical assembly language instructions is restricted to ring 0 code.
In order to escalate privileges through the backdoor, the attacker must perform a specific sequence of steps.[43] The backdoors Loïc Duflot presents are simple, as they only modify the behavior of three assembly language instructions and have very simple and specific activation conditions, so that they are very unlikely to be accidentally activated. Recent inventions have begun to target these types of processor-based escalation attacks.
https://en.wikipedia.org/wiki/Computer_security_compromised_by_hardware_failure
In population genetics, the Watterson estimator is a method for describing the genetic diversity in a population. It was developed by Margaret Wu and G. A. Watterson in the 1970s.[1][2] It is estimated by counting the number of polymorphic sites. It is a measure of the "population mutation rate" (the product of the effective population size and the neutral mutation rate) from the observed nucleotide diversity of a population, $\theta = 4N_e\mu$,[3] where $N_e$ is the effective population size and $\mu$ is the per-generation mutation rate of the population of interest (Watterson (1975)). The assumptions made are that there is a sample of $n$ haploid individuals from the population of interest with effective size $N_e$, that $n \ll N_e$, and that there are infinitely many sites capable of varying (so that mutations never overlay or reverse one another). Because the number of segregating sites counted will increase with the number of sequences looked at, the correction factor $a_n$ is used. The estimate of $\theta$, often denoted $\hat{\theta}_w$, is

$\hat{\theta}_w = \frac{K}{a_n},$

where $K$ is the number of segregating sites (an example of a segregating site would be a single-nucleotide polymorphism) in the sample and

$a_n = \sum_{i=1}^{n-1} \frac{1}{i}$

is the $(n-1)$th harmonic number. This estimate is based on coalescent theory. Watterson's estimator is commonly used for its simplicity. When its assumptions are met, the estimator is unbiased and the variance of the estimator decreases with increasing sample size or recombination rate. However, the estimator can be biased by population structure. For example, $\hat{\theta}_w$ is downwardly biased in an exponentially growing population. It can also be biased by violation of the infinite-sites mutational model; if multiple mutations can overwrite one another, Watterson's estimator will be biased downward. Comparing the value of Watterson's estimator to the nucleotide diversity is the basis of Tajima's D, which allows inference of the evolutionary regime of a given locus.
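A minimal sketch of the estimator (wattersonTheta is an illustrative name, and the data in the example is hypothetical):

```javascript
// Watterson's estimator: K segregating sites observed in a sample of n
// sequences, divided by the (n-1)th harmonic number a_n.
function wattersonTheta(K, n) {
  let a_n = 0;
  for (let i = 1; i <= n - 1; i++) {
    a_n += 1 / i;               // a_n = 1 + 1/2 + ... + 1/(n-1)
  }
  return K / a_n;
}

// Hypothetical data: 10 segregating sites among n = 5 sampled sequences.
console.log(wattersonTheta(10, 5)); // 10 / (1 + 1/2 + 1/3 + 1/4) ≈ 4.8
```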
https://en.wikipedia.org/wiki/Watterson_estimator
A semantic decision table uses modern ontology engineering technologies to enhance a traditional decision table. The term "semantic decision table" was coined by Yan Tang and Prof. Robert Meersman from VUB STARLab (Free University of Brussels) in 2006.[1] A semantic decision table is a set of decision tables properly annotated with an ontology. It provides a means to capture and examine decision makers' concepts, as well as a tool for refining their decision knowledge and facilitating knowledge sharing in a scalable manner. A decision table is defined as a "tabular method of showing the relationship between a series of conditions and the resultant actions to be executed".[2] Following the de facto international standard (CSA, 1970), a decision table contains three building blocks: the conditions, the actions (or decisions), and the rules. A decision condition is constructed from a condition stub and a condition entry. A condition stub is declared as a statement of a condition. A condition entry provides a value assigned to the condition stub. Similarly, an action (or decision) comprises two elements: an action stub and an action entry. One states an action with an action stub. An action entry specifies whether (or in what order) the action is to be performed. A decision table separates the data (the condition entries and decision/action entries) from the decision templates (the condition stubs, decision/action stubs, and the relations between them). Or rather, a decision table can be a tabular result of its meta-rules. Traditional decision tables have many advantages compared to other decision support formats, such as if-then-else programming statements, decision trees and Bayesian networks. A traditional decision table is compact and easily understandable. However, it still has several limitations. For instance, a decision table often faces the problems of conceptual ambiguity and conceptual duplication[citation needed], and it is time consuming to create and maintain large decision tables[citation needed]. Semantic decision tables are an attempt to solve these problems. A semantic decision table is modeled based on the framework of Developing Ontology-Grounded Methods and Applications (DOGMA[3]). DOGMA separates an ontology into extremely simple linguistic structures (also known as lexons) and a layer of lexon constraints used by applications (also known as ontological commitments), aiming to achieve a degree of scalability. Within the DOGMA framework, a semantic decision table consists of a layer of binary decision fact types called semantic decision table lexons and a semantic decision table commitment layer that consists of the constraints and axioms of these fact types. A lexon $l$ is a quintuple $\langle \gamma, t_1, r_1, r_2, t_2 \rangle$, where $t_1$ and $t_2$ represent two concepts in a natural language (e.g., English); $r_1$ and $r_2$ (the role and co-role) refer to the relationships that the concepts share with respect to one another; and $\gamma$ is a context identifier that refers to a context, which serves to disambiguate the terms $t_1, t_2$ into the intended concepts, and in which they become meaningful. For example, the lexon <γ, driver's license, is issued to, has, driver> expresses the facts that "a driver's license is issued to a driver" and "a driver has a driver's license", as sketched below.
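As a rough illustration only (the field names are invented, and this is not part of the DOGMA specification), such a lexon could be written down as a plain data structure and read in both directions:

```javascript
// One possible encoding of the lexon <γ, driver's license, is issued to, has, driver>.
const lexon = {
  context: "gamma",        // context identifier γ
  term1: "driver's license",
  role: "is issued to",    // r1
  coRole: "has",           // r2
  term2: "driver",
};

console.log(`${lexon.term1} ${lexon.role} ${lexon.term2}`);   // "driver's license is issued to driver"
console.log(`${lexon.term2} ${lexon.coRole} ${lexon.term1}`); // "driver has driver's license"
```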
The ontological commitment layer formally defines selected rules and constraints by which an application (or "agent") may make use of lexons. A commitment can contain various constraints, rules and axiomatized binary facts, based on needs. It can be modeled in different modeling tools, such as object-role modeling, conceptual graphs, and the Unified Modeling Language. A semantic decision table contains richer decision rules than a decision table. During the annotation process, the decision makers need to specify all the implicit rules, including the hidden decision rules and the meta-rules of a set of decision tables. The semantics of these rules is derived from an agreement between the decision makers observing the real-world decision problems. The process of capturing semantics within a community is a process of knowledge acquisition.
https://en.wikipedia.org/wiki/Semantic_decision_table
Environmental sociology is the study of interactions between societies and their natural environment. The field emphasizes the social factors that influence environmental resource management and cause environmental issues, the processes by which these environmental problems are socially constructed and defined as social issues, and societal responses to these problems.[1] Environmental sociology emerged as a subfield of sociology in the late 1970s in response to the emergence of the environmental movement in the 1960s. It represents a relatively new area of inquiry focusing on an extension of earlier sociology through the inclusion of physical context as related to social factors.[2] Environmental sociology is typically defined as the sociological study of socio-environmental interactions, although this definition immediately presents the problem of integrating human cultures with the rest of the environment.[3] Different aspects of human interaction with the natural environment are studied by environmental sociologists, including population and demography, organizations and institutions, science and technology, health and illness, consumption and sustainability practices,[4] culture and identity,[5] and social inequality and environmental justice.[6] Although the focus of the field is the relationship between society and environment in general, environmental sociologists typically place special emphasis on studying the social factors that cause environmental problems, the societal impacts of those problems, and efforts to solve the problems. In addition, considerable attention is paid to the social processes by which certain environmental conditions become socially defined as problems. Most research in environmental sociology examines contemporary societies. Environmental sociology emerged as a coherent subfield of inquiry after the environmental movement of the 1960s and early 1970s. The works of William R. Catton, Jr. and Riley Dunlap,[7] among others, challenged the constricted anthropocentrism of classical sociology. In the late 1970s, they called for a new holistic, or systems, perspective, which led to a marked shift in the field's focus. Since the 1970s, general sociology has noticeably transformed to include environmental forces in social explanations. Environmental sociology has now solidified as a respected, interdisciplinary field of study in academia.[8][9] The duality of the human condition rests with cultural uniqueness and evolutionary traits. From one perspective, humans are embedded in the ecosphere and co-evolved alongside other species. Humans share the same basic ecological dependencies as other inhabitants of nature. From the other perspective, humans are distinguished from other species because of their innovative capacities, distinct cultures and varied institutions.[10] Human creations have the power to independently manipulate, destroy, and transcend the limits of the natural environment.[11] According to Buttel (2004), there are five major traditions in environmental sociology today: the treadmill of production and other eco-Marxisms, ecological modernization and other sociologies of environmental reform, cultural-environmental sociologies, neo-Malthusianisms, and the new ecological paradigm.[12] In practice, this means five different theories of what to blame for environmental degradation, i.e., what to research or consider as important.
These ideas are listed below in the order in which they were invented. Ideas that emerged later built on earlier ideas, and contradicted them.[citation needed] Works such as Hardin's "Tragedy of the Commons" (1968) reformulated Malthusian thought about abstract population increases causing famines into a model of individual selfishness at larger scales causing degradation of common pool resources such as the air, water, the oceans, or general environmental conditions. Hardin offered privatization of resources or government regulation as solutions to environmental degradation caused by tragedy of the commons conditions. Many other sociologists shared this view of solutions well into the 1970s (see Ophuls). There have been many critiques of this view, particularly from political scientist Elinor Ostrom and economists Amartya Sen and Ester Boserup.[13] Sociologists have developed a critical counter to Hardin's thesis called the Tragedy of the Commodity. Even though much of mainstream journalism considers Malthusianism the only view of environmentalism, most sociologists would disagree with Malthusianism, since social-organizational issues have been shown to cause environmental degradation more than abstract population growth or selfishness per se. As an example of this critique, Ostrom in her book Governing the Commons: The Evolution of Institutions for Collective Action (1990) argues that instead of self-interest always causing degradation, it can sometimes motivate people to take care of their common property resources. To do this they must change the basic organizational rules of resource use. Her research provides evidence for sustainable resource management systems around common pool resources that have lasted for centuries in some areas of the world.[14] Amartya Sen argues in his book Poverty and Famines: An Essay on Entitlement and Deprivation (1981) that population expansion fails to cause famines or degradation as Malthusians or neo-Malthusians argue. Instead, in documented cases, a lack of political entitlement to resources that exist in abundance causes famines in some populations. He documents how famines can occur even in the midst of plenty or in the context of low populations. He argues that famines (and environmental degradation) would only occur in non-functioning democracies or unrepresentative states. Ester Boserup argues in her book The Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure (1965), from inductive, empirical case analysis, that Malthus's more deductive conception of a presumed one-to-one relationship between agricultural scale and population is actually reversed. Instead of agricultural technology and scale determining and limiting population as Malthus attempted to argue, Boserup argued that the world is full of cases of the direct opposite: population changes expand agricultural methods. Eco-Marxist scholar Allan Schnaiberg (below) argues against Malthusianism with the rationale that under larger capitalist economies, human degradation moved from localized, population-based degradation to degradation caused organizationally by capitalist political economies. He gives the example of the organized degradation of rainforest areas, where states and capitalists push people off the land before it is degraded by organizational means.
Thus, many authors are critical of Malthusianism, from sociologists (Schnaiberg) to economists (Sen and Boserup) to political scientists (Ostrom), and all focus on how a country's social organization of its extraction can degrade the environment independent of abstract population. In the 1970s, the New Ecological Paradigm (NEP) conception critiqued the claimed lack of human-environmental focus in the classical sociologists and the sociological priorities their followers created. This was critiqued as the Human Exemptionalism Paradigm (HEP). The HEP viewpoint claims that human-environmental relationships were unimportant sociologically because humans are 'exempt' from environmental forces via cultural change. This view was shaped by the leading Western worldview of the time and the desire for sociology to establish itself as an independent discipline against the then popular racist-biological environmental determinism, in which environment was all. In this HEP view, human dominance was felt to be justified by the uniqueness of culture, argued to be more adaptable than biological traits. Furthermore, culture also has the capacity to accumulate and innovate, making it capable of solving all natural problems. Therefore, as humans were not conceived of as governed by natural conditions, they were felt to have complete control of their own destiny. Any potential limitation posed by the natural world was felt to be surpassable using human ingenuity. Research proceeded accordingly without environmental analysis. In the 1970s, sociological scholars Riley Dunlap and William R. Catton, Jr. began recognizing the limits of what would be termed the Human Exemptionalism Paradigm. Catton and Dunlap (1978) suggested a new perspective that took environmental variables into full account. They coined a new theoretical outlook for sociology, the New Ecological Paradigm, with assumptions contrary to HEP. The NEP recognizes the innovative capacity of humans, but says that humans are still ecologically interdependent, as are other species. The NEP notes the power of social and cultural forces but does not profess social determinism. Instead, humans are impacted by the cause, effect, and feedback loops of ecosystems. The Earth has a finite level of natural resources and waste repositories. Thus, the biophysical environment can impose constraints on human activity. They discussed a few harbingers of this NEP in 'hybridized' theorizing about topics that were neither exclusively social nor environmental explanations of environmental conditions. It was additionally a critique of Malthusian views of the 1960s and 1970s. Dunlap and Catton's work immediately received a critique from Buttel, who argued to the contrary that classical sociological foundations could be found for environmental sociology, particularly in Weber's work on ancient "agrarian civilizations" and Durkheim's view of the division of labor as built on a material premise of specialization in response to material scarcity. This environmental aspect of Durkheim has been discussed by Schnaiberg (1971) as well. The Treadmill of Production is a theory coined and popularized by Schnaiberg as a way to account for the increase in U.S. environmental degradation after World War II.
At its simplest, this theory states that the more products or commodities are created, the more resources will be used, and the higher the impact will be.[15] The treadmill is a metaphor for being caught in a cycle of continuous growth which never stops, demanding more resources and as a result causing more environmental damage. In the middle of the HEP/NEP debate, neo-Marxist ideas of conflict sociology were applied to environmental conflicts. Some sociologists wanted to stretch Marxist ideas of social conflict to analyze environmental social movements from the Marxist materialist framework, instead of interpreting them as a cultural "New Social Movement" separate from material concerns. So "eco-Marxism" was developed, applying neo-Marxist conflict theory concepts of the relative autonomy of the state to environmental conflict.[citation needed] Two people following this school were James O'Connor (The Fiscal Crisis of the State, 1973) and later Allan Schnaiberg. Later, a different trend developed in eco-Marxism via the attention brought to the importance of metabolic analysis in Marx's thought by John Bellamy Foster. Contrary to previous assumptions that classical theorists in sociology had all fallen within a Human Exemptionalist Paradigm, Foster argued that Marx's materialism led him to theorize labor as the metabolic process between humanity and the rest of nature.[16] In the Promethean interpretations of Marx that Foster critiques, there was an assumption that his analysis was very similar to the anthropocentric views critiqued by early environmental sociologists. Instead, Foster argued that Marx himself was concerned about the metabolic rift generated by capitalist society's social metabolism, particularly in industrial agriculture: Marx had identified an "irreparable rift in the interdependent process of social metabolism"[17] created by capitalist agriculture that was destroying the productivity of the land and creating wastes in urban sites that failed to be reintegrated into the land, thus simultaneously leading toward the destruction of urban workers' health.[18] Reviewing the contribution of this thread of eco-Marxism to current environmental sociology, Pellow and Brehm conclude, "The metabolic rift is a productive development in the field because it connects current research to classical theory and links sociology with an interdisciplinary array of scientific literatures focused on ecosystem dynamics."[9] Foster emphasized that his argument presupposed the "magisterial work" of Paul Burkett, who had developed a closely related "red-green" perspective rooted in a direct examination of Marx's value theory. Burkett and Foster proceeded to write a number of articles together on Marx's ecological conceptions, reflecting their shared perspective.[19][20][21] More recently, Jason W. Moore, inspired by Burkett's value-analytical approach to Marx's ecology and arguing that Foster's work did not in itself go far enough, has sought to integrate the notion of metabolic rift with world-systems theory, incorporating Marxian value-related conceptions.[22] For Moore, the modern world-system is a capitalist world-ecology, joining the accumulation of capital, the pursuit of power, and the production of nature in dialectical unity. Central to Moore's perspective is a philosophical re-reading of Marx's value theory, through which abstract social labor and abstract social nature are dialectically bound.
Moore argues that the emergent law of value, from the sixteenth century, was evident in the extraordinary shift in the scale, scope, and speed of environmental change. What took premodern civilizations centuries to achieve, such as the deforestation of Europe in the medieval era, capitalism realized in mere decades. This world-historical rupture, argues Moore, can be explained through a law of value that regards labor productivity as the decisive metric of wealth and power in the modern world. From this standpoint, the genius of capitalist development has been to appropriate uncommodified natures, including uncommodified human natures, as a means of advancing labor productivity in the commodity system.[23] In 1975, the highly influential work of Allan Schnaiberg transfigured environmental sociology, proposing a societal-environmental dialectic, though within the 'neo-Marxist' framework of the relative autonomy of the state as well. This conflictual concept has overwhelming political salience. First, the economic synthesis states that the desire for economic expansion will prevail over ecological concerns. Policy will decide to maximize immediate economic growth at the expense of environmental disruption. Secondly, the managed scarcity synthesis concludes that governments will attempt to control only the most dire of environmental problems, to prevent health and economic disasters. This will give the appearance that governments act more environmentally consciously than they really do. Third, the ecological synthesis generates a hypothetical case where environmental degradation is so severe that political forces would respond with sustainable policies. The driving factor would be economic damage caused by environmental degradation. The economic engine would be based on renewable resources at this point. Production and consumption methods would adhere to sustainability regulations.[24] These conflict-based syntheses have several potential outcomes. One is that the most powerful economic and political forces will preserve the status quo and bolster their dominance. Historically, this is the most common occurrence. Another potential outcome is for contending powerful parties to fall into a stalemate. Lastly, tumultuous social events may result that redistribute economic and political resources. In 1980, Allan Schnaiberg's highly influential work entitled The Environment: From Surplus to Scarcity[25][26][27] was a large contribution to this theme of a societal-environmental dialectic. By the 1980s, a critique of eco-Marxism was in the offing, given empirical data from countries (mostly in Western Europe, such as the Netherlands, West Germany and, to some extent, the United Kingdom) that were attempting to wed environmental protection with economic growth instead of seeing them as separate. This was done through both state and capital restructuring. Major proponents of this school of research are Arthur P. J. Mol and Gert Spaargaren.
Popular examples of ecological modernization would be "cradle to cradle" production cycles, industrial ecology, large-scale organic agriculture, biomimicry, permaculture, agroecology and certain strands of sustainable development, all implying that economic growth is possible if that growth is well organized with the environment in mind.[citation needed] Reflexive modernization: the many volumes of the German sociologist Ulrich Beck first argued, from the late 1980s, that our risk society is potentially being transformed by the environmental social movements of the world into structural change, without rejecting the benefits of modernization and industrialization. This is leading to a form of 'reflexive modernization', with a world of reduced risk and a better modernization process in economics, politics, and scientific practices as they are made less beholden to a cycle of protecting risk from correction (which he calls our state's organized irresponsibility): politics creates ecodisasters, then claims responsibility in an accident, yet nothing remains corrected, because doing so would challenge the very structure of the operation of the economy and the private dominance of development. Beck's idea of a reflexive modernization looks forward to how our ecological and social crises in the late 20th century are leading toward transformations of the whole political and economic system's institutions, making them more "rational" with ecology in mind.[citation needed] Neoliberalism includes deregulation and free market capitalism, and aims at reducing government spending. These neoliberal policies greatly affect environmental sociology. Since neoliberalism involves deregulation and essentially less government involvement, it leads to the commodification and privatization of unowned, state-owned, or common property resources. Diana Liverman and Silvina Vilas mention that this results in payments for environmental services; deregulation and cuts in public expenditure for environmental management; the opening up of trade and investment; and the transfer of environmental management to local or nongovernmental institutions.[28] The privatization of these resources has impacts on society, the economy, and the environment. An example that has greatly affected society is the privatization of water. Additionally, in the 1980s, with the rise of postmodernism in the western academy and the appreciation of discourse as a form of power, some sociologists turned to analyzing environmental claims as a form of social construction more than a 'material' requirement. Proponents of this school include John A. Hannigan, particularly in Environmental Sociology: A Social Constructionist Perspective (1995). Hannigan argues for a 'soft constructionism' (environmental problems are materially real, though they require social construction to be noticed) over a 'hard constructionism' (the claim that environmental problems are entirely social constructs). Although there was sometimes acrimonious debate between the constructivist and realist "camps" within environmental sociology in the 1990s, the two sides have found considerable common ground, as both increasingly accept that while most environmental problems have a material reality they nonetheless become known only via human processes such as scientific knowledge, activists' efforts, and media attention.
In other words, most environmental problems have a real ontological status despite our knowledge/awareness of them stemming from social processes, processes by which various conditions are constructed as problems by scientists, activists, media and other social actors. Correspondingly, environmental problems must all be understood via social processes, despite any material basis they may have external to humans. This interactiveness is now broadly accepted, but many aspects of the debate continue in contemporary research in the field.[citation needed] In the United States, the 1960s built strong cultural momentum for environmental causes, giving birth to the modern environmental movement and prompting wide interest among sociologists in analyzing the movement. Widespread green consciousness moved vertically within society, resulting in a series of policy changes across many states in the U.S. and Europe in the 1970s. In the United States, this period was known as the "Environmental Decade", with the creation of the United States Environmental Protection Agency and the passing of the Endangered Species Act, the Clean Water Act, and amendments to the Clean Air Act. Earth Day of 1970, celebrated by millions of participants, represented the modern age of environmental thought. The environmental movement continued with incidents such as Love Canal. While the current mode of thought expressed in environmental sociology was not prevalent until the 1970s, its application is now used in analysis of ancient peoples. Societies including Easter Island, the Anasazi, and the Mayans were argued to have ended abruptly, largely due to poor environmental management. This has been challenged as the exclusive cause in later work, however (by the biologically trained Jared Diamond in Collapse (2005), and by more modern work on Easter Island). The collapse of the Mayans sent a historic message that even advanced cultures are vulnerable to ecological suicide, though Diamond now argues it was less a suicide than an environmental climate change that led to a lack of ability to adapt, and a lack of elite willingness to adapt even when faced much earlier with the signs of nearing ecological problems. At the same time, societal successes for Diamond included New Guinea and Tikopia island, whose inhabitants have lived sustainably for 46,000 years.[citation needed] John Dryzek et al. argue in Green States and Social Movements: Environmentalism in the United States, United Kingdom, Germany, and Norway (2003)[29] that there may be a common global green environmental social movement, though its specific outcomes are nationalist, falling into four 'ideal types' of interaction between environmental movements and state power. They use as their case studies environmental social movements and state interaction from Norway, the United Kingdom, the United States, and Germany. They analyze the past 30 years of environmentalism and the different outcomes that the green movement has taken in different state contexts and cultures.[citation needed] Recently, and roughly in temporal order below, much longer-term comparative historical studies of environmental degradation have been conducted by sociologists. There are two general trends: many employ world-systems theory, analyzing environmental issues over long periods of time and space; and others employ comparative historical methods. Some utilize both methods simultaneously, sometimes without reference to world-systems theory (like Whitaker, see below). Stephen G. Bunker (d. 2005) and Paul S.
Ciccantell collaborated on two books from a world-systems theory view, following commodity chains through the history of the modern world system, charting the changing importance of space, time, and scale of extraction and how these variables influenced the shape and location of the main nodes of the world economy over the past 500 years.[30][31] Their view of the world was grounded in extraction economies and the politics of different states that seek to dominate the world's resources and each other through gaining hegemonic control of major resources or restructuring global flows in them to benefit their locations. The three-volume work of environmental world-systems theory by Sing C. Chew analyzed how "Nature and Culture" interact over long periods of time, starting with World Ecological Degradation (2001).[32][33][34] In later books, Chew argued that there were three "Dark Ages" in world environmental history, characterized by periods of state collapse and reorientation in the world economy, associated with more localist frameworks of community, economy, and identity coming to dominate the nature/culture relationships after state-facilitated environmental destruction delegitimized other forms. Thus, recreated communities were founded in these so-called 'Dark Ages', novel religions were popularized, and, perhaps most importantly to him, the environment had several centuries to recover from previous destruction. Chew argues that modern green politics and bioregionalism are the start of a similar movement of the present day, potentially leading to wholesale system transformation. Therefore, we may be on the edge of yet another global "dark age", one which on many levels is bright instead of dark, since he argues for human community returning with environmental healing as empires collapse. More case-oriented studies were conducted by historical environmental sociologist Mark D. Whitaker, analyzing China, Japan, and Europe over 2,500 years in his book Ecological Revolution (2009).[35] He argued that instead of environmental movements being "New Social Movements" peculiar to current societies, environmental movements are very old, being expressed via religious movements in the past (or in the present, as in ecotheology) that begin to focus on material concerns of health, local ecology, and economic protest against state policy and its extractions. He argues the past and present are very similar: we have participated in a tragic common civilizational process of environmental degradation, economic consolidation, and lack of political representation for many millennia, which has predictable outcomes. He argues that a form of bioregionalism, the bioregional state,[36] is required to deal with political corruption in present or past societies connected to environmental degradation. After looking at the world history of environmental degradation with very different methods, both sociologists Sing Chew and Mark D. Whitaker came to similar conclusions and are proponents of (different forms of) bioregionalism. Several key journals publish work in this field.
https://en.wikipedia.org/wiki/Environmental_sociology
In graph theory, the hypercube graph Qn is the graph formed from the vertices and edges of an n-dimensional hypercube. For instance, the cube graph Q3 is the graph formed by the 8 vertices and 12 edges of a three-dimensional cube. Qn has 2^n vertices and 2^(n−1)·n edges, and is a regular graph with n edges touching each vertex.

The hypercube graph Qn may also be constructed by creating a vertex for each subset of an n-element set, with two vertices adjacent when their subsets differ in a single element, or by creating a vertex for each n-digit binary number, with two vertices adjacent when their binary representations differ in a single digit. It is the n-fold Cartesian product of the two-vertex complete graph, and may be decomposed into two copies of Qn−1 connected to each other by a perfect matching. Hypercube graphs should not be confused with cubic graphs, which are graphs that have exactly three edges touching each vertex. The only hypercube graph Qn that is a cubic graph is the cubical graph Q3.

The hypercube graph Qn may be constructed from the family of subsets of a set with n elements, by making a vertex for each possible subset and joining two vertices by an edge whenever the corresponding subsets differ in a single element. Equivalently, it may be constructed using 2^n vertices labeled with n-bit binary numbers and connecting two vertices by an edge whenever the Hamming distance of their labels is one. These two constructions are closely related: a binary number may be interpreted as a set (the set of positions where it has a 1 digit), and two such sets differ in a single element whenever the corresponding two binary numbers have Hamming distance one.

Alternatively, Qn may be constructed from the disjoint union of two hypercubes Qn−1, by adding an edge from each vertex in one copy of Qn−1 to the corresponding vertex in the other copy. The joining edges form a perfect matching.

The above construction gives a recursive algorithm for constructing the adjacency matrix of a hypercube, A_n. Copying is done via the Kronecker product, so that the two copies of Qn−1 have an adjacency matrix 1_2 ⊗_K A_{n−1}, where 1_d is the identity matrix in d dimensions. Meanwhile the joining edges have an adjacency matrix A_1 ⊗_K 1_{2^{n−1}}. The sum of these two terms gives a recursive formula for the adjacency matrix of a hypercube:

A_n = \begin{cases} 1_{2} \otimes_{K} A_{n-1} + A_{1} \otimes_{K} 1_{2^{n-1}} & \text{if } n > 1 \\ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} & \text{if } n = 1 \end{cases}

Another construction of Qn is the Cartesian product of n two-vertex complete graphs K2. More generally, the Cartesian product of copies of a complete graph is called a Hamming graph; the hypercube graphs are examples of Hamming graphs.

The graph Q0 consists of a single vertex, while Q1 is the complete graph on two vertices. Q2 is a cycle of length 4. The graph Q3 is the 1-skeleton of a cube and is a planar graph with eight vertices and twelve edges. The graph Q4 is the Levi graph of the Möbius configuration. It is also the knight's graph for a toroidal 4 × 4 chessboard.[1]

Every hypercube graph is bipartite: it can be colored with only two colors. The two colors of this coloring may be found from the subset construction of hypercube graphs, by giving one color to the subsets that have an even number of elements and the other color to the subsets with an odd number of elements.
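The Kronecker-product recursion above translates directly into code. The following is a minimal NumPy sketch; the function name and the checks on Q3 are illustrative, not from the source.

import numpy as np

def hypercube_adjacency(n: int) -> np.ndarray:
    """Adjacency matrix of Q_n via the Kronecker recursion described above."""
    A1 = np.array([[0, 1], [1, 0]])
    A = A1
    for k in range(2, n + 1):
        size = A.shape[0]  # 2^(k-1) vertices in each copy of Q_(k-1)
        # Two disjoint copies of Q_(k-1), plus a perfect matching joining them.
        A = np.kron(np.eye(2, dtype=int), A) + np.kron(A1, np.eye(size, dtype=int))
    return A

A3 = hypercube_adjacency(3)
assert A3.shape == (8, 8)
assert A3.sum() // 2 == 12          # Q_3 has 12 edges
assert (A3.sum(axis=1) == 3).all()  # Q_3 is 3-regular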
Every hypercube Qn with n > 1 has a Hamiltonian cycle, a cycle that visits each vertex exactly once. Additionally, a Hamiltonian path exists between two vertices u and v if and only if they have different colors in a 2-coloring of the graph. Both facts are easy to prove using the principle of induction on the dimension of the hypercube, and the construction of the hypercube graph by joining two smaller hypercubes with a matching.

Hamiltonicity of the hypercube is tightly related to the theory of Gray codes. More precisely, there is a bijective correspondence between the set of n-bit cyclic Gray codes and the set of Hamiltonian cycles in the hypercube Qn.[2] An analogous property holds for acyclic n-bit Gray codes and Hamiltonian paths. A lesser known fact is that every perfect matching in the hypercube extends to a Hamiltonian cycle.[3] The question whether every matching extends to a Hamiltonian cycle remains an open problem.[4]

The family Qn for all n > 1 is a Lévy family of graphs. The problem of finding the longest path or cycle that is an induced subgraph of a given hypercube graph is known as the snake-in-the-box problem. Szymanski's conjecture concerns the suitability of a hypercube as a network topology for communications. It states that, no matter how one chooses a permutation connecting each hypercube vertex to another vertex with which it should be connected, there is always a way to connect these pairs of vertices by paths that do not share any directed edge.[9]
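As a small illustration of the Gray-code correspondence, the following Python sketch builds the standard n-bit reflected binary Gray code and verifies that consecutive codewords (cyclically) differ in exactly one bit, i.e., that the sequence traces a Hamiltonian cycle through the vertices of Qn. The helper name is an assumption for the example.

def gray_code(n: int) -> list[int]:
    """n-bit reflected binary Gray code, as integers."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

codes = gray_code(4)
# Cyclically adjacent codewords differ in exactly one bit, so the sequence
# visits every vertex of Q_4 once and returns to the start: a Hamiltonian cycle.
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count("1") == 1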
https://en.wikipedia.org/wiki/Hypercube_graph
Curve fitting[1][2] is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points,[3] possibly subject to constraints.[4][5] Curve fitting can involve either interpolation,[6][7] where an exact fit to the data is required, or smoothing,[8][9] in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis,[10][11] which focuses more on questions of statistical inference, such as how much uncertainty is present in a curve that is fitted to data observed with random errors. Fitted curves can be used as an aid for data visualization,[12][13] to infer values of a function where no data are available,[14] and to summarize the relationships among two or more variables.[15] Extrapolation refers to the use of a fitted curve beyond the range of the observed data,[16] and is subject to a degree of uncertainty[17] since it may reflect the method used to construct the curve as much as it reflects the observed data.

For linear-algebraic analysis of data, "fitting" usually means trying to find the curve that minimizes the vertical (y-axis) displacement of a point from the curve (e.g., ordinary least squares). However, for graphical and image applications, geometric fitting seeks to provide the best visual fit, which usually means trying to minimize the orthogonal distance to the curve (e.g., total least squares), or to otherwise include both axes of displacement of a point from the curve. Geometric fits are not popular because they usually require non-linear and/or iterative calculations, although they have the advantage of a more aesthetic and geometrically accurate result.[18][19][20]

Most commonly, one fits a function of the form y = f(x). The first degree polynomial equation y = ax + b is a line with slope a. A line will connect any two points, so a first degree polynomial equation is an exact fit through any two points with distinct x coordinates. If the order of the equation is increased to a second degree polynomial, y = ax² + bx + c, the result will exactly fit a simple curve to three points. If the order of the equation is increased to a third degree polynomial, y = ax³ + bx² + cx + d, the result will exactly fit four points.

A more general statement would be to say it will exactly fit four constraints. Each constraint can be a point, angle, or curvature (which is the reciprocal of the radius of an osculating circle). Angle and curvature constraints are most often added to the ends of a curve, and in such cases are called end conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. Higher-order constraints, such as "the change in the rate of curvature", could also be added. This, for example, would be useful in highway cloverleaf design to understand the rate of change of the forces applied to a car (see jerk), as it follows the cloverleaf, and to set reasonable speed limits accordingly. The first degree polynomial equation could also be an exact fit for a single point and an angle, while the third degree polynomial equation could also be an exact fit for two points, an angle constraint, and a curvature constraint. Many other combinations of constraints are possible for these and for higher order polynomial equations. If there are more than n + 1 constraints (n being the degree of the polynomial), the polynomial curve can still be run through those constraints.
An exact fit to all constraints is not certain (but might happen, for example, in the case of a first degree polynomial exactly fitting three collinear points). In general, however, some method is then needed to evaluate each approximation. The least squares method is one way to compare the deviations.

There are several reasons to prefer an approximate fit even when it is possible to simply increase the degree of the polynomial equation and get an exact match. A polynomial curve of higher degree than needed for an exact fit is undesirable for all the reasons listed previously for high order polynomials, and also leads to a case where there are an infinite number of solutions. For example, a first degree polynomial (a line) constrained by only a single point, instead of the usual two, would give an infinite number of solutions. This brings up the problem of how to compare and choose just one solution, which can be a problem for both software and humans. Because of this, it is usually best to choose as low a degree as possible for an exact match on all constraints, and perhaps an even lower degree if an approximate fit is acceptable.

Other types of curves, such as trigonometric functions (such as sine and cosine), may also be used in certain cases. In spectroscopy, data may be fitted with Gaussian, Lorentzian, Voigt and related functions. In biology, ecology, demography, epidemiology, and many other disciplines, the growth of a population, the spread of infectious disease, etc. can be fitted using the logistic function. In agriculture, the inverted logistic sigmoid function (S-curve) is used to describe the relation between crop yield and growth factors; a sigmoid regression of data measured in farm lands shows, for example, that at low soil salinity the crop yield decreases slowly with increasing salinity, while thereafter the decrease progresses faster.

If a function of the form y = f(x) cannot be postulated, one can still try to fit a plane curve. Other types of curves, such as conic sections (circular, elliptical, parabolic, and hyperbolic arcs) or trigonometric functions (such as sine and cosine), may also be used in certain cases. For example, trajectories of objects under the influence of gravity follow a parabolic path, when air resistance is ignored. Hence, matching trajectory data points to a parabolic curve would make sense. Tides follow sinusoidal patterns, hence tidal data points should be matched to a sine wave, or the sum of two sine waves of different periods, if the effects of the Moon and Sun are both considered. For a parametric curve, it is effective to fit each of its coordinates as a separate function of arc length; assuming that data points can be ordered, the chord distance may be used.[22]

Coope[23] approaches the problem of trying to find the best visual fit of a circle to a set of 2D data points. The method elegantly transforms the ordinarily non-linear problem into a linear problem that can be solved without using iterative numerical methods, and is hence much faster than previous techniques. The above technique is extended to general ellipses[24] by adding a non-linear step, resulting in a method that is fast, yet finds visually pleasing ellipses of arbitrary orientation and displacement. Note that while this discussion was in terms of 2D curves, much of this logic also extends to 3D surfaces, each patch of which is defined by a net of curves in two parametric directions, typically called u and v.
A surface may be composed of one or more surface patches in each direction.

Many statistical packages such as R and numerical software such as gnuplot, GNU Scientific Library, Igor Pro, MLAB, Maple, MATLAB, TK Solver 6.0, Scilab, Mathematica, GNU Octave, and SciPy include commands for doing curve fitting in a variety of scenarios. There are also programs specifically written to do curve fitting; they can be found in the lists of statistical and numerical-analysis programs as well as in Category:Regression and curve fitting software.
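As a brief illustration using two of the packages named above (NumPy and SciPy), the following sketch shows both an exact polynomial fit through three points and an approximate least-squares fit of a logistic (sigmoid) curve to noisy data, in the spirit of the crop-yield example earlier. All data and parameter values are invented for the example.

import numpy as np
from scipy.optimize import curve_fit

# Exact fit: a second degree polynomial through three points (three constraints).
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 7.0])
coeffs = np.polyfit(x, y, deg=2)            # a, b, c in y = ax^2 + bx + c
assert np.allclose(np.polyval(coeffs, x), y)

# Approximate fit: least-squares logistic regression on noisy synthetic data.
def logistic(x, L, k, x0):
    return L / (1.0 + np.exp(-k * (x - x0)))

rng = np.random.default_rng(0)
xs = np.linspace(-4, 4, 50)
ys = logistic(xs, 2.0, 1.5, 0.3) + rng.normal(0, 0.05, xs.size)
params, _ = curve_fit(logistic, xs, ys, p0=(1.0, 1.0, 0.0))
# params recovers (L, k, x0) approximately, not exactly: a smoothing fit.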
https://en.wikipedia.org/wiki/Curve_fitting
In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors)[1] states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances and an expectation value of zero.[2] The errors do not need to be normal, nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator (which also drops linearity), ridge regression, or simply any degenerate estimator.

The theorem was named after Carl Friedrich Gauss and Andrey Markov, although Gauss' work significantly predates Markov's.[3] But while Gauss derived the result under the assumption of independence and normality, Markov reduced the assumptions to the form stated above.[4] A further generalization to non-spherical errors was given by Alexander Aitken.[5]

Suppose we are given two random variable vectors X, Y ∈ R^k and that we want to find the best linear estimator of Y given X, using the best linear estimator Ŷ = αX + μ, where the parameters α and μ are both real numbers. Such an estimator Ŷ would have the same mean and standard deviation as Y, that is, μ_Ŷ = μ_Y, σ_Ŷ = σ_Y. Therefore, if the vector X has respective mean and standard deviation μ_x, σ_x, the best linear estimator would be

Ŷ = σ_y (X − μ_x)/σ_x + μ_y,

since Ŷ then has the same mean and standard deviation as Y.

Suppose we have, in matrix notation, the linear relationship y = Xβ + ε, expanding to

y_i = Σ_{j=1}^{K} β_j X_{ij} + ε_i  for i = 1, 2, …, n,

where the β_j are non-random but unobservable parameters, the X_ij are non-random and observable (called the "explanatory variables"), the ε_i are random, and so the y_i are random. The random variables ε_i are called the "disturbance", "noise" or simply "error" (this will be contrasted with "residual" later in the article; see errors and residuals in statistics). Note that to include a constant in the model above, one can choose to introduce the constant as a variable β_{K+1}, with a newly introduced last column of X being unity, i.e., X_{i(K+1)} = 1 for all i. Note that though the y_i, as sample responses, are observable, the following statements and arguments, including assumptions and proofs, assume only knowledge of the X_ij, but not of the y_i.

The Gauss–Markov assumptions concern the set of error random variables ε_i: they have mean zero, E[ε_i] = 0; they are homoscedastic, all having the same finite variance, Var(ε_i) = σ² < ∞; and distinct error terms are uncorrelated, Cov(ε_i, ε_j) = 0 for i ≠ j.

A linear estimator of β_j is a linear combination

β̂_j = c_{1j} y_1 + ⋯ + c_{nj} y_n

in which the coefficients c_ij are not allowed to depend on the underlying coefficients β_j, since those are not observable, but are allowed to depend on the values X_ij, since these data are observable.
(The dependence of the coefficients on each X_ij is typically nonlinear; the estimator is linear in each y_i and hence in each random ε, which is why this is "linear" regression.) The estimator is said to be unbiased if and only if

E[β̂_j] = β_j

regardless of the values of X_ij. Now, let Σ_{j=1}^{K} λ_j β_j be some linear combination of the coefficients. Then the mean squared error of the corresponding estimation is

E[(Σ_{j=1}^{K} λ_j (β̂_j − β_j))²];

in other words, it is the expectation of the square of the weighted sum (across parameters) of the differences between the estimators and the corresponding parameters to be estimated. (Since we are considering the case in which all the parameter estimates are unbiased, this mean squared error is the same as the variance of the linear combination.) The best linear unbiased estimator (BLUE) of the vector β of parameters β_j is the one with the smallest mean squared error for every vector λ of linear combination parameters. This is equivalent to the condition that

Var(β̃) − Var(β̂)

is a positive semi-definite matrix for every other linear unbiased estimator β̃.

The ordinary least squares estimator (OLS) is the function

β̂ = (XᵀX)⁻¹ Xᵀ y

of y and X (where Xᵀ denotes the transpose of X) that minimizes the sum of squares of residuals (misprediction amounts):

Σ_{i=1}^{n} (y_i − Σ_{j=1}^{K} β̂_j X_{ij})².

The theorem now states that the OLS estimator is a best linear unbiased estimator (BLUE). The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination a_1 y_1 + ⋯ + a_n y_n whose coefficients do not depend upon the unobservable β but whose expected value is always zero.

Proof that the OLS indeed minimizes the sum of squares of residuals may proceed as follows, with a calculation of the Hessian matrix and a demonstration that it is positive definite. The MSE function we want to minimize is

f(β_0, β_1, …, β_p) = Σ_{i=1}^{n} (y_i − β_0 − β_1 x_{i1} − ⋯ − β_p x_{ip})²

for a multiple regression model with p variables.
The first derivative is

\frac{d}{d\boldsymbol{\beta}} f = -2X^{\mathsf{T}}\left(\mathbf{y} - X\boldsymbol{\beta}\right) = -2 \begin{bmatrix} \sum_{i=1}^{n}(y_i - \dots - \beta_p x_{ip}) \\ \sum_{i=1}^{n} x_{i1}(y_i - \dots - \beta_p x_{ip}) \\ \vdots \\ \sum_{i=1}^{n} x_{ip}(y_i - \dots - \beta_p x_{ip}) \end{bmatrix} = \mathbf{0}_{p+1},

where X is the design matrix

X = \begin{bmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ & & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{bmatrix} \in \mathbb{R}^{n \times (p+1)}, \qquad n \geq p+1.

The Hessian matrix of second derivatives is

\mathcal{H} = 2 \begin{bmatrix} n & \sum_{i=1}^{n} x_{i1} & \cdots & \sum_{i=1}^{n} x_{ip} \\ \sum_{i=1}^{n} x_{i1} & \sum_{i=1}^{n} x_{i1}^{2} & \cdots & \sum_{i=1}^{n} x_{i1}x_{ip} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{n} x_{ip} & \sum_{i=1}^{n} x_{ip}x_{i1} & \cdots & \sum_{i=1}^{n} x_{ip}^{2} \end{bmatrix} = 2X^{\mathsf{T}}X.

Assuming the columns of X are linearly independent, so that X^{\mathsf{T}}X is invertible, let X = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_{p+1} \end{bmatrix}; then

k_1 \mathbf{v}_1 + \dots + k_{p+1} \mathbf{v}_{p+1} = \mathbf{0} \iff k_1 = \dots = k_{p+1} = 0.

Now let \mathbf{k} = (k_1, \dots, k_{p+1})^{\mathsf{T}} \in \mathbb{R}^{(p+1) \times 1} be an eigenvector of \mathcal{H}. Then

\mathbf{k} \neq \mathbf{0} \implies \left\| k_1 \mathbf{v}_1 + \dots + k_{p+1} \mathbf{v}_{p+1} \right\|^{2} > 0.

In terms of vector multiplication, this means

2 \begin{bmatrix} k_1 & \cdots & k_{p+1} \end{bmatrix} \begin{bmatrix} \mathbf{v}_1^{\mathsf{T}} \\ \vdots \\ \mathbf{v}_{p+1}^{\mathsf{T}} \end{bmatrix} \begin{bmatrix} \mathbf{v}_1 & \cdots & \mathbf{v}_{p+1} \end{bmatrix} \begin{bmatrix} k_1 \\ \vdots \\ k_{p+1} \end{bmatrix} = \mathbf{k}^{\mathsf{T}} \mathcal{H} \mathbf{k} = \lambda \mathbf{k}^{\mathsf{T}} \mathbf{k} > 0,

where \lambda is the eigenvalue corresponding to \mathbf{k}. Moreover,

\mathbf{k}^{\mathsf{T}} \mathbf{k} = \sum_{i=1}^{p+1} k_i^{2} > 0 \implies \lambda > 0.

Finally, as the eigenvector \mathbf{k} was arbitrary, all eigenvalues of \mathcal{H} are positive, therefore \mathcal{H} is positive definite. Thus, \boldsymbol{\beta} = (X^{\mathsf{T}}X)^{-1} X^{\mathsf{T}} Y is indeed a global minimum. Alternatively, simply note that for all vectors \mathbf{v}, \mathbf{v}^{\mathsf{T}} X^{\mathsf{T}} X \mathbf{v} = \| X\mathbf{v} \|^{2} \geq 0, so the Hessian is positive definite when X has full column rank.

Let \tilde{\beta} = Cy be another linear estimator of \beta with C = (X^{\mathsf{T}}X)^{-1} X^{\mathsf{T}} + D, where D is a K \times n non-zero matrix.
As we are restricting attention to unbiased estimators, minimum mean squared error implies minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that of β̂, the OLS estimator. We calculate:

E[β̃] = E[Cy] = E[((XᵀX)⁻¹Xᵀ + D)(Xβ + ε)] = β + DXβ.

Therefore, since β is unobservable, β̃ is unbiased if and only if DX = 0. Then:

Var(β̃) = σ²CCᵀ = σ²(XᵀX)⁻¹ + σ²DDᵀ = Var(β̂) + σ²DDᵀ.

Since DDᵀ is a positive semidefinite matrix, Var(β̃) exceeds Var(β̂) by a positive semidefinite matrix.

As has been stated before, the condition that Var(β̃) − Var(β̂) is a positive semidefinite matrix is equivalent to the property that the best linear unbiased estimator of ℓᵀβ is ℓᵀβ̂ (best in the sense that it has minimum variance). To see this, let ℓᵀβ̃ be another linear unbiased estimator of ℓᵀβ. We calculate:

Var(ℓᵀβ̃) = ℓᵀ Var(β̃) ℓ = ℓᵀ Var(β̂) ℓ + σ²ℓᵀDDᵀℓ = Var(ℓᵀβ̂) + σ²‖Dᵀℓ‖² ≥ Var(ℓᵀβ̂).

Moreover, equality holds if and only if Dᵀℓ = 0. This proves that equality holds if and only if ℓᵀβ̃ = ℓᵀβ̂, which gives the uniqueness of the OLS estimator as a BLUE.

The generalized least squares (GLS) estimator, developed by Aitken,[5] extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix.[6] The Aitken estimator is also a BLUE.

In most treatments of OLS, the regressors (parameters of interest) in the design matrix X are assumed to be fixed in repeated samples. This assumption is considered inappropriate for a predominantly nonexperimental science like econometrics.[7] Instead, the assumptions of the Gauss–Markov theorem are stated conditional on X.

The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables. The independent variables can take non-linear forms as long as the parameters are linear. The equation y = β₀ + β₁x² qualifies as linear, while y = β₀ + β₁²x can be transformed to be linear by replacing β₁² by another parameter, say γ. An equation with a parameter dependent on an independent variable does not qualify as linear; for example, y = β₀ + β₁(x)·x, where β₁(x) is a function of x.

Data transformations are often used to convert an equation into a linear form. For example, the Cobb–Douglas function, often used in economics, is nonlinear:

Y = A L^α K^{1−α} e^ε.

But it can be expressed in linear form by taking the natural logarithm of both sides:[8]

ln Y = ln A + α ln L + (1 − α) ln K + ε.

This assumption also covers specification issues: assuming that the proper functional form has been selected and there are no omitted variables.
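The log-linearization just described is easy to demonstrate numerically. The following is a minimal sketch on synthetic data with assumed parameter values (NumPy only; the variable names and the two-exponent form Y = A·L^α·K^β are choices made for the example, not from the source):

import numpy as np

rng = np.random.default_rng(1)
n = 200
L = rng.uniform(1, 10, n)
K = rng.uniform(1, 10, n)
# Synthetic Cobb-Douglas data with multiplicative error: Y = A * L^a * K^b * e^eps
Y = 2.0 * L**0.6 * K**0.3 * np.exp(rng.normal(0, 0.05, n))

# ln Y = ln A + a ln L + b ln K + eps  -- linear in the parameters (ln A, a, b),
# so ordinary least squares applies to the transformed equation.
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
lnA, a, b = coef   # estimates should land near ln 2, 0.6, 0.3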
One should be aware, however, that the parameters that minimize the residuals of the transformed equation do not necessarily minimize the residuals of the original equation.

A further assumption is strict exogeneity: for all n observations, the expectation, conditional on the regressors, of the error term is zero:[9]

E[ε_i ∣ X] = 0,

where x_i = [x_{i1} x_{i2} ⋯ x_{ik}]ᵀ is the data vector of regressors for the i-th observation, and consequently X = [x_1ᵀ x_2ᵀ ⋯ x_nᵀ]ᵀ is the data matrix or design matrix. Geometrically, this assumption implies that x_i and ε_i are orthogonal to each other, so that their inner product (i.e., their cross moment) is zero. This assumption is violated if the explanatory variables are measured with error, or are endogenous.[10] Endogeneity can be the result of simultaneity, where causality flows back and forth between both the dependent and independent variable. Instrumental variable techniques are commonly used to address this problem.

The sample data matrix X must also have full column rank. Otherwise XᵀX is not invertible and the OLS estimator cannot be computed. A violation of this assumption is perfect multicollinearity, i.e. some explanatory variables are linearly dependent. One scenario in which this will occur is called the "dummy variable trap", when a base dummy variable is not omitted, resulting in perfect correlation between the dummy variables and the constant term.[11] Multicollinearity (as long as it is not "perfect") can be present, resulting in a less efficient, but still unbiased, estimate. The estimates will be less precise and highly sensitive to particular sets of data.[12] Multicollinearity can be detected from the condition number or the variance inflation factor, among other tests.

Finally, the outer product of the error vector must be spherical:

Var[ε ∣ X] = E[εεᵀ ∣ X] = σ²I.

This implies the error term has uniform variance (homoscedasticity) and no serial correlation.[13] If this assumption is violated, OLS is still unbiased, but inefficient. The term "spherical errors" derives from the multivariate normal distribution: if Var[ε ∣ X] = σ²I in the multivariate normal density, then the equation f(ε) = c is the formula for a ball centered at μ with radius σ in n-dimensional space.[14]

Heteroskedasticity occurs when the amount of error is correlated with an independent variable. For example, in a regression on food expenditure and income, the error is correlated with income: low-income people generally spend a similar amount on food, while high-income people may spend a very large amount or as little as low-income people spend. Heteroskedasticity can also be caused by changes in measurement practices. For example, as statistical offices improve their data, measurement error decreases, so the error term declines over time.

This assumption is violated when there is autocorrelation. Autocorrelation can be visualized on a data plot: a given observation is more likely to lie above the fitted regression line if adjacent observations also lie above it.
Autocorrelation is common in time series data, where a data series may experience "inertia": if a dependent variable takes a while to fully absorb a shock, successive errors will be correlated. Spatial autocorrelation can also occur, since nearby geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification, such as choosing the wrong functional form. In these cases, correcting the specification is one possible way to deal with autocorrelation. When the spherical errors assumption is violated, the generalized least squares estimator can be shown to be BLUE.[6]
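To make the theorem concrete, the following is a Monte Carlo sketch comparing the OLS estimator β̂ = (XᵀX)⁻¹Xᵀy against another linear unbiased estimator built by adding a perturbation D with DX = 0, mirroring the proof above. All data are synthetic, and the particular construction of D is one convenient choice for the example, not the only one:

import numpy as np

rng = np.random.default_rng(0)
n, K = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
beta = np.array([1.0, 2.0])

P = np.linalg.inv(X.T @ X) @ X.T            # OLS: beta_hat = P @ y
residual_maker = np.eye(n) - X @ P          # annihilates X, so D @ X = 0
D = 0.05 * rng.normal(size=(K, n)) @ residual_maker
C = P + D                                   # another linear unbiased estimator

ols_draws, alt_draws = [], []
for _ in range(5000):
    y = X @ beta + rng.normal(0, 1, n)      # spherical (iid) errors
    ols_draws.append(P @ y)
    alt_draws.append(C @ y)

# Both estimators are unbiased, but OLS has the smaller sampling variance (BLUE):
assert np.var(np.array(ols_draws)[:, 1]) <= np.var(np.array(alt_draws)[:, 1])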
https://en.wikipedia.org/wiki/Best_linear_unbiased_estimator
Zipf's law (/zɪf/; German pronunciation: [tsɪpf]) is an empirical law stating that when a list of measured values is sorted in decreasing order, the value of the n-th entry is often approximately inversely proportional to n. The best known instance of Zipf's law applies to the frequency table of words in a text or corpus of natural language:

word frequency ∝ 1 / word rank.

It is usually found that the most common word occurs approximately twice as often as the next common one, three times as often as the third most common, and so on. For example, in the Brown Corpus of American English text, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852).[2] The law is often used in the following form, called the Zipf–Mandelbrot law:

frequency ∝ 1 / (rank + b)^a,

where a and b are fitted parameters, with a ≈ 1 and b ≈ 2.7.[1]

This law is named after the American linguist George Kingsley Zipf,[3][4][5] and is still an important concept in quantitative linguistics. It has been found to apply to many other types of data studied in the physical and social sciences. In mathematical statistics, the concept has been formalized as the Zipfian distribution: a family of related discrete probability distributions whose rank-frequency distribution is an inverse power law relation. They are related to Benford's law and the Pareto distribution. Some sets of time-dependent empirical data deviate somewhat from Zipf's law; such empirical distributions are said to be quasi-Zipfian.

In 1913, the German physicist Felix Auerbach observed an inverse proportionality between the population sizes of cities and their ranks when sorted by decreasing order of that variable.[6]

Zipf's law had been discovered before Zipf,[a] first by the French stenographer Jean-Baptiste Estoup in 1916,[8][7] and also by G. Dewey in 1923,[9] and by E. Condon in 1928.[10] The same relation for frequencies of words in natural language texts was observed by George Zipf in 1932,[4] but he never claimed to have originated it. In fact, Zipf did not like mathematics; in his 1932 publication,[11] he speaks with disdain about mathematical involvement in linguistics (a.o. ibidem, p. 21). The only mathematical expression Zipf used looks like a·b² = constant, which he "borrowed" from Alfred J. Lotka's 1926 publication.[12]

The same relationship was found to occur in many other contexts, and for other variables besides frequency.[1] For example, when corporations are ranked by decreasing size, their sizes are found to be inversely proportional to the rank.[13] The same relation is found for personal incomes (where it is called the Pareto principle),[14] the number of people watching the same TV channel,[15] notes in music,[16] cell transcriptomes,[17][18] and more.

In 1992, bioinformatician Wentian Li published a short paper[19] showing that Zipf's law emerges even in randomly generated texts. It included proof that the power-law form of Zipf's law is a byproduct of ordering words by rank.
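Li's observation is easy to reproduce in a few lines of Python. The sketch below generates "monkey-typed" random text and prints the word frequencies at a few ranks; the alphabet size and text length are arbitrary choices for the demonstration:

import random
from collections import Counter

random.seed(0)
alphabet = "abcde "                      # a small alphabet plus a word separator
text = "".join(random.choice(alphabet) for _ in range(200_000))
counts = Counter(text.split())           # "words" are runs between spaces

freqs = sorted(counts.values(), reverse=True)
# On a log-log plot, log(frequency) vs log(rank) falls close to a straight line,
# the macro-trend of Zipf's law, even though the text is pure noise.
for rank in (1, 2, 4, 8, 16):
    print(rank, freqs[rank - 1])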
Formally, the Zipf distribution on N elements assigns to the element of rank k (counting from 1) the probability

f(k; N) = \begin{cases} \frac{1}{H_N} \frac{1}{k}, & \text{if } 1 \leq k \leq N, \\ 0, & \text{if } k < 1 \text{ or } N < k, \end{cases}

where H_N is a normalization constant, the N-th harmonic number:

H_N = \sum_{k=1}^{N} \frac{1}{k}.

The distribution is sometimes generalized to an inverse power law with exponent s instead of 1:[20]

f(k; N, s) = \frac{1}{H_{N,s}} \frac{1}{k^s},

where H_{N,s} is a generalized harmonic number:

H_{N,s} = \sum_{k=1}^{N} \frac{1}{k^s}.

The generalized Zipf distribution can be extended to infinitely many items (N = ∞) only if the exponent s exceeds 1. In that case, the normalization constant H_{N,s} becomes Riemann's zeta function,

\zeta(s) = \sum_{k=1}^{\infty} \frac{1}{k^s} < \infty.

The infinite-item case is characterized by the zeta distribution and is called Lotka's law. If the exponent s is 1 or less, the normalization constant H_{N,s} diverges as N tends to infinity.

Empirically, a data set can be tested to see whether Zipf's law applies by checking the goodness of fit of an empirical distribution to the hypothesized power law distribution with a Kolmogorov–Smirnov test, and then comparing the (log) likelihood ratio of the power law distribution to alternative distributions like an exponential distribution or lognormal distribution.[21]

Zipf's law can be visualized by plotting the item frequency data on a log-log graph, with the axes being the logarithm of rank order and the logarithm of frequency. The data conform to Zipf's law with exponent s to the extent that the plot approximates a linear (more precisely, affine) function with slope −s. For exponent s = 1, one can also plot the reciprocal of the frequency (mean interword interval) against rank, or the reciprocal of rank against frequency, and compare the result with the line through the origin with slope 1.[3]

Although Zipf's law holds for most natural languages, and even certain artificial ones such as Esperanto[22] and Toki Pona,[23] the reason is still not well understood.[24] Recent reviews of generative processes for Zipf's law include Mitzenmacher, "A Brief History of Generative Models for Power Law and Lognormal Distributions",[25] and Simkin, "Re-inventing Willis".[26]

However, it may be partly explained by statistical analysis of randomly generated texts. Wentian Li has shown that in a document in which each character has been chosen randomly from a uniform distribution of all letters (plus a space character), the "words" with different lengths follow the macro-trend of Zipf's law (the more probable words are the shortest and have equal probability).[19] In 1959, Vitold Belevitch observed that if any of a large class of well-behaved statistical distributions (not only the normal distribution) is expressed in terms of rank and expanded into a Taylor series, the first-order truncation of the series results in Zipf's law.
Further, a second-order truncation of the Taylor series results in Mandelbrot's law.[27][28]

The principle of least effort is another possible explanation: Zipf himself proposed that neither speakers nor hearers using a given language want to work any harder than necessary to reach understanding, and the process that results in approximately equal distribution of effort leads to the observed Zipf distribution.[5][29]

A minimal explanation assumes that words are generated by monkeys typing randomly. If language is generated by a single monkey typing randomly, with fixed and nonzero probability of hitting each letter key or white space, then the words (letter strings separated by white spaces) produced by the monkey follow Zipf's law.[30]

Another possible cause for the Zipf distribution is a preferential attachment process, in which the value x of an item tends to grow at a rate proportional to x (intuitively, "the rich get richer" or "success breeds success"). Such a growth process results in the Yule–Simon distribution, which has been shown to fit word frequency versus rank in language[31] and population versus city rank[32] better than Zipf's law. It was originally derived to explain population versus rank in species by Yule, and applied to cities by Simon.

A similar explanation is based on atlas models, systems of exchangeable positive-valued diffusion processes with drift and variance parameters that depend only on the rank of the process. It has been shown mathematically that Zipf's law holds for atlas models that satisfy certain natural regularity conditions.[33][34]

A generalization of Zipf's law is the Zipf–Mandelbrot law, proposed by Benoit Mandelbrot, whose frequencies are

f(k; N, q, s) = \frac{1}{C} \frac{1}{(k + q)^{s}},

where the constant C is the Hurwitz zeta function evaluated at s. Zipfian distributions can be obtained from Pareto distributions by an exchange of variables.[20] The Zipf distribution is sometimes called the discrete Pareto distribution[35] because it is analogous to the continuous Pareto distribution in the same way that the discrete uniform distribution is analogous to the continuous uniform distribution.

The tail frequencies of the Yule–Simon distribution are approximately

f(k; ρ) ≈ [constant] / k^{ρ+1}

for any choice of ρ > 0.

In the parabolic fractal distribution, the logarithm of the frequency is a quadratic polynomial of the logarithm of the rank. This can markedly improve the fit over a simple power-law relationship.[36] Like fractal dimension, it is possible to calculate Zipf dimension, which is a useful parameter in the analysis of texts.[37]

It has been argued that Benford's law is a special bounded case of Zipf's law,[36] with the connection between these two laws being explained by their both originating from scale-invariant functional relations from statistical physics and critical phenomena.[38] The ratios of probabilities in Benford's law are not constant. The leading digits of data satisfying Zipf's law with s = 1 satisfy Benford's law.

Following Auerbach's 1913 observation, there has been substantial examination of Zipf's law for city sizes.[39] However, more recent empirical[40][41] and theoretical[42] studies have challenged the relevance of Zipf's law for cities.
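A small sketch of the finite Zipf and Zipf–Mandelbrot probability mass functions defined earlier, normalized by a generalized harmonic sum; the function name and test values are illustrative:

import numpy as np

def zipf_mandelbrot_pmf(N: int, s: float = 1.0, q: float = 0.0) -> np.ndarray:
    """Probabilities for ranks 1..N under 1/(k+q)^s, normalized to sum to 1."""
    ranks = np.arange(1, N + 1)
    weights = 1.0 / (ranks + q) ** s
    return weights / weights.sum()   # q = 0, s = 1 recovers the plain Zipf pmf

pmf = zipf_mandelbrot_pmf(N=10)
assert np.isclose(pmf.sum(), 1.0)
# The most common item is twice as likely as the second, three times the third:
print(pmf[0] / pmf[1], pmf[0] / pmf[2])   # 2.0, 3.0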
In many texts in human languages, word frequencies approximately follow a Zipf distribution with exponent s close to 1; that is, the most common word occurs about n times as often as the n-th most common one. The actual rank-frequency plot of a natural language text deviates to some extent from the ideal Zipf distribution, especially at the two ends of the range. The deviations may depend on the language, on the topic of the text, on the author, on whether the text was translated from another language, and on the spelling rules used. Some deviation is inevitable because of sampling error. At the low-frequency end, where the rank approaches N, the plot takes a staircase shape, because each word can occur only an integer number of times.

In some Romance languages, the frequencies of the dozen or so most frequent words deviate significantly from the ideal Zipf distribution, because those words include articles inflected for grammatical gender and number. In many East Asian languages, such as Chinese, Tibetan, and Vietnamese, each morpheme (word or word piece) consists of a single syllable; an English word is often translated into a compound of two such syllables. The rank-frequency table for those morphemes deviates significantly from the ideal Zipf law, at both ends of the range.

Even in English, the deviations from the ideal Zipf's law become more apparent as one examines large collections of texts. Analysis of a corpus of 30,000 English texts showed that only about 15% of the texts in it have a good fit to Zipf's law. Slight changes in the definition of Zipf's law can increase this percentage up to close to 50%.[45] In these cases, the observed frequency-rank relation can be modeled more accurately by separate Zipf–Mandelbrot distributions for different subsets or subtypes of words. This is the case for the frequency-rank plot of the first 10 million words of the English Wikipedia. In particular, the frequencies of the closed class of function words in English are better described with s lower than 1, while open-ended vocabulary growth with document size and corpus size requires s greater than 1 for convergence of the generalized harmonic series.[3]

When a text is encrypted in such a way that every occurrence of each distinct plaintext word is always mapped to the same encrypted word (as in the case of simple substitution ciphers, like the Caesar ciphers, or simple codebook ciphers), the frequency-rank distribution is not affected. On the other hand, if separate occurrences of the same word may be mapped to two or more different words (as happens with the Vigenère cipher), the Zipf distribution will typically have a flat part at the high-frequency end.

Zipf's law has been used for extraction of parallel fragments of texts out of comparable corpora.[46] Laurance Doyle and others have suggested the application of Zipf's law for detection of alien language in the search for extraterrestrial intelligence.[47][48] The frequency-rank word distribution is often characteristic of the author and changes little over time.
This feature has been used in the analysis of texts for authorship attribution.[49][50] The word-like sign groups of the 15th-century codex Voynich Manuscript have been found to satisfy Zipf's law, suggesting that the text is most likely not a hoax but rather written in an obscure language or cipher.[51][52] Recent analysis of whale vocalization samples shows they contain recurring phonemes whose distribution appears to closely obey Zipf's law.[53] While this is not proof that whale communication is a natural language, it is an intriguing discovery.
https://en.wikipedia.org/wiki/Zipf%27s_law
The base station subsystem (BSS) is the section of a traditional cellular telephone network which is responsible for handling traffic and signaling between a mobile phone and the network switching subsystem. The BSS carries out transcoding of speech channels, allocation of radio channels to mobile phones, paging, transmission and reception over the air interface, and many other tasks related to the radio network.

The base transceiver station, or BTS, contains the equipment for transmitting and receiving radio signals (transceivers), antennas, and equipment for encrypting and decrypting communications with the base station controller (BSC). Typically a BTS for anything other than a picocell will have several transceivers (TRXs) which allow it to serve several different frequencies and different sectors of the cell (in the case of sectorised base stations). A BTS is controlled by a parent BSC via the "base station control function" (BCF). The BCF is implemented as a discrete unit or even incorporated in a TRX in compact base stations. The BCF provides an operations and maintenance (O&M) connection to the network management system (NMS), and manages the operational states of each TRX, as well as software handling and alarm collection.

The functions of a BTS vary depending on the cellular technology used and the cellular telephone provider. There are vendors in which the BTS is a plain transceiver which receives information from the MS (mobile station) through the Um air interface and then converts it to a TDM (PCM) based interface, the Abis interface, and sends it towards the BSC. There are vendors which build their BTSs so that the information is preprocessed, target cell lists are generated, and even intracell handover (HO) can be fully handled. The advantage in this case is less load on the expensive Abis interface.

The BTSs are equipped with radios that are able to modulate layer 1 of the Um interface; for GSM 2G+ the modulation type is Gaussian minimum-shift keying (GMSK), while for EDGE-enabled networks it is GMSK and 8-PSK. GMSK is a kind of continuous-phase frequency-shift keying: the signal to be modulated onto the carrier is first smoothed with a Gaussian low-pass filter prior to being fed to a frequency modulator, which greatly reduces the interference to neighboring channels (adjacent-channel interference).

Antenna combiners are implemented to use the same antenna for several TRXs (carriers); the more TRXs are combined, the greater the combiner loss will be. Up to 8:1 combiners are found in micro and pico cells only. Frequency hopping is often used to increase overall BTS performance; this involves the rapid switching of voice traffic between TRXs in a sector. A hopping sequence is followed by the TRXs and handsets using the sector. Several hopping sequences are available, and the sequence in use for a particular cell is continually broadcast by that cell so that it is known to the handsets.

A TRX transmits and receives according to the GSM standards, which specify eight TDMA timeslots per radio frequency. A TRX may lose some of this capacity as some information is required to be broadcast to handsets in the area that the BTS serves. This information allows the handsets to identify the network and gain access to it. This signalling makes use of a channel known as the Broadcast Control Channel (BCCH).

By using directional antennas on a base station, each pointing in different directions, it is possible to sectorise the base station so that several different cells are served from the same location.
Typically these directional antennas have a beamwidth of 65 to 85 degrees. This increases the traffic capacity of the base station (each frequency can carry eight voice channels) whilst not greatly increasing the interference caused to neighboring cells (in any given direction, only a small number of frequencies are being broadcast). Typically two antennas are used per sector, at a spacing of ten or more wavelengths apart. This allows the operator to overcome the effects of fading due to physical phenomena such as multipath reception. Some amplification of the received signal as it leaves the antenna is often used to preserve the balance between uplink and downlink signal.[1]

The base station controller (BSC) provides, classically, the intelligence behind the BTSs. Typically a BSC has tens or even hundreds of BTSs under its control. The BSC handles allocation of radio channels, receives measurements from the mobile phones, and controls handovers from BTS to BTS (except in the case of an inter-BSC handover, in which case control is in part the responsibility of the anchor MSC). A key function of the BSC is to act as a concentrator, where many different low-capacity connections to BTSs (with relatively low utilisation) are reduced to a smaller number of connections towards the mobile switching center (MSC) (with a high level of utilisation). Overall, this means that networks are often structured to have many BSCs distributed into regions near their BTSs, which are then connected to large centralised MSC sites.

The BSC is undoubtedly the most robust element in the BSS, as it is not only a BTS controller but, for some vendors, a full switching center, as well as an SS7 node with connections to the MSC and the serving GPRS support node (SGSN) (when using GPRS). It also provides all the required data to the operation support subsystem (OSS) as well as to the performance measuring centers. A BSC is often based on a distributed computing architecture, with redundancy applied to critical functional units to ensure availability in the event of fault conditions. Redundancy often extends beyond the BSC equipment itself and is commonly used in the power supplies and in the transmission equipment providing the A-ter interface to the PCU.

The databases for all the sites, including information such as carrier frequencies, frequency hopping lists, power reduction levels, and receiving levels for cell border calculation, are stored in the BSC. This data is obtained directly from radio planning engineering, which involves modelling of the signal propagation as well as traffic projections.

The transcoder is responsible for transcoding the voice channel coding between the coding used in the mobile network and the coding used by the world's terrestrial circuit-switched network, the Public Switched Telephone Network. Specifically, GSM uses a regular pulse excited-long term prediction (RPE-LTP) coder for voice data between the mobile device and the BSS, but pulse-code modulation (A-law or μ-law, standardized in ITU G.711) upstream of the BSS. RPE-LTP coding results in a data rate for voice of 13 kbit/s, where standard PCM coding results in 64 kbit/s. Because of this change in data rate for the same voice call, the transcoder also has a buffering function, so that PCM 8-bit words can be recoded to construct GSM 20 ms traffic blocks. Although transcoding (compressing/decompressing) functionality is defined as a base station function by the relevant standards, there are several vendors which have implemented the solution outside of the BSC.
Some vendors have implemented it in a stand-alone rack using a proprietary interface. In Siemens' and Nokia's architecture, the transcoder is an identifiable separate sub-system which will normally be co-located with the MSC. In some of Ericsson's systems it is integrated into the MSC rather than the BSC. The reason for these designs is that if the compression of voice channels is done at the site of the MSC, the number of fixed transmission links between the BSS and MSC can be reduced, decreasing network infrastructure costs. This subsystem is also referred to as the transcoder and rate adaptation unit (TRAU). Some networks use 32 kbit/s ADPCM on the terrestrial side of the network instead of 64 kbit/s PCM, and the TRAU converts accordingly. When the traffic is not voice but data such as fax or email, the TRAU enables its rate adaptation unit function to give compatibility between the BSS and MSC data rates.

The packet control unit (PCU) is a late addition to the GSM standard. It performs some of the processing tasks of the BSC, but for packet data. The allocation of channels between voice and data is controlled by the base station, but once a channel is allocated to the PCU, the PCU takes full control over that channel. The PCU can be built into the base station, built into the BSC, or even, in some proposed architectures, located at the SGSN site. In most cases, the PCU is a separate node communicating extensively with the BSC on the radio side and the SGSN on the Gb side.
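The rate mismatch the TRAU buffers across can be checked with simple arithmetic. The sketch below works out the bits per 20 ms traffic block for the 13 kbit/s GSM full-rate stream and the 64 kbit/s PCM stream described above; the variable names are just for the example.

BLOCK_MS = 20  # GSM traffic blocks cover 20 ms of speech

gsm_bits_per_block = 13_000 * BLOCK_MS // 1000    # 13 kbit/s  -> 260 bits
pcm_samples_per_block = 8_000 * BLOCK_MS // 1000  # 8 kHz sampling -> 160 samples
pcm_bits_per_block = pcm_samples_per_block * 8    # 8-bit A-law/mu-law words -> 1280 bits

print(gsm_bits_per_block, pcm_bits_per_block)     # 260 vs 1280: roughly 5x compression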
https://en.wikipedia.org/wiki/Base_station_subsystem
NFC-WI is an NFC wired interface having two wires, SIGIN (signal-in) and SIGOUT (signal-out).[1] It is also called the S2C (SignalIn/SignalOut Connection) interface.[2] In 2006, ECMA standardized the NFC wired interface with specification ECMA-373 (ECMA, 2006).[3] It has three modes of operation: off, wired, and virtual mode. In off mode, there is no communication with the SE (secure element). In wired mode, the SE is visible to the internal NFC controller.[4] In virtual mode, the SE is visible to external RF readers. These modes are naturally mutually exclusive.
https://en.wikipedia.org/wiki/NFC-WI
Television encryption, often referred to as scrambling, is encryption used to control access to pay television services, usually cable, satellite, or Internet Protocol television (IPTV) services. Pay television exists to make revenue from subscribers, and sometimes those subscribers do not pay. The prevention of piracy on cable and satellite networks has been one of the main factors in the development of pay TV encryption systems.

The early cable-based pay TV networks used no security. This led to problems with people connecting to the network without paying. Consequently, some methods were developed to frustrate these self-connectors. The early pay TV systems for cable television were based on a number of simple measures. The most common of these was a channel-based filter that would effectively stop the channel being received by those who had not subscribed. These filters would be added or removed according to the subscription. As the number of television channels on these cable networks grew, the filter-based approach became increasingly impractical. Other techniques, such as adding an interfering signal to the video or audio, began to be used, as the simple filter solutions were easily bypassed. As the technology evolved, addressable set-top boxes became common, and more complex scrambling techniques were applied to signals, such as digital encryption of the audio, or video cut and rotate (where a line of video is cut at a particular point and the two parts are then reordered around this point).

Encryption was used to protect satellite-distributed feeds for cable television networks. Some of the systems used for cable feed distribution were expensive. As the DTH market grew, less secure systems began to be used. Many of these systems (such as Oak Orion) were variants of cable television scrambling systems that affected the synchronisation part of the video, inverted the video signal, or added an interfering frequency to the video. All of these analogue scrambling techniques were easily defeated.

In France, Canal+ launched a scrambled service in 1984, which was claimed to be an unbreakable system. Unfortunately for that company, an electronics magazine, "Radio Plans", published a design for a pirate decoder within a month of the channel launching. In the US, HBO was one of the first services to encrypt its signal, using the VideoCipher II system. In Europe, FilmNet scrambled its satellite service in September 1986, thus creating one of the biggest markets for pirate satellite TV decoders in the world, because the system that FilmNet used was easily hacked. One of FilmNet's main attractions was that it would screen hard-core porn films on various nights of the week. The VideoCipher II system proved somewhat more difficult to hack, but it eventually fell prey to the pirates.

Analog and digital pay television have several conditional access systems that are used for pay-per-view (PPV) and other subscriber-related services. Originally, analog-only cable television systems relied on set-top boxes to control access to programming, as television sets originally were not "cable-ready". Analog encryption was typically limited to premium channels such as HBO or channels with adult-oriented content. In those cases, various proprietary video synchronization suppression methods were used to control access to programming.
In some of these systems, the necessary sync signal was on a separate subcarrier, though sometimes the sync polarity was simply inverted. In the latter case, if used in conjunction with PAL, a SECAM L TV with a cable tuner could be used to partially descramble the signal, though only in black and white and with inverted luminance; a multi-standard TV which supports PAL L is therefore preferred, as it decodes the colour as well. This, however, leads to a part of the video signal being received as audio, so another TV, preferably one without auto-mute, should be used for audio decoding. Analog set-top boxes have largely been replaced by digital set-top boxes that can directly control access to programming as well as digitally decrypt signals.

Although several analog encryption types were tested in the early 1980s, VideoCipher II became the de facto analog encryption standard that C-band satellite pay TV channels used. Early adopters of VCII were HBO and Cinemax, encrypting full time beginning in January 1986; Showtime and The Movie Channel, beginning in May 1986; and CNN and Headline News, in July of that year. VideoCipher II was replaced as a standard by VCII+ in the early 1990s, and it in turn was replaced by VCII+ RS. A VCII-capable satellite receiver is required to decode VCII channels. VCII has largely been replaced by DigiCipher 2 in North America. Originally, VCII-based receivers had a separate modem technology for pay-per-view access known as Videopal. This technology became fully integrated in later-generation analog satellite television receivers.

DigiCipher 2 is General Instrument's proprietary video distribution system, based upon MPEG-2. A 4DTV satellite receiver is required to decode DigiCipher 2 channels. In North America, most digital cable programming is accessed with DigiCipher 2-based set-top boxes. DigiCipher 2 may also be referred to as DCII.

PowerVu is another popular digital encryption technology, used for non-residential purposes. PowerVu was developed by Scientific Atlanta. Other commercial digital encryption systems are Nagravision (by Kudelski), Viaccess (by France Telecom), and Wegener.

In the US, both the DirecTV and Dish Network direct-broadcast satellite systems use digital encryption standards for controlling access to programming. DirecTV uses VideoGuard, a system designed by NDS. DirecTV has been cracked in the past, which led to an abundance of cracked smartcards being available on the black market. However, a switch to a stronger form of smart card (the P4 card) wiped out DirecTV piracy soon after it was introduced. Since then, no public cracks have become available. Dish Network uses Nagravision (2 and 3) encryption.

The now-defunct VOOM and PrimeStar services both used General Instruments/Motorola equipment, and thus used a DigiCipher 2-based system very similar to that of earlier 4DTV large-dish satellite systems. In Canada, both the Bell Satellite TV and Shaw Direct DBS systems use digital encryption standards. Bell TV, like Dish Network, uses Nagravision for encryption. Shaw Direct, meanwhile, uses a DigiCipher 2-based system, due to their equipment also being sourced from General Instruments/Motorola.

Zenith Electronics developed an encryption scheme for their Phonevision system of the 1950s and 1960s.

Oak Orion was originally used for analog satellite television pay channel access in Canada. It was innovative for its time as it used digital audio. It has been completely replaced by digital encryption technologies.
Oak Orion was used bySky Channelin Europe between the years 1982 and 1987, andM-Netin South Africa from 1986 to 2018. Oak developed related encryption systems for cable TV and broadcast pay TV services such asONTV.
Leitch Viewguard is an analog encryption standard used primarily by broadcast TV networks inNorth America. It scrambles the signal by re-ordering the lines of video (line shuffle) but leaves the audio intact. Terrestrial broadcast CATV systems in Northern Canada used this conditional access system for many years. It is only occasionally used today on some satellite circuits because of its similarity toD2-MACandB-MAC. There was also a version that encrypted the audio using a digital audio stream in the horizontal blanking interval, like the VCII system. One US network used that for its affiliate feeds and would turn off the analog sub carriers on the satellite feed. B-MAC has not been used for DTH applications sincePrimeStarswitched to an all-digital delivery system in the mid-1990s.
VideoCrypt was an analogue cut and rotate scrambling system with a smartcard based conditional access system. It was used in the 1990s by several European satellite broadcasters, mainlyBritish Sky Broadcasting. It was also used by Sky New Zealand (Sky-NZ). One version of Videocrypt (VideoCrypt-S) had the capability of scrambling sound. A soft encryption option was also available, where the encrypted video could be transmitted with a fixed key and any VideoCrypt decoder could decode it.
RITC Discret 11 is a system based on horizontal video line delay and audio scrambling. The start point of each line of video waspseudorandomlydelayed by either 0 ns, 902 ns, or 1804 ns. First used in 1984 by French channelCanal Plus, it was widely compromised after the December 1984 issue of "Radio Plans" magazine printed decoder plans.[4]The BBC also used a later revision of the encryption system intended for use outside of France, Discret 12, in the late 1980s, as part of testing the use of off-air hours for encrypted specialist programming, with BMTV (British Medical Television) being broadcast on BBC Two.[5]This would ultimately lead to the launch of the scrambledBBC Selectservice in the early 1990s.[6]
Used by European channel FilmNet, the SATPAC interfered with the horizontal and vertical synchronisation signals and transmitted a signal containing synchronisation and authorisation data on a separate subcarrier. The system was first used in September 1986 and saw many upgrades as it was easily compromised by pirates. By September 1992, FilmNet changed to D2-MAC EuroCrypt.
Another system added an interferingsine waveof a frequency of circa 93.750 kHz to the video signal. This interfering signal was approximately six times the frequency of the horizontal refresh. It had optional sound scrambling using spectrum inversion, and was used in the UK by the BBC for its world service broadcasts and by the now-defunct UK movie channel "Premiere". A system used by German/Swiss channel Teleclub in the early 1990s employed various methods such as video inversion, modification of synchronisation signals, and a pseudo line delay effect.
EuroCrypt was a conditional access system using theD2-MACstandard. Developed mainly by France Telecom, the system was smartcard based. The encryption algorithm in the smartcard was based onDES. It was one of the first smart card based systems to be compromised. An older Nagravision system for scrambling analogue satellite and terrestrial television programs was used in the 1990s, for example by the German pay-TV broadcaster Premiere.
In this line-shuffling system, 32 lines of the PAL TV signal are temporarily stored in both the encoder and decoder and read out in permuted order under the control of apseudorandom number generator. A smartcard security microcontroller (in a key-shaped package) decrypts data that is transmitted during the blanking intervals of the TV signal and extracts the random seed value needed for controlling the random number generation. The system also permitted the audio signal to be scrambled by inverting its spectrum at 12.8 kHz using a frequency mixer.
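The line-shuffle principle is easy to demonstrate in Python. The sketch below is a simplified model: Python's random module stands in for the system's pseudorandom generator, and the seed plays the role of the value extracted by the smartcard; the real system permutes lines of an analogue signal under much tighter timing constraints.

import random

def shuffle_block(lines, seed):
    # Buffer a block of lines and read them out in pseudorandom order.
    rng = random.Random(seed)
    order = list(range(len(lines)))
    rng.shuffle(order)
    return [lines[i] for i in order], order

def unshuffle_block(scrambled, order):
    # A decoder with the same seed can rebuild 'order' and invert the permutation.
    out = [None] * len(scrambled)
    for pos, src in enumerate(order):
        out[src] = scrambled[pos]
    return out

block = ["line %d" % i for i in range(32)]   # 32 lines, as in the system described above
scrambled, order = shuffle_block(block, seed=1234)
assert unshuffle_block(scrambled, order) == block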
https://en.wikipedia.org/wiki/Television_encryption
Incomputer science,garbage collection(GC) is a form of automaticmemory management.[2]Thegarbage collectorattempts to reclaim memory that was allocated by the program, but is no longer referenced; such memory is calledgarbage. Garbage collection was invented by American computer scientistJohn McCarthyaround 1959 to simplify manual memory management inLisp.[3]
Garbage collection relieves the programmer from doingmanual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so.[2]Other, similar techniques includestack allocation,region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time, and affectperformanceas a result. Resources other than memory, such asnetwork sockets, databasehandles,windows,filedescriptors, and device descriptors, are not typically handled by garbage collection, but rather by othermethods(e.g.destructors); some of these methods also de-allocate memory.
Manyprogramming languagesrequire garbage collection, either as part of thelanguage specification(e.g.,RPL,Java,C#,D,[4]Go, and mostscripting languages) or effectively for practical implementation (e.g., formal languages likelambda calculus).[5]These are said to begarbage-collected languages. Other languages, such asCandC++, were designed for use with manual memory management, but have garbage-collected implementations available. Some languages, likeAda,Modula-3, andC++/CLI, allow both garbage collection andmanual memory managementto co-exist in the same application by using separateheapsfor collected and manually managed objects. Still others, likeD, are garbage-collected but allow the user to manually delete objects or even disable garbage collection entirely when speed is required.[6] Although many languages integrate GC into theircompilerandruntime system,post-hocGC systems also exist, such asAutomatic Reference Counting(ARC). Some of thesepost-hocGC systems do not require recompilation.[7]
GC frees the programmer from manually de-allocating memory. This helps avoid some kinds oferrors, such as dangling pointers, double frees, and certain kinds of memory leaks.[8]
GC uses computing resources to decide which memory to free. Therefore, the penalty for the convenience of not annotating object lifetime manually in the source code isoverhead, which can impair program performance.[11]A peer-reviewed paper from 2005 concluded that GC needs five times the memory to compensate for this overhead and to perform as fast as the same program using idealized explicit memory management. However, the comparison is made to a program generated by inserting deallocation calls using anoracle, implemented by collecting traces from programs run under aprofiler, and the resulting program is only correct for one particular execution.[12]Interaction withmemory hierarchyeffects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing. The impact on performance was given by Apple as a reason for not adopting garbage collection iniOS, despite it being the most desired feature.[13]
The moment when the garbage is actually collected can be unpredictable, resulting in stalls (pauses to shift/free memory) scattered throughout asession. Unpredictable stalls can be unacceptable inreal-time environments, intransaction processing, or in interactive programs. Incremental, concurrent, and real-time garbage collectors address these problems, with varying trade-offs.
Tracing garbage collectionis the most common type of garbage collection, so much so that "garbage collection" often refers to tracing garbage collection, rather than other methods such asreference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects arereachableby a chain of references from certain root objects, and considering the rest as garbage and collecting them. However, there are a large number of algorithms used in implementation, with widely varying complexity and performance characteristics.
Reference counting garbage collection is where each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created and decremented when a reference is destroyed. When the count reaches zero, the object's memory is reclaimed.[14] As with manual memory management, and unlike tracing garbage collection, reference counting guarantees that objects are destroyed as soon as their last reference is destroyed, and usually only accesses memory which is either inCPU caches, in objects to be freed, or directly pointed to by those, and thus tends to not have significant negative side effects on CPU cache andvirtual memoryoperation. There are a number of disadvantages to reference counting; these can generally be solved or mitigated by more sophisticated algorithms. In particular, naive reference counting cannot reclaim reference cycles, since the objects in a cycle keep each other's counts above zero even when the cycle as a whole is unreachable; a tracing collector has no such difficulty, as the example below illustrates.
Escape analysisis a compile-time technique that can convertheap allocationstostack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to "escape" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and associated memory management costs.[21]
Generally speaking,higher-level programming languagesare more likely to have garbage collection as a standard feature. In some languages lacking built-in garbage collection, it can be added through a library, as with theBoehm garbage collectorfor C and C++. Mostfunctional programming languages, such asML,Haskell, andAPL, have garbage collection built in.Lispis especially notable as both the firstfunctional programming languageand the first language to introduce garbage collection.[22] Other dynamic languages, such asRubyandJulia(but notPerl5 orPHPbefore version 5.3,[23]which both use reference counting),JavaScriptandECMAScriptalso tend to use GC.Object-oriented programminglanguages such asSmalltalk,ooRexx,RPLandJavausually provide integrated garbage collection. Notable exceptions areC++andDelphi, which havedestructors. BASICandLogohave often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details.
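A toy mark-and-sweep collector makes the contrast with reference counting concrete. The sketch below is illustrative only (the class and function names are ours): everything reachable from the roots is marked, and whatever remains unmarked is swept. The two-object cycle is collected even though each object holds a reference to the other, which pure reference counting could never free.

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []          # outgoing references to other objects
        self.marked = False

def mark(obj):
    # Recursively mark everything reachable from 'obj'.
    if not obj.marked:
        obj.marked = True
        for child in obj.refs:
            mark(child)

def collect(roots, heap):
    for o in heap:
        o.marked = False
    for r in roots:             # trace phase: mark from the roots
        mark(r)
    live = [o for o in heap if o.marked]
    garbage = [o for o in heap if not o.marked]
    return live, garbage

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)
b.refs.append(a)                # a reference cycle
live, garbage = collect(roots=[c], heap=[a, b, c])
print([o.name for o in garbage])   # ['a', 'b']: the unreachable cycle is reclaimed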
On theAltair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection.[24]Similarly, theApplesoft BASICinterpreter's garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting inO(n2){\displaystyle O(n^{2})}performance[25]and pauses anywhere from a few seconds to a few minutes.[26]A replacement garbage collector for Applesoft BASIC byRandy Wiggintonidentifies a group of strings in every pass over the heap, reducing collection time dramatically.[27]BASIC.SYSTEM, released withProDOSin 1983, provides a windowing garbage collector for BASIC that is many times faster.[28]
WhileObjective-Ctraditionally had no garbage collection, with the release ofOS X 10.5in 2007Appleintroduced garbage collection forObjective-C2.0, using an in-house developed runtime collector.[29]However, with the 2012 release ofOS X 10.8, garbage collection was deprecated in favor ofLLVM'sautomatic reference counter(ARC) that was introduced withOS X 10.7.[30]Furthermore, since May 2015 Apple even forbade the use of garbage collection for new OS X applications in theApp Store.[31][32]ForiOS, garbage collection has never been introduced due to problems in application responsivity and performance;[13][33]instead, iOS uses ARC.[34][35]
Garbage collection is rarely used onembeddedor real-time systems because of the usual need for very tight control over the use of limited resources. However, garbage collectors compatible with many limited environments have been developed.[36]The Microsoft.NET Micro Framework, .NET nanoFramework[37]andJava Platform, Micro Editionare embedded software platforms that, like their larger cousins, include garbage collection. Garbage collectors available in theJavaOpenJDKvirtual machine (JVM) include several serial, parallel, and concurrent collectors, such as the G1 collector.
Compile-time garbage collection is a form ofstatic analysisallowing memory to be reused and reclaimed based on invariants known during compilation. This form of garbage collection has been studied in theMercury programming language,[39]and it saw greater usage with the introduction ofLLVM'sautomatic reference counter(ARC) into Apple's ecosystem (iOS and OS X) in 2011.[34][35][31]
Incremental, concurrent, and real-time garbage collectors have been developed, for example byHenry Bakerand byHenry Lieberman.[40][41][42] In Baker's algorithm, the allocation is done in either half of a single region of memory. When it becomes half full, a garbage collection is performed which moves the live objects into the other half and the remaining objects are implicitly deallocated. The running program (the 'mutator') has to check that any object it references is in the correct half, and if not move it across, while a background task is finding all of the objects.[43]
Generational garbage collectionschemes are based on the empirical observation that most objects die young. In generational garbage collection, two or more allocation regions (generations) are maintained and kept separate based on the objects' ages. New objects are created in the "young" generation that is regularly collected, and when a generation is full, the objects that are still referenced from older regions are copied into the next oldest generation. Occasionally a full scan is performed. Somehigh-level language computer architecturesinclude hardware support for real-time garbage collection.
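Baker's scheme belongs to the family of copying collectors, whose stop-the-world core (often attributed to Cheney) is short enough to sketch. The Python below is a simplified model, not Baker's incremental algorithm itself: live cells are evacuated from one half-space to the other, a forwarding pointer is left behind in each moved cell, and anything never copied is implicitly freed.

class Cell:
    def __init__(self, value, refs=()):
        self.value = value
        self.refs = list(refs)
        self.forward = None              # forwarding pointer once evacuated

def evacuate(roots):
    to_space, worklist = [], []

    def copy(cell):
        if cell.forward is None:         # not yet moved: clone it and remember where
            clone = Cell(cell.value, cell.refs)
            cell.forward = clone
            to_space.append(clone)
            worklist.append(clone)
        return cell.forward

    new_roots = [copy(r) for r in roots]
    while worklist:                      # scan phase: redirect references to the clones
        cell = worklist.pop()
        cell.refs = [copy(c) for c in cell.refs]
    return new_roots, to_space

a = Cell("a"); b = Cell("b", refs=[a]); a.refs.append(b)   # a live cycle
dead = Cell("unreferenced")
roots, new_heap = evacuate([b])
print(len(new_heap))   # 2: only reachable cells were copied; 'dead' stays behind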
Most implementations of real-time garbage collectors usetracing.[citation needed]Such real-time garbage collectors meethard real-timeconstraints when used with a real-time operating system.[44]
https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
Thecalculus of variations(orvariational calculus) is a field ofmathematical analysisthat uses variations, which are small changes infunctionsandfunctionals, to find maxima and minima of functionals:mappingsfrom a set offunctionsto thereal numbers.[a]Functionals are often expressed asdefinite integralsinvolving functions and theirderivatives. Functions that maximize or minimize functionals may be found using theEuler–Lagrange equationof the calculus of variations.
A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is astraight linebetween the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known asgeodesics. A related problem is posed byFermat's principle: light follows the path of shortestoptical lengthconnecting two points, which depends upon the material of the medium. One corresponding concept inmechanicsis theprinciple of least/stationary action. Many important problems involve functions of several variables. Solutions ofboundary value problemsfor theLaplace equationsatisfy theDirichlet's principle.Plateau's problemrequires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivialtopology.
The calculus of variations began with the work ofIsaac Newton, such as withNewton's minimal resistance problem, which he formulated and solved in 1685, and later published in hisPrincipiain 1687,[2]which was the first problem in the field to be formulated and correctly solved,[2]and was also one of the most difficult problems tackled by variational methods prior to the twentieth century.[3][4][5]This problem was followed by thebrachistochrone curveproblem raised byJohann Bernoulli(1696),[6]which was similar to one raised byGalileo Galileiin 1638, though Galileo did not solve the problem explicitly nor use methods based on calculus.[3]Bernoulli solved the problem using the principle of least time rather than the calculus of variations, whereas Newton did use variational methods to solve it in 1697; as a result, his work on the two problems pioneered the field.[4]The problem would immediately occupy the attention ofJacob Bernoulliand theMarquis de l'Hôpital, butLeonhard Eulerfirst elaborated the subject, beginning in 1733.Joseph-Louis Lagrangewas influenced by Euler's work to contribute greatly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject thecalculus of variationsin his 1756 lectureElementa Calculi Variationum.[7][8][b]
Adrien-Marie Legendre(1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima.Isaac NewtonandGottfried Leibnizalso gave some early attention to the subject.[9]Among the contributors to this discrimination wereVincenzo Brunacci(1810),Carl Friedrich Gauss(1829),Siméon Poisson(1831),Mikhail Ostrogradsky(1834), andCarl Jacobi(1837). An important general work is that ofPierre Frédéric Sarrus(1842) which was condensed and improved byAugustin-Louis Cauchy(1844).
Other valuable treatises and memoirs have been written byStrauch[which?](1849),John Hewitt Jellett(1850),Otto Hesse(1857),Alfred Clebsch(1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that ofKarl Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The20thand the23rdHilbert problems, published in 1900, encouraged further development.[9]
In the 20th centuryDavid Hilbert,Oskar Bolza,Gilbert Ames Bliss,Emmy Noether,Leonida Tonelli,Henri LebesgueandJacques Hadamardamong others made significant contributions.[9]Marston Morseapplied calculus of variations in what is now calledMorse theory.[10]Lev Pontryagin,Ralph Rockafellarand F. H. Clarke developed new mathematical tools for the calculus of variations inoptimal control theory.[10]Thedynamic programmingofRichard Bellmanis an alternative to the calculus of variations.[11][12][13][c]
The calculus of variations is concerned with the maxima or minima (collectively calledextrema) of functionals. A functional mapsfunctionstoscalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elementsy{\displaystyle y}of a givenfunction spacedefined over a givendomain. A functionalJ[y]{\displaystyle J[y]}is said to have an extremum at the functionf{\displaystyle f}ifΔJ=J[y]−J[f]{\displaystyle \Delta J=J[y]-J[f]}has the samesignfor ally{\displaystyle y}in an arbitrarily small neighborhood off.{\displaystyle f.}[d]The functionf{\displaystyle f}is called anextremalfunction or extremal.[e]The extremumJ[f]{\displaystyle J[f]}is called a local maximum ifΔJ≤0{\displaystyle \Delta J\leq 0}everywhere in an arbitrarily small neighborhood off,{\displaystyle f,}and a local minimum ifΔJ≥0{\displaystyle \Delta J\geq 0}there. For a function space of continuous functions, extrema of corresponding functionals are calledstrong extremaorweak extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not.[15]
Both strong and weak extrema of functionals are for a space of continuous functions but strong extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but theconversemay not hold. Finding strong extrema is more difficult than finding weak extrema.[16]An example of anecessary conditionthat is used for finding weak extrema is theEuler–Lagrange equation.[17][f]
Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which thefunctional derivativeis equal to zero.
This leads to solving the associatedEuler–Lagrange equation.[g] Consider the functionalJ[y]=∫x1x2L(x,y(x),y′(x))dx.{\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L\left(x,y(x),y'(x)\right)\,dx\,.}where the integrandL{\displaystyle L}is a twice continuously differentiable function of its three arguments, andx1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}are fixed endpoints. If the functionalJ[y]{\displaystyle J[y]}attains alocal minimumatf,{\displaystyle f,}andη(x){\displaystyle \eta (x)}is an arbitrary function that has at least one derivative and vanishes at the endpointsx1{\displaystyle x_{1}}andx2,{\displaystyle x_{2},}then for any numberε{\displaystyle \varepsilon }close to 0,J[f]≤J[f+εη].{\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The termεη{\displaystyle \varepsilon \eta }is called thevariationof the functionf{\displaystyle f}and is denoted byδf.{\displaystyle \delta f.}[1][h] Substitutingf+εη{\displaystyle f+\varepsilon \eta }fory{\displaystyle y}in the functionalJ[y],{\displaystyle J[y],}the result is a function ofε,{\displaystyle \varepsilon ,} Φ(ε)=J[f+εη].{\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.}Since the functionalJ[y]{\displaystyle J[y]}has a minimum fory=f{\displaystyle y=f}the functionΦ(ε){\displaystyle \Phi (\varepsilon )}has a minimum atε=0{\displaystyle \varepsilon =0}and thus,[i]Φ′(0)≡dΦdε|ε=0=∫x1x2dLdε|ε=0dx=0.{\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking thetotal derivativeofL[x,y,y′],{\displaystyle L\left[x,y,y'\right],}wherey=f+εη{\displaystyle y=f+\varepsilon \eta }andy′=f′+εη′{\displaystyle y'=f'+\varepsilon \eta '}are considered as functions ofε{\displaystyle \varepsilon }rather thanx,{\displaystyle x,}yieldsdLdε=∂L∂ydydε+∂L∂y′dy′dε{\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}}and becausedydε=η{\displaystyle {\frac {dy}{d\varepsilon }}=\eta }anddy′dε=η′,{\displaystyle {\frac {dy'}{d\varepsilon }}=\eta ',}dLdε=∂L∂yη+∂L∂y′η′.{\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '.} Therefore,∫x1x2dLdε|ε=0dx=∫x1x2(∂L∂fη+∂L∂f′η′)dx=∫x1x2∂L∂fηdx+∂L∂f′η|x1x2−∫x1x2ηddx∂L∂f′dx=∫x1x2(∂L∂fη−ηddx∂L∂f′)dx{\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}{\frac {\partial L}{\partial f}}\eta \,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}-\int _{x_{1}}^{x_{2}}\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx\\\end{aligned}}}whereL[x,y,y′]→L[x,f,f′]{\displaystyle L\left[x,y,y'\right]\to L\left[x,f,f'\right]}whenε=0{\displaystyle \varepsilon =0}and we have usedintegration by partson the second term. The second term on the second line vanishes becauseη=0{\displaystyle \eta =0}atx1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}by definition.
Also, as previously mentioned the left side of the equation is zero so that∫x1x2η(x)(∂L∂f−ddx∂L∂f′)dx=0.{\displaystyle \int _{x_{1}}^{x_{2}}\eta (x)\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to thefundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.∂L∂f−ddx∂L∂f′=0{\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0}which is called theEuler–Lagrange equation. The left hand side of this equation is called thefunctional derivativeofJ[f]{\displaystyle J[f]}and is denotedδJ{\displaystyle \delta J}orδf(x).{\displaystyle \delta f(x).} In general this gives a second-orderordinary differential equationwhich can be solved to obtain the extremal functionf(x).{\displaystyle f(x).}The Euler–Lagrange equation is anecessary, but notsufficient, condition for an extremumJ[f].{\displaystyle J[f].}A sufficient condition for a minimum is given in the sectionVariations and sufficient condition for a minimum. In order to illustrate this process, consider the problem of finding the extremal functiony=f(x),{\displaystyle y=f(x),}which is the shortest curve that connects two points(x1,y1){\displaystyle \left(x_{1},y_{1}\right)}and(x2,y2).{\displaystyle \left(x_{2},y_{2}\right).}Thearc lengthof the curve is given byA[y]=∫x1x21+[y′(x)]2dx,{\displaystyle A[y]=\int _{x_{1}}^{x_{2}}{\sqrt {1+[y'(x)]^{2}}}\,dx\,,}withy′(x)=dydx,y1=f(x1),y2=f(x2).{\displaystyle y'(x)={\frac {dy}{dx}}\,,\ \ y_{1}=f(x_{1})\,,\ \ y_{2}=f(x_{2})\,.}Note that assumingyis a function ofxloses generality; ideally both should be a function of some other parameter. This approach is good solely for instructive purposes. The Euler–Lagrange equation will now be used to find the extremal functionf(x){\displaystyle f(x)}that minimizes the functionalA[y].{\displaystyle A[y].}∂L∂f−ddx∂L∂f′=0{\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0}withL=1+[f′(x)]2.{\displaystyle L={\sqrt {1+[f'(x)]^{2}}}\,.} Sincef{\displaystyle f}does not appear explicitly inL,{\displaystyle L,}the first term in the Euler–Lagrange equation vanishes for allf(x){\displaystyle f(x)}and thus,ddx∂L∂f′=0.{\displaystyle {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\,.}Substituting forL{\displaystyle L}and taking the derivative,ddxf′(x)1+[f′(x)]2=0.{\displaystyle {\frac {d}{dx}}\ {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}\ =0\,.} Thusf′(x)1+[f′(x)]2=c,{\displaystyle {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}=c\,,}for some constantc.{\displaystyle c.}Then[f′(x)]21+[f′(x)]2=c2,{\displaystyle {\frac {[f'(x)]^{2}}{1+[f'(x)]^{2}}}=c^{2}\,,}where0≤c2<1.{\displaystyle 0\leq c^{2}<1.}Solving, we get[f′(x)]2=c21−c2{\displaystyle [f'(x)]^{2}={\frac {c^{2}}{1-c^{2}}}}which implies thatf′(x)=m{\displaystyle f'(x)=m}is a constant and therefore that the shortest curve that connects two points(x1,y1){\displaystyle \left(x_{1},y_{1}\right)}and(x2,y2){\displaystyle \left(x_{2},y_{2}\right)}isf(x)=mx+bwithm=y2−y1x2−x1andb=x2y1−x1y2x2−x1{\displaystyle f(x)=mx+b\qquad {\text{with}}\ \ m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}\quad {\text{and}}\quad b={\frac {x_{2}y_{1}-x_{1}y_{2}}{x_{2}-x_{1}}}}and we have thus found the extremal functionf(x){\displaystyle f(x)}that minimizes the functionalA[y]{\displaystyle A[y]}so thatA[f]{\displaystyle A[f]}is a minimum. 
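This computation is easy to verify symbolically. The sketch below, which assumes the sympy library is available and uses our own variable names, forms the Euler–Lagrange expression for the arc-length integrand and simplifies it:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
fp = sp.diff(f(x), x)
L = sp.sqrt(1 + fp**2)                      # the arc-length Lagrangian

# Euler-Lagrange expression: dL/df - d/dx (dL/df')
EL = sp.diff(L, f(x)) - sp.diff(sp.diff(L, fp), x)
print(sp.simplify(EL))
# -f''(x)/(f'(x)**2 + 1)**(3/2): it vanishes exactly when f''(x) = 0,
# recovering the straight-line extremals found above.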
The equation for a straight line isy=mx+b.{\displaystyle y=mx+b.}In other words, the shortest distance between two points is a straight line.[j] In physics problems it may be the case that∂L∂x=0,{\displaystyle {\frac {\partial L}{\partial x}}=0,}meaning the integrand is a function off(x){\displaystyle f(x)}andf′(x){\displaystyle f'(x)}butx{\displaystyle x}does not appear separately. In that case, the Euler–Lagrange equation can be simplified to theBeltrami identity[20]L−f′∂L∂f′=C,{\displaystyle L-f'{\frac {\partial L}{\partial f'}}=C\,,}whereC{\displaystyle C}is a constant. The left hand side is theLegendre transformationofL{\displaystyle L}with respect tof′(x).{\displaystyle f'(x).} The intuition behind this result is that, if the variablex{\displaystyle x}is actually time, then the statement∂L∂x=0{\displaystyle {\frac {\partial L}{\partial x}}=0}implies that the Lagrangian is time-independent. ByNoether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity. IfS{\displaystyle S}depends on higher-derivatives ofy(x),{\displaystyle y(x),}that is, ifS=∫abf(x,y(x),y′(x),…,y(n)(x))dx,{\displaystyle S=\int _{a}^{b}f(x,y(x),y'(x),\dots ,y^{(n)}(x))dx,}theny{\displaystyle y}must satisfy the Euler–Poissonequation,[21]∂f∂y−ddx(∂f∂y′)+⋯+(−1)ndndxn[∂f∂y(n)]=0.{\displaystyle {\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)+\dots +(-1)^{n}{\frac {d^{n}}{dx^{n}}}\left[{\frac {\partial f}{\partial y^{(n)}}}\right]=0.} The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integralJ{\displaystyle J}requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as aweak formof the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. IfL{\displaystyle L}has continuous first and second derivatives with respect to all of its arguments, and if∂2L∂f′2≠0,{\displaystyle {\frac {\partial ^{2}L}{\partial f'^{2}}}\neq 0,}thenf{\displaystyle f}has two continuous derivatives, and it satisfies the Euler–Lagrange equation. Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and a positive thrice differentiable Lagrangian the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior. HoweverLavrentievin 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance the following problem, presented by Manià in 1934:[22]L[x]=∫01(x3−t)2x′6,{\displaystyle L[x]=\int _{0}^{1}(x^{3}-t)^{2}x'^{6},}A={x∈W1,1(0,1):x(0)=0,x(1)=1}.{\displaystyle {A}=\{x\in W^{1,1}(0,1):x(0)=0,\ x(1)=1\}.} Clearly,x(t)=t13{\displaystyle x(t)=t^{\frac {1}{3}}}minimizes the functional, but we find any functionx∈W1,∞{\displaystyle x\in W^{1,\infty }}gives a value bounded away from the infimum. 
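The Lavrentiev gap in Manià's example can be seen numerically. The sketch below (numpy assumed; the quadrature grid and the cutoff eps are arbitrary illustrative choices) evaluates the functional by the midpoint rule for the minimizer, for the straight line, and for a Lipschitz truncation of the minimizer:

import numpy as np

t = np.linspace(0.0, 1.0, 100001)
tm = 0.5 * (t[:-1] + t[1:])                  # midpoints, all strictly positive
dt = t[1] - t[0]

def mania(x, xp):
    # L[x] = integral of (x^3 - t)^2 * x'^6 over (0, 1), midpoint rule
    return float(np.sum((x(tm)**3 - tm)**2 * xp(tm)**6 * dt))

print(mania(lambda s: s**(1/3), lambda s: s**(-2/3) / 3))   # ~0: the W^{1,1} minimizer
print(mania(lambda s: s, lambda s: np.ones_like(s)))        # ~8/105 ~ 0.076

eps = 1e-3   # replace t^{1/3} by a straight segment on [0, eps] to make it Lipschitz
x_eps  = lambda s: np.where(s < eps, s * eps**(-2/3), s**(1/3))
xp_eps = lambda s: np.where(s < eps, eps**(-2/3), s**(-2/3) / 3)
print(mania(x_eps, xp_eps))   # ~1/(3*eps): large, and it grows as eps shrinks

The last value illustrates the repulsion property mentioned below: Lipschitz approximations of the true minimizer do not approach the infimum but are driven away from it.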
Examples (in one-dimension) are traditionally manifested acrossW1,1{\displaystyle W^{1,1}}andW1,∞,{\displaystyle W^{1,\infty },}but Ball and Mizel[23]procured the first functional that displayed Lavrentiev's Phenomenon acrossW1,p{\displaystyle W^{1,p}}andW1,q{\displaystyle W^{1,q}}for1≤p<q<∞.{\displaystyle 1\leq p<q<\infty .}There are several results that give criteria under which the phenomenon does not occur, for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D), but results are often particular, and applicable to a small class of functionals. Connected with the Lavrentiev Phenomenon is the repulsion property: any functional displaying Lavrentiev's Phenomenon will display the weak repulsion property.[24] For example, ifφ(x,y){\displaystyle \varphi (x,y)}denotes the displacement of a membrane above the domainD{\displaystyle D}in thex,y{\displaystyle x,y}plane, then its potential energy is proportional to its surface area:U[φ]=∬D1+∇φ⋅∇φdxdy.{\displaystyle U[\varphi ]=\iint _{D}{\sqrt {1+\nabla \varphi \cdot \nabla \varphi }}\,dx\,dy.}Plateau's problemconsists of finding a function that minimizes the surface area while assuming prescribed values on the boundary ofD{\displaystyle D}; the solutions are calledminimal surfaces. The Euler–Lagrange equation for this problem is nonlinear:φxx(1+φy2)+φyy(1+φx2)−2φxφyφxy=0.{\displaystyle \varphi _{xx}(1+\varphi _{y}^{2})+\varphi _{yy}(1+\varphi _{x}^{2})-2\varphi _{x}\varphi _{y}\varphi _{xy}=0.}See Courant (1950) for details. It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated byV[φ]=12∬D∇φ⋅∇φdxdy.{\displaystyle V[\varphi ]={\frac {1}{2}}\iint _{D}\nabla \varphi \cdot \nabla \varphi \,dx\,dy.}The functionalV{\displaystyle V}is to be minimized among all trial functionsφ{\displaystyle \varphi }that assume prescribed values on the boundary ofD.{\displaystyle D.}Ifu{\displaystyle u}is the minimizing function andv{\displaystyle v}is an arbitrary smooth function that vanishes on the boundary ofD,{\displaystyle D,}then the first variation ofV[u+εv]{\displaystyle V[u+\varepsilon v]}must vanish:ddεV[u+εv]|ε=0=∬D∇u⋅∇vdxdy=0.{\displaystyle \left.{\frac {d}{d\varepsilon }}V[u+\varepsilon v]\right|_{\varepsilon =0}=\iint _{D}\nabla u\cdot \nabla v\,dx\,dy=0.}Provided that u has two derivatives, we may apply the divergence theorem to obtain∬D∇⋅(v∇u)dxdy=∬D∇u⋅∇v+v∇⋅∇udxdy=∫Cv∂u∂nds,{\displaystyle \iint _{D}\nabla \cdot (v\nabla u)\,dx\,dy=\iint _{D}\nabla u\cdot \nabla v+v\nabla \cdot \nabla u\,dx\,dy=\int _{C}v{\frac {\partial u}{\partial n}}\,ds,}whereC{\displaystyle C}is the boundary ofD,{\displaystyle D,}s{\displaystyle s}is arclength alongC{\displaystyle C}and∂u/∂n{\displaystyle \partial u/\partial n}is the normal derivative ofu{\displaystyle u}onC.{\displaystyle C.}Sincev{\displaystyle v}vanishes onC{\displaystyle C}and the first variation vanishes, the result is∬Dv∇⋅∇udxdy=0{\displaystyle \iint _{D}v\nabla \cdot \nabla u\,dx\,dy=0}for all smooth functionsv{\displaystyle v}that vanish on the boundary ofD.{\displaystyle D.}The proof for the case of one dimensional integrals may be adapted to this case to show that∇⋅∇u=0{\displaystyle \nabla \cdot \nabla u=0}inD.{\displaystyle D.} The difficulty with this reasoning is the assumption that the minimizing functionu{\displaystyle u}must have two derivatives.
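Setting the regularity question aside, the small-displacement problem is straightforward to treat numerically: discretize the energy V on a grid and lower it by relaxation sweeps, whose fixed point satisfies the discrete form of the equation derived above. A minimal sketch (numpy assumed; grid size and boundary data are arbitrary illustrative choices):

import numpy as np

N = 65
u = np.zeros((N, N))
u[0, :] = 1.0                    # prescribed boundary values: 1 on one edge, 0 elsewhere

for _ in range(5000):            # each Jacobi sweep lowers the discrete Dirichlet energy
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])

# Residual of the discrete Laplace equation in the interior; it shrinks toward zero
res = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
       - 4.0 * u[1:-1, 1:-1])
print(np.abs(res).max())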
Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea theDirichlet principlein honor of his teacherPeter Gustav Lejeune Dirichlet. However Weierstrass gave an example of a variational problem with no solution: minimizeW[φ]=∫−11(xφ′)2dx{\displaystyle W[\varphi ]=\int _{-1}^{1}(x\varphi ')^{2}\,dx}among all functionsφ{\displaystyle \varphi }that satisfyφ(−1)=−1{\displaystyle \varphi (-1)=-1}andφ(1)=1.{\displaystyle \varphi (1)=1.}W{\displaystyle W}can be made arbitrarily small by choosing piecewise linear functions that make a transition between −1 and 1 in a small neighborhood of the origin. However, there is no function that makesW=0.{\displaystyle W=0.}[k]Eventually it was shown that Dirichlet's principle is valid, but it requires a sophisticated application of the regularity theory forelliptic partial differential equations; see Jost and Li–Jost (1998). A more general expression for the potential energy of a membrane isV[φ]=∬D[12∇φ⋅∇φ+f(x,y)φ]dxdy+∫C[12σ(s)φ2+g(s)φ]ds.{\displaystyle V[\varphi ]=\iint _{D}\left[{\frac {1}{2}}\nabla \varphi \cdot \nabla \varphi +f(x,y)\varphi \right]\,dx\,dy\,+\int _{C}\left[{\frac {1}{2}}\sigma (s)\varphi ^{2}+g(s)\varphi \right]\,ds.}This corresponds to an external force densityf(x,y){\displaystyle f(x,y)}inD,{\displaystyle D,}an external forceg(s){\displaystyle g(s)}on the boundaryC,{\displaystyle C,}and elastic forces with modulusσ(s){\displaystyle \sigma (s)}acting onC.{\displaystyle C.}The function that minimizes the potential energywith no restriction on its boundary valueswill be denoted byu.{\displaystyle u.}Provided thatf{\displaystyle f}andg{\displaystyle g}are continuous, regularity theory implies that the minimizing functionu{\displaystyle u}will have two derivatives. In taking the first variation, no boundary condition need be imposed on the incrementv.{\displaystyle v.}The first variation ofV[u+εv]{\displaystyle V[u+\varepsilon v]}is given by∬D[∇u⋅∇v+fv]dxdy+∫C[σuv+gv]ds=0.{\displaystyle \iint _{D}\left[\nabla u\cdot \nabla v+fv\right]\,dx\,dy+\int _{C}\left[\sigma uv+gv\right]\,ds=0.}If we apply the divergence theorem, the result is∬D[−v∇⋅∇u+vf]dxdy+∫Cv[∂u∂n+σu+g]ds=0.{\displaystyle \iint _{D}\left[-v\nabla \cdot \nabla u+vf\right]\,dx\,dy+\int _{C}v\left[{\frac {\partial u}{\partial n}}+\sigma u+g\right]\,ds=0.}If we first setv=0{\displaystyle v=0}onC,{\displaystyle C,}the boundary integral vanishes, and we conclude as before that−∇⋅∇u+f=0{\displaystyle -\nabla \cdot \nabla u+f=0}inD.{\displaystyle D.}Then if we allowv{\displaystyle v}to assume arbitrary boundary values, this implies thatu{\displaystyle u}must satisfy the boundary condition∂u∂n+σu+g=0,{\displaystyle {\frac {\partial u}{\partial n}}+\sigma u+g=0,}onC.{\displaystyle C.}This boundary condition is a consequence of the minimizing property ofu{\displaystyle u}: it is not imposed beforehand. Such conditions are callednatural boundary conditions. The preceding reasoning is not valid ifσ{\displaystyle \sigma }vanishes identically onC.{\displaystyle C.}In such a case, we could allow a trial functionφ≡c,{\displaystyle \varphi \equiv c,}wherec{\displaystyle c}is a constant. For such a trial function,V[c]=c[∬Dfdxdy+∫Cgds].{\displaystyle V[c]=c\left[\iint _{D}f\,dx\,dy+\int _{C}g\,ds\right].}By appropriate choice ofc,{\displaystyle c,}V{\displaystyle V}can assume any value unless the quantity inside the brackets vanishes. 
Therefore, the variational problem is meaningless unless∬Dfdxdy+∫Cgds=0.{\displaystyle \iint _{D}f\,dx\,dy+\int _{C}g\,ds=0.}This condition implies that net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added. Further details and examples are in Courant and Hilbert (1953). Both one-dimensional and multi-dimensionaleigenvalue problemscan be formulated as variational problems. The Sturm–Liouvilleeigenvalue probleminvolves a general quadratic formQ[y]=∫x1x2[p(x)y′(x)2+q(x)y(x)2]dx,{\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx,}wherey{\displaystyle y}is restricted to functions that satisfy the boundary conditionsy(x1)=0,y(x2)=0.{\displaystyle y(x_{1})=0,\quad y(x_{2})=0.}LetR{\displaystyle R}be a normalization integralR[y]=∫x1x2r(x)y(x)2dx.{\displaystyle R[y]=\int _{x_{1}}^{x_{2}}r(x)y(x)^{2}\,dx.}The functionsp(x){\displaystyle p(x)}andr(x){\displaystyle r(x)}are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratioQ/R{\displaystyle Q/R}among ally{\displaystyle y}satisfying the endpoint conditions, which is equivalent to minimizingQ[y]{\displaystyle Q[y]}under the constraint thatR[y]{\displaystyle R[y]}is constant. It is shown below that the Euler–Lagrange equation for the minimizingu{\displaystyle u}is−(pu′)′+qu−λru=0,{\displaystyle -(pu')'+qu-\lambda ru=0,}whereλ{\displaystyle \lambda }is the quotientλ=Q[u]R[u].{\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.}It can be shown (see Gelfand and Fomin 1963) that the minimizingu{\displaystyle u}has two derivatives and satisfies the Euler–Lagrange equation. The associatedλ{\displaystyle \lambda }will be denoted byλ1{\displaystyle \lambda _{1}}; it is the lowest eigenvalue for this equation and boundary conditions. The associated minimizing function will be denoted byu1(x).{\displaystyle u_{1}(x).}This variational characterization of eigenvalues leads to theRayleigh–Ritz method: choose an approximatingu{\displaystyle u}as a linear combination of basis functions (for example trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations. This method is often surprisingly accurate. The next smallest eigenvalue and eigenfunction can be obtained by minimizingQ{\displaystyle Q}under the additional constraint∫x1x2r(x)u1(x)y(x)dx=0.{\displaystyle \int _{x_{1}}^{x_{2}}r(x)u_{1}(x)y(x)\,dx=0.}This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem. The variational problem also applies to more general boundary conditions. Instead of requiring thaty{\displaystyle y}vanish at the endpoints, we may not impose any condition at the endpoints, and setQ[y]=∫x1x2[p(x)y′(x)2+q(x)y(x)2]dx+a1y(x1)2+a2y(x2)2,{\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx+a_{1}y(x_{1})^{2}+a_{2}y(x_{2})^{2},}wherea1{\displaystyle a_{1}}anda2{\displaystyle a_{2}}are arbitrary. 
If we sety=u+εv{\displaystyle y=u+\varepsilon v}, the first variation for the ratioQ/R{\displaystyle Q/R}isV1=2R[u](∫x1x2[p(x)u′(x)v′(x)+q(x)u(x)v(x)−λr(x)u(x)v(x)]dx+a1u(x1)v(x1)+a2u(x2)v(x2)),{\displaystyle V_{1}={\frac {2}{R[u]}}\left(\int _{x_{1}}^{x_{2}}\left[p(x)u'(x)v'(x)+q(x)u(x)v(x)-\lambda r(x)u(x)v(x)\right]\,dx+a_{1}u(x_{1})v(x_{1})+a_{2}u(x_{2})v(x_{2})\right),}where λ is given by the ratioQ[u]/R[u]{\displaystyle Q[u]/R[u]}as previously. After integration by parts,R[u]2V1=∫x1x2v(x)[−(pu′)′+qu−λru]dx+v(x1)[−p(x1)u′(x1)+a1u(x1)]+v(x2)[p(x2)u′(x2)+a2u(x2)].{\displaystyle {\frac {R[u]}{2}}V_{1}=\int _{x_{1}}^{x_{2}}v(x)\left[-(pu')'+qu-\lambda ru\right]\,dx+v(x_{1})[-p(x_{1})u'(x_{1})+a_{1}u(x_{1})]+v(x_{2})[p(x_{2})u'(x_{2})+a_{2}u(x_{2})].}If we first require thatv{\displaystyle v}vanish at the endpoints, the first variation will vanish for all suchv{\displaystyle v}only if−(pu′)′+qu−λru=0forx1<x<x2.{\displaystyle -(pu')'+qu-\lambda ru=0\quad {\hbox{for}}\quad x_{1}<x<x_{2}.}Ifu{\displaystyle u}satisfies this condition, then the first variation will vanish for arbitraryv{\displaystyle v}only if−p(x1)u′(x1)+a1u(x1)=0,andp(x2)u′(x2)+a2u(x2)=0.{\displaystyle -p(x_{1})u'(x_{1})+a_{1}u(x_{1})=0,\quad {\hbox{and}}\quad p(x_{2})u'(x_{2})+a_{2}u(x_{2})=0.}These latter conditions are thenatural boundary conditionsfor this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization. Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domainD{\displaystyle D}with boundaryB{\displaystyle B}in three dimensions we may defineQ[φ]=∭Dp(X)∇φ⋅∇φ+q(X)φ2dxdydz+∬Bσ(S)φ2dS,{\displaystyle Q[\varphi ]=\iiint _{D}p(X)\nabla \varphi \cdot \nabla \varphi +q(X)\varphi ^{2}\,dx\,dy\,dz+\iint _{B}\sigma (S)\varphi ^{2}\,dS,}andR[φ]=∭Dr(X)φ(X)2dxdydz.{\displaystyle R[\varphi ]=\iiint _{D}r(X)\varphi (X)^{2}\,dx\,dy\,dz.}Letu{\displaystyle u}be the function that minimizes the quotientQ[φ]/R[φ],{\displaystyle Q[\varphi ]/R[\varphi ],}with no condition prescribed on the boundaryB.{\displaystyle B.}The Euler–Lagrange equation satisfied byu{\displaystyle u}is−∇⋅(p(X)∇u)+q(x)u−λr(x)u=0,{\displaystyle -\nabla \cdot (p(X)\nabla u)+q(x)u-\lambda r(x)u=0,}whereλ=Q[u]R[u].{\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.}The minimizingu{\displaystyle u}must also satisfy the natural boundary conditionp(S)∂u∂n+σ(S)u=0,{\displaystyle p(S){\frac {\partial u}{\partial n}}+\sigma (S)u=0,}on the boundaryB.{\displaystyle B.}This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998) for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues and results concerning the nodes of the eigenfunctions are in Courant and Hilbert (1953). Fermat's principlestates that light takes a path that (locally) minimizes the optical length between its endpoints. If thex{\displaystyle x}-coordinate is chosen as the parameter along the path, andy=f(x){\displaystyle y=f(x)}along the path, then the optical length is given byA[f]=∫x0x1n(x,f(x))1+f′(x)2dx,{\displaystyle A[f]=\int _{x_{0}}^{x_{1}}n(x,f(x)){\sqrt {1+f'(x)^{2}}}dx,}where the refractive indexn(x,y){\displaystyle n(x,y)}depends upon the material. 
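The Rayleigh–Ritz method described above is easy to try numerically. The sketch below (numpy assumed; the polynomial basis and the grid are arbitrary illustrative choices) treats p = r = 1 and q = 0 on (0, π) with y(0) = y(π) = 0, for which the lowest eigenvalue is exactly 1 with eigenfunction sin x:

import numpy as np

xs = np.linspace(0.0, np.pi, 4001)
dx = xs[1] - xs[0]

n = 4   # trial functions x^k (pi - x), k = 1..n, all vanishing at the endpoints
phis  = np.array([xs**k * (np.pi - xs) for k in range(1, n + 1)])
dphis = np.array([k * xs**(k - 1) * (np.pi - xs) - xs**k for k in range(1, n + 1)])

# Q[y] = \int y'^2 dx and R[y] = \int y^2 dx assembled over the basis by quadrature
A = dphis @ dphis.T * dx
B = phis @ phis.T * dx

# Generalized eigenvalue problem A c = lambda B c; its smallest root is the Ritz value
lam = min(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(lam)   # ~1.0000, approaching the true eigenvalue from above as n grows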
If we tryf(x)=f0(x)+εf1(x){\displaystyle f(x)=f_{0}(x)+\varepsilon f_{1}(x)}then thefirst variationofA{\displaystyle A}(the derivative ofA{\displaystyle A}with respect to ε) isδA[f0,f1]=∫x0x1[n(x,f0)f0′(x)f1′(x)1+f0′(x)2+ny(x,f0)f11+f0′(x)2]dx.{\displaystyle \delta A[f_{0},f_{1}]=\int _{x_{0}}^{x_{1}}\left[{\frac {n(x,f_{0})f_{0}'(x)f_{1}'(x)}{\sqrt {1+f_{0}'(x)^{2}}}}+n_{y}(x,f_{0})f_{1}{\sqrt {1+f_{0}'(x)^{2}}}\right]dx.} After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation−ddx[n(x,f0)f0′1+f0′2]+ny(x,f0)1+f0′(x)2=0.{\displaystyle -{\frac {d}{dx}}\left[{\frac {n(x,f_{0})f_{0}'}{\sqrt {1+f_{0}'^{2}}}}\right]+n_{y}(x,f_{0}){\sqrt {1+f_{0}'(x)^{2}}}=0.} The light rays may be determined by integrating this equation. This formalism is used in the context ofLagrangian opticsandHamiltonian optics. There is a discontinuity of the refractive index when light enters or leaves a lens. Letn(x,y)={n(−)ifx<0,n(+)ifx>0,{\displaystyle n(x,y)={\begin{cases}n_{(-)}&{\text{if}}\quad x<0,\\n_{(+)}&{\text{if}}\quad x>0,\end{cases}}}wheren(−){\displaystyle n_{(-)}}andn(+){\displaystyle n_{(+)}}are constants. Then the Euler–Lagrange equation holds as before in the region wherex<0{\displaystyle x<0}orx>0,{\displaystyle x>0,}and in fact the path is a straight line there, since the refractive index is constant. At the interfacex=0,{\displaystyle x=0,}f{\displaystyle f}must be continuous, butf′{\displaystyle f'}may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the formδA[f0,f1]=f1(0)[n(−)f0′(0−)1+f0′(0−)2−n(+)f0′(0+)1+f0′(0+)2].{\displaystyle \delta A[f_{0},f_{1}]=f_{1}(0)\left[n_{(-)}{\frac {f_{0}'(0^{-})}{\sqrt {1+f_{0}'(0^{-})^{2}}}}-n_{(+)}{\frac {f_{0}'(0^{+})}{\sqrt {1+f_{0}'(0^{+})^{2}}}}\right].} The factor multiplyingn(−){\displaystyle n_{(-)}}is the sine of the angle of the incident ray with thex{\displaystyle x}axis, and the factor multiplyingn(+){\displaystyle n_{(+)}}is the sine of the angle of the refracted ray with thex{\displaystyle x}axis.Snell's lawfor refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to vanishing of the first variation of the optical path length. It is expedient to use vector notation: letX=(x1,x2,x3),{\displaystyle X=(x_{1},x_{2},x_{3}),}lett{\displaystyle t}be a parameter, letX(t){\displaystyle X(t)}be the parametric representation of a curveC,{\displaystyle C,}and letX˙(t){\displaystyle {\dot {X}}(t)}be its tangent vector.
The optical length of the curve is given byA[C]=∫t0t1n(X)X˙⋅X˙dt.{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}n(X){\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,dt.} Note that this integral is invariant with respect to changes in the parametric representation ofC.{\displaystyle C.}The Euler–Lagrange equations for a minimizing curve have the symmetric formddtP=X˙⋅X˙∇n,{\displaystyle {\frac {d}{dt}}P={\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,\nabla n,}whereP=n(X)X˙X˙⋅X˙.{\displaystyle P={\frac {n(X){\dot {X}}}{\sqrt {{\dot {X}}\cdot {\dot {X}}}}}.} It follows from the definition thatP{\displaystyle P}satisfiesP⋅P=n(X)2.{\displaystyle P\cdot P=n(X)^{2}.} Therefore, the integral may also be written asA[C]=∫t0t1P⋅X˙dt.{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}P\cdot {\dot {X}}\,dt.} This form suggests that if we can find a functionψ{\displaystyle \psi }whose gradient is given byP,{\displaystyle P,}then the integralA{\displaystyle A}is given by the difference ofψ{\displaystyle \psi }at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces ofψ.{\displaystyle \psi .}In order to find such a function, we turn to the wave equation, which governs the propagation of light. Thewave equationfor an inhomogeneous medium isutt=c2∇⋅∇u,{\displaystyle u_{tt}=c^{2}\nabla \cdot \nabla u,}wherec{\displaystyle c}is the velocity, which generally depends uponX.{\displaystyle X.}Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfyφt2=c(X)2∇φ⋅∇φ.{\displaystyle \varphi _{t}^{2}=c(X)^{2}\,\nabla \varphi \cdot \nabla \varphi .} We may look for solutions in the formφ(t,X)=t−ψ(X).{\displaystyle \varphi (t,X)=t-\psi (X).} In that case,ψ{\displaystyle \psi }satisfies∇ψ⋅∇ψ=n2,{\displaystyle \nabla \psi \cdot \nabla \psi =n^{2},}wheren=1/c.{\displaystyle n=1/c.}According to the theory offirst-order partial differential equations, ifP=∇ψ,{\displaystyle P=\nabla \psi ,}thenP{\displaystyle P}satisfiesdPds=n∇n,{\displaystyle {\frac {dP}{ds}}=n\,\nabla n,}along a system of curves (the light rays) that are given bydXds=P.{\displaystyle {\frac {dX}{ds}}=P.} These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identificationdsdt=X˙⋅X˙n.{\displaystyle {\frac {ds}{dt}}={\frac {\sqrt {{\dot {X}}\cdot {\dot {X}}}}{n}}.} We conclude that the functionψ{\displaystyle \psi }is the value of the minimizing integralA{\displaystyle A}as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated partial differential equation of first order is equivalent to finding families of solutions of the variational problem. This is the essential content of theHamilton–Jacobi theory, which applies to more general variational problems.
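Snell's law, obtained variationally above, can also be recovered by direct numerical minimization of the optical length of a two-segment path across the interface. In the sketch below (numpy assumed) the indices and the geometry are arbitrary illustrative choices:

import numpy as np

n1, n2 = 1.0, 1.5           # refractive indices on the two sides of the interface x = 0
a, b, d = 1.0, 1.0, 2.0     # source height, target depth, horizontal separation

# Optical length of a path that crosses the interface at horizontal offset x
xs = np.linspace(0.0, d, 200001)
length = n1 * np.sqrt(a**2 + xs**2) + n2 * np.sqrt(b**2 + (d - xs)**2)
x = xs[np.argmin(length)]

sin_i = x / np.sqrt(a**2 + x**2)                # sine of the angle of incidence
sin_r = (d - x) / np.sqrt(b**2 + (d - x)**2)    # sine of the angle of refraction
print(n1 * sin_i, n2 * sin_r)                   # equal up to grid resolution: Snell's law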
In classical mechanics, the action,S,{\displaystyle S,}is defined as the time integral of the Lagrangian,L.{\displaystyle L.}The Lagrangian is the difference of energies,L=T−U,{\displaystyle L=T-U,}whereT{\displaystyle T}is thekinetic energyof a mechanical system andU{\displaystyle U}itspotential energy.Hamilton's principle(or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integralS=∫t0t1L(x,x˙,t)dt{\displaystyle S=\int _{t_{0}}^{t_{1}}L(x,{\dot {x}},t)\,dt}is stationary with respect to variations in the pathx(t).{\displaystyle x(t).}The Euler–Lagrange equations for this system are known as Lagrange's equations:ddt∂L∂x˙=∂L∂x,{\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {x}}}}={\frac {\partial L}{\partial x}},}and they are equivalent to Newton's equations of motion (for such systems). The conjugate momentap{\displaystyle p}are defined byp=∂L∂x˙.{\displaystyle p={\frac {\partial L}{\partial {\dot {x}}}}.}For example, ifT=12mx˙2,{\displaystyle T={\frac {1}{2}}m{\dot {x}}^{2},}thenp=mx˙.{\displaystyle p=m{\dot {x}}.}Hamiltonian mechanicsresults if the conjugate momenta are introduced in place ofx˙{\displaystyle {\dot {x}}}by a Legendre transformation of the LagrangianL{\displaystyle L}into the HamiltonianH{\displaystyle H}defined byH(x,p,t)=px˙−L(x,x˙,t).{\displaystyle H(x,p,t)=p\,{\dot {x}}-L(x,{\dot {x}},t).}The Hamiltonian is the total energy of the system:H=T+U.{\displaystyle H=T+U.}Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function ofX.{\displaystyle X.}This function is a solution of theHamilton–Jacobi equation:∂ψ∂t+H(x,∂ψ∂x,t)=0.{\displaystyle {\frac {\partial \psi }{\partial t}}+H\left(x,{\frac {\partial \psi }{\partial x}},t\right)=0.} Further applications of the calculus of variations include problems in optics (as above), classical mechanics,optimal control theory, and the study ofminimal surfaces.
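As a concrete check of Lagrange's equations, the sketch below (sympy assumed, variable names ours) derives the equation of motion of a one-dimensional harmonic oscillator from its Lagrangian:

import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

T = m * sp.diff(q(t), t)**2 / 2     # kinetic energy
U = k * q(t)**2 / 2                 # potential energy
L = T - U

qd = sp.diff(q(t), t)
# Lagrange's equation: d/dt (dL/dq') - dL/dq = 0
eq = sp.Eq(sp.diff(sp.diff(L, qd), t) - sp.diff(L, q(t)), 0)
print(sp.simplify(eq))              # m*q''(t) + k*q(t) = 0, i.e. Newton's law for a spring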
Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. Thefirst variation[l]is defined as the linear part of the change in the functional, and thesecond variation[m]is defined as the quadratic part.[26] For example, ifJ[y]{\displaystyle J[y]}is a functional with the functiony=y(x){\displaystyle y=y(x)}as its argument, and there is a small change in its argument fromy{\displaystyle y}toy+h,{\displaystyle y+h,}whereh=h(x){\displaystyle h=h(x)}is a function in the same function space asy,{\displaystyle y,}then the corresponding change in the functional is[n]ΔJ[h]=J[y+h]−J[y].{\displaystyle \Delta J[h]=J[y+h]-J[y].} The functionalJ[y]{\displaystyle J[y]}is said to bedifferentiableifΔJ[h]=φ[h]+ε‖h‖,{\displaystyle \Delta J[h]=\varphi [h]+\varepsilon \|h\|,}whereφ[h]{\displaystyle \varphi [h]}is a linear functional,[o]‖h‖{\displaystyle \|h\|}is the norm ofh,{\displaystyle h,}[p]andε→0{\displaystyle \varepsilon \to 0}as‖h‖→0.{\displaystyle \|h\|\to 0.}The linear functionalφ[h]{\displaystyle \varphi [h]}is the first variation ofJ[y]{\displaystyle J[y]}and is denoted by,[30]δJ[h]=φ[h].{\displaystyle \delta J[h]=\varphi [h].} The functionalJ[y]{\displaystyle J[y]}is said to betwice differentiableifΔJ[h]=φ1[h]+φ2[h]+ε‖h‖2,{\displaystyle \Delta J[h]=\varphi _{1}[h]+\varphi _{2}[h]+\varepsilon \|h\|^{2},}whereφ1[h]{\displaystyle \varphi _{1}[h]}is a linear functional (the first variation),φ2[h]{\displaystyle \varphi _{2}[h]}is a quadratic functional,[q]andε→0{\displaystyle \varepsilon \to 0}as‖h‖→0.{\displaystyle \|h\|\to 0.}The quadratic functionalφ2[h]{\displaystyle \varphi _{2}[h]}is the second variation ofJ[y]{\displaystyle J[y]}and is denoted by,[32]δ2J[h]=φ2[h].{\displaystyle \delta ^{2}J[h]=\varphi _{2}[h].} The second variationδ2J[h]{\displaystyle \delta ^{2}J[h]}is said to bestrongly positiveifδ2J[h]≥k‖h‖2,{\displaystyle \delta ^{2}J[h]\geq k\|h\|^{2},}for allh{\displaystyle h}and for some constantk>0{\displaystyle k>0}.[33] Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated. Sufficient condition for a minimum: the functionalJ[y]{\displaystyle J[y]}has a minimum aty=f{\displaystyle y=f}if its first variationδJ[h]{\displaystyle \delta J[h]}vanishes aty=f{\displaystyle y=f}and its second variationδ2J[h]{\displaystyle \delta ^{2}J[h]}is strongly positive there.
https://en.wikipedia.org/wiki/Calculus_of_variations
Strategic planningis the activity undertaken by anorganizationthrough which it seeks to define its future direction and makesdecisionssuch as resource allocation aimed at achieving its intended goals. "Strategy" has many definitions, but it generally involves setting major goals, determining actions to achieve these goals, setting atimeline, and mobilizing resources to execute the actions. A strategy describes how the ends (goals) will be achieved by the means (resources) in a given span of time. Often, strategic planning is long-term, with organizational action steps established from two to five years in the future.[1]Strategy can be planned ("intended") or can be observed as a pattern of activity ("emergent") as the organization adapts to its environment or competes in the market. The senior leadership of an organization is generally tasked with determining strategy. It is executed by strategic planners orstrategists, who involve many parties and research sources in their analysis of the organization and its relationship to the environment in which it competes.[2]
Strategy includes processes of formulation andimplementation; strategic planning helps coordinate both. However, strategic planning is analytical in nature (i.e., it involves "finding the dots"); strategy formation itself involves synthesis (i.e., "connecting the dots") viastrategic thinking. As such, strategic planning occurs around the strategy formation activity.[2]
Strategic planning became prominent in corporations during the 1960s and remains an important aspect ofstrategic management. McKinsey & Companydeveloped acapability maturity modelin the 1970s to describe the sophistication of planning processes, with strategic management ranked the highest. The four stages range from basic financial planning, through forecast-based planning and externally oriented planning, to strategic management. Categories 3 and 4 are strategic planning, while the first two categories are non-strategic or essentially financial planning. Each stage builds on the previous stages; that is, a stage 4 organization completes activities in all four categories.[3]
In 1993, PresidentBill Clintonsigned into law theGovernment Performance and Results Act, which requiredUS federal agenciesto develop strategic plans for how they would deliver high quality products and services to the American people.[4] In the business sector, McKinsey research undertaken and published in 2006 found that, although many companies had a formal strategic-planning process, the process was not being used for their "most important decisions".[5] According to Michael C. Sekora, founder ofProject Socratesin theReagan White House, during theCold Warthe economically challengedSoviet Unionwas able to keep pace with western military capabilities by using technology-based planning while the U.S. was slowed by finance-based planning, until the Reagan administration launched Project Socrates; Sekora argues it should be revived to keep up withChinaas an emerging superpower.[6]
Strategic planning is a process and thus has inputs, activities, outputs and outcomes. This process, like all processes, has constraints. It may be formal or informal and is typically iterative, with feedback loops throughout the process. Some elements of the process may be continuous and others may be executed as discrete projects with a definitive start and end during a period. Strategic planning provides inputs forstrategic thinking: these are best seen as distinct but complementary activities.[7]Strategic thinking guides the actual strategy formation.
Typical strategic planning efforts include the evaluation of the organization's mission and strategic issues to strengthen current practices and determine the need for new programming.[8] The end result is the organization's strategy, including a diagnosis of the environment and competitive situation, a guiding policy on what the organization intends to accomplish, and key initiatives or action plans for achieving the guiding policy.[9] Michael Porter wrote in 1980 that formulation of competitive strategy includes consideration of four key elements. The first two relate to factors internal to the company (i.e., the internal environment), while the latter two relate to factors external to the company (i.e., the external environment).[10] These elements are considered throughout the strategic planning process. Data is gathered from various sources, such as interviews with key executives, review of publicly available documents on the competition or market, primary research (e.g., visiting or observing competitor places of business or comparing prices), industry studies, and reports of the organization's performance. This may be part of a competitive intelligence program. Inputs are gathered to help establish a baseline and to support an understanding of the competitive environment and its opportunities and risks. Other inputs include an understanding of the values of key stakeholders, such as the board, shareholders, and senior management. These values may be captured in an organization's vision and mission statements. The essence of formulating competitive strategy is relating a company to its environment. Strategic planning activities include meetings and other communication among the organization's leaders and personnel to develop a common understanding regarding the competitive environment and what the organization's response to that environment should be. A variety of strategic planning tools may be completed as part of strategic planning activities. The organization's leaders may have a series of questions they want answered in formulating the strategy and gathering inputs.[2][11] The output of strategic planning includes documentation and communication describing the organization's strategy and how it should be implemented, sometimes referred to as the strategic plan.[12] The strategy may include a diagnosis of the competitive situation, a guiding policy for achieving the organization's goals, and specific action plans to be implemented.[9] A strategic plan may cover multiple years and be updated periodically. The organization may use a variety of methods for measuring and monitoring progress towards the strategic objectives and measures established, such as a balanced scorecard or strategy map. Organizations may also plan their financial statements (i.e., balance sheets, income statements, and cash flows) for several years when developing their strategic plan, as part of the goal-setting activity. The term operational budget is often used to describe the expected financial performance of an organization for the upcoming year. Capital budgets very often form the backbone of a strategic plan, especially as strategy increasingly relates to information and communications technology (ICT). While the planning process produces outputs, strategy implementation or execution of the strategic plan produces outcomes. These outcomes will invariably differ from the strategic goals; how close they are to the strategic goals and vision will determine the success or failure of the strategic plan.
Unintended outcomes might also be an issue; they need to be attended to and understood for strategy development and execution to be a true learning process. A variety of analytical tools and techniques are used in strategic planning.[2] These were developed by companies and management consulting firms to help provide a framework for strategic planning. Strategic planning can be used in project management with a focus on the development of a standard, repeatable methodology that adds to the likelihood of achieving project objectives. This requires considerable deliberation and interaction among stakeholders. Strategic planning in project management provides an organization with a framework and consistency of action. In addition, it ensures communication of overall goals and an understanding of the roles of teams or individuals in achieving them. The commitment of top management must be evident throughout the process to reduce resistance to change, ensure acceptance, and avoid common pitfalls. Strategic planning does not guarantee success, but it helps improve an organization's likelihood of success.[14] Strategic planning is also desirable within educational institutions. Education is in a transitional period in which old practices are no longer permanent but require revision to meet the needs of academia; to meet the changing needs of this new society, educational institutions must reorganize.[15] Finding ways to maintain achievements while improving effectiveness, and keeping up with society's rapid changes, can be difficult for educational institutions. Some strategic planners are hesitant to address societal outcomes, so they often ignore them and assume they will happen on their own. Instead of defining the vision for how we want our children to live, they direct their attention to courses, content, and resources in the mistaken belief that societally useful outcomes will follow. When this occurs, the true strategic plan is never developed or implemented.[16] Simply extending financial statement projections into the future without consideration of the competitive environment is a form of financial planning or budgeting, not strategic planning. In business, the term "financial plan" is often used to describe the expected financial performance of an organization for future periods. The term "budget" is used for a financial plan for the upcoming year. A "forecast" is typically a combination of actual performance year-to-date plus expected performance for the remainder of the year, and so is generally compared against the plan or budget and prior performance. The financial plans accompanying a strategic plan may include three to five years of projected performance. Strategic planning has been criticized for attempting to systematize strategic thinking and strategy formation, which Henry Mintzberg argues are inherently creative activities involving synthesis, or "connecting the dots", which cannot be systematized. Mintzberg argues that strategic planning can help coordinate planning efforts and measure progress on strategic goals, but that it occurs "around" the strategy formation process rather than within it. Because it functions remote from the "front lines" or contact with the competitive environment (i.e., in business, facing the customer, where the effect of competition is most clearly evident), it may not be effective at supporting strategy efforts.[2] While much criticism surrounds strategic planning, evidence suggests that it does work.
In a 2019 meta-analysis including data from almost 9,000 public and private organizations, strategic planning was found to have a positive impact on organizational performance. Strategic planning is particularly potent in enhancing an organization's capacity to achieve its goals (i.e., effectiveness). However, the study argues that just having a plan is not enough. For strategic planning to work, it needs to include some formality (i.e., an analysis of the internal and external environment and the stipulation of strategies, goals and plans based on these analyses), comprehensiveness (i.e., producing many strategic options before selecting the course to follow) and careful stakeholder management (i.e., thinking carefully about whom to involve during the different steps of the strategic planning process, and how, when and why).[17] Henry Mintzberg, in the article "The Fall and Rise of Strategic Planning" (1994),[18] argued that the lesson that should be accepted is that managers will never be able to take charge of strategic planning through a formalized process. He therefore underscored the role of plans as tools to communicate and control, ensuring coordination so that everyone in the organization is moving in the same direction. Plans are the prime media for communicating management's strategic intentions, thereby promoting a common direction instead of individual discretion. They are also tools for securing the support of the organization's external sphere, such as financiers, suppliers or government agencies, who help achieve the organization's plans and goals.[18] Cornut et al. (2012)[19] studied the particular features of the strategic plan as a genre of communication by examining a corpus of strategic plans from public and non-profit organizations. They defined strategic plans as the "key material manifestation" of organizations' strategies and argued that, even though strategic plans are specific to an organization, there is a generic quality that draws on shared institutional understanding of the substance, form and communicative purposes of the strategic plan. Hence, they posit that the strategic plan is a genre of organizational communication (Bhatia, 2004; Yates and Orlikowski, 1992, as cited in Cornut et al., 2012).[19] In this sense, genre is defined as the "conventionalized discursive actions in which participating individuals or institutions have shared perceptions of communicative purposes as well as those of constraints operating their construction, interpretation and conditions of use" (Bhatia, 2004: 87; see also Frow, 2005; Swales, 1990, as cited in Cornut et al., 2012).[19] The authors compared the corpus of strategic plans with nine other corpora: annual reports from the public sector and non-governmental organizations, research articles, project plans, executive speeches, State of the Union addresses, horoscopes, religious sermons, business magazine articles, and annual reports of for-profit corporations included in the Standard & Poor's 500 largest companies (S&P 500). The authors used textual analysis, including content analysis and corpus linguistics. Content analysis was used to identify themes and concepts, such as values and cognition, while corpus linguistics was used to identify naturally occurring texts and patterns (Biber et al., 1998, as cited in Cornut et al., 2012).[19] The strategic plans showed significantly less self-reference than all other corpora, with the exception of project plans and S&P 500 annual reports.
The results indicated that strategic plans contain mostly verbs of moderate deontic value. This was interpreted as an indication that "commands and commitments are not overtly hedged, but neither are they particularly strong". Guidance on the sections of a strategic plan abounds, but there are few studies about the nature of the language used in these documents. Cornut et al.'s (2012)[19] study showed that writers of strategic plans have a shared understanding of what the appropriate language is. Thus, the authors argued, a true strategist is one who is able to instantiate the genre of the strategic plan through appropriate application of language.[19] Spee et al. (2011)[20] explored strategic planning as a communicative process based on Ricoeur's concepts of decontextualization and recontextualization. They conceptualize strategic planning activities as being constituted through the iterative and recursive relationship of talk and text, elaborating the construction of a strategic plan as a communicative process. This study looks at the way that texts within the planning process, such as PowerPoint presentations, planning documents and targets that are part of a strategic plan, are constructed through a series of communicative interactions. Throughout the process, strategy documents were essential in capturing the developing strategy, as they were constantly revised until an ultimate plan was accepted. A book edited by Mandeville-Gamble (2015) sees the roles of managers as important in communicating the strategic vision of the organization.[21] Many of the authors in the book agree that a strategic plan is merely an unrealized vision unless it is widely shared and sparks the willingness to change within individuals in the organization. Similarly, Goodman (2017)[22] emphasized that the internet and social media have become among the most important vehicles through which a corporate strategic plan can be distributed to an organization's internal and external stakeholders. This distribution of knowledge allows staff of the organization to access and share the institutional thinking and to reformulate it in their own words. Strategic planning through control mechanisms (mostly by way of a communication program) is set up in the hope of reaching desired outcomes that reflect company or organizational goals. Controls can be both measurable and intangible: specifically, output controls, behavioural controls, and clan controls. By way of simple definition, output controls work toward tangible and quantifiable results; behavioural controls are geared toward the behaviours of people in an organization; and clan controls are executed with norms, traditions, and organizational culture in mind. All three are implemented to keep systems and strategies running and focused on desired results. Strategic planning is both the impetus for and the result of critical thinking, optimization, and motivation for the growth and development of organizations. The core disciplines are systems thinking, personal and organizational mastery, mental models, building a shared vision, and team learning. In a time of machine learning and data analytics, these core disciplines remain relevant insofar as human resources and human interests remain the driving force behind organizations.
Moreover, it cannot be denied that communication plays a role in the realization of learning organizations and strategic planning. In a study by Barker and Camarata (1998),[23] the authors noted that several theories, ranging from Rational Choice Theory to Social Exchange Theory, can explain the invaluable role of communication, with costs, rewards, and outcomes valued in maintaining communication and thus the relationships that serve the ends of an organization and its members. Thus, while many organizations and companies try their best to become learning organizations and exercise strategic planning, without communication, relationships fail and the core disciplines are never truly met (Barker & Camarata, 1998).[23]
https://en.wikipedia.org/wiki/Strategic_planning
The following outline is provided as an overview of, and topical guide to, machine learning: Machine learning (ML) is a subfield of artificial intelligence within computer science that evolved from the study of pattern recognition and computational learning theory.[1] In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed".[2] ML involves the study and construction of algorithms that can learn from and make predictions on data.[3] These algorithms operate by building a model from a training set of example observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.
Dimensionality reduction
Ensemble learning
Meta-learning
Reinforcement learning
Supervised learning
Bayesian statistics
Decision tree algorithm
Linear classifier
Unsupervised learning
Artificial neural network
Association rule learning
Hierarchical clustering
Cluster analysis
Anomaly detection
Semi-supervised learning
Deep learning
History of machine learning
Machine learning projects:
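The definition above, building a model from a training set rather than following static program instructions, can be grounded with a minimal sketch. The following illustration is added here and is not drawn from the outline's sources; the toy data are hypothetical, and a plain nearest-neighbor classifier is used only as one of the simplest possible supervised learning methods.

import math

# Toy training set: each observation is (features, label).
# Features here are (height_cm, weight_kg); labels are class names.
training_set = [
    ((150.0, 45.0), "A"),
    ((160.0, 55.0), "A"),
    ((175.0, 80.0), "B"),
    ((180.0, 90.0), "B"),
]

def predict(features):
    """1-nearest-neighbor prediction: return the label of the closest
    training observation (Euclidean distance). The 'model' is simply
    the stored training data; no classification rules are hand-coded."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(training_set, key=lambda obs: distance(obs[0], features))
    return label

# The program was never given explicit rules for telling "A" from "B";
# its outputs are driven by the data it has seen.
print(predict((155.0, 50.0)))  # -> "A"
print(predict((178.0, 85.0)))  # -> "B"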
https://en.wikipedia.org/wiki/List_of_machine_learning_concepts
In computational complexity theory, a gadget is a subunit of a problem instance that simulates the behavior of one of the fundamental units of a different computational problem. Gadgets are typically used to construct reductions from one computational problem to another, as part of proofs of NP-completeness or other types of computational hardness. The component design technique is a method for constructing reductions by using gadgets.[1] Szabó (2009) traces the use of gadgets to a 1954 paper in graph theory by W. T. Tutte, in which Tutte provided gadgets for reducing the problem of finding a subgraph with given degree constraints to a perfect matching problem. However, the "gadget" terminology has a later origin, and does not appear in Tutte's paper.[2][3] Many NP-completeness proofs are based on many-one reductions from 3-satisfiability, the problem of finding a satisfying assignment to a Boolean formula that is a conjunction (Boolean AND) of clauses, each clause being the disjunction (Boolean OR) of three terms, and each term being a Boolean variable or its negation. A reduction from this problem to a hard problem on undirected graphs, such as the Hamiltonian cycle problem or graph coloring, would typically be based on gadgets in the form of subgraphs that simulate the behavior of the variables and clauses of a given 3-satisfiability instance. These gadgets would then be glued together to form a single graph, a hard instance for the graph problem under consideration.[4] For instance, the problem of testing 3-colorability of graphs may be proven NP-complete by a reduction from 3-satisfiability of this type. The reduction uses two special graph vertices, labeled "Ground" and "False", that are not part of any gadget. As shown in the figure, the gadget for a variable x consists of two vertices connected in a triangle with the ground vertex; one of the two vertices of the gadget is labeled with x and the other is labeled with the negation of x. The gadget for a clause (t0 ∨ t1 ∨ t2) consists of six vertices, connected to each other, to the vertices representing the terms t0, t1, and t2, and to the ground and false vertices by the edges shown. Any 3-CNF formula may be converted into a graph by constructing a separate gadget for each of its variables and clauses and connecting them as shown.[5] In any 3-coloring of the resulting graph, one may designate the three colors as true, false, and ground, where false and ground are the colors given to the false and ground vertices (necessarily different, as these vertices are made adjacent by the construction) and true is the remaining color not used by either of these vertices. Within a variable gadget, only two colorings are possible: the vertex labeled with the variable must be colored either true or false, and the vertex labeled with the variable's negation must correspondingly be colored either false or true. In this way, valid assignments of colors to the variable gadgets correspond one-for-one with truth assignments to the variables: the behavior of the gadget with respect to coloring simulates the behavior of a variable with respect to truth assignment. Each clause gadget has a valid 3-coloring if at least one of its adjacent term vertices is colored true, and cannot be 3-colored if all of its adjacent term vertices are colored false. In this way, the clause gadget can be colored if and only if the corresponding truth assignment satisfies the clause, so again the behavior of the gadget simulates the behavior of a clause. Agrawal et al.
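As a concrete illustration of the variable gadget just described, here is a small sketch added for this edition (not from the source; the clause gadget is omitted because its exact six-vertex wiring is given only by the article's figure). It builds one variable gadget and checks by brute force that, once the ground vertex's color is fixed, exactly two proper colorings remain, corresponding to the two truth values of the variable.

from itertools import product

def variable_gadget_edges(var, ground="Ground"):
    """Variable gadget for the 3-SAT -> 3-coloring reduction: the vertices
    for x and not-x form a triangle with the shared Ground vertex."""
    pos, neg = var, "not_" + var
    return [(pos, neg), (pos, ground), (neg, ground)]

def proper_colorings(vertices, edges, colors=("true", "false", "ground")):
    """Brute-force all proper colorings of a small graph."""
    for assignment in product(colors, repeat=len(vertices)):
        coloring = dict(zip(vertices, assignment))
        if all(coloring[u] != coloring[v] for u, v in edges):
            yield coloring

vertices = ["x", "not_x", "Ground"]
edges = variable_gadget_edges("x")

# Fix Ground's color (in the full reduction it is pinned by the
# Ground-False edge); the gadget then leaves exactly two colorings,
# x=true/not_x=false and x=false/not_x=true.
for c in proper_colorings(vertices, edges):
    if c["Ground"] == "ground":
        print(c["x"], c["not_x"])
# Prints the two complementary assignments:
#   true false
#   false true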
(1997) considered what they called "a radically simple form of gadget reduction", in which each bit describing part of a gadget may depend only on a bounded number of bits of the input, and used these reductions to prove an analogue of the Berman–Hartmanis conjecture stating that all NP-complete sets are polynomial-time isomorphic.[6] The standard definition of NP-completeness involves polynomial-time many-one reductions: a problem in NP is by definition NP-complete if every other problem in NP has a reduction of this type to it, and the standard way of proving that a problem in NP is NP-complete is to find a polynomial-time many-one reduction from a known NP-complete problem to it. But (in what Agrawal et al. called "a curious, often observed fact") all sets known to be NP-complete at that time could be proved complete using the stronger notion of AC0 many-one reductions: that is, reductions that can be computed by circuits of polynomial size, constant depth, and unbounded fan-in. Agrawal et al. proved that every set that is NP-complete under AC0 reductions is complete under an even more restricted type of reduction, NC0 many-one reductions, using circuits of polynomial size, constant depth, and bounded fan-in. In an NC0 reduction, each output bit of the reduction can depend only on a constant number of input bits.[6] The Berman–Hartmanis conjecture is an unsolved problem in computational complexity theory stating that all NP-complete sets are polynomial-time isomorphic. That is, if A and B are two NP-complete sets, there is a polynomial-time one-to-one reduction from A to B whose inverse is also computable in polynomial time. Agrawal et al. used their equivalence between AC0 reductions and NC0 reductions to show that all sets complete for NP under AC0 reductions are AC0-isomorphic.[6] One application of gadgets is in proving hardness of approximation results, by reducing a problem that is known to be hard to approximate to another problem whose hardness is to be proven. In this application, one typically has a family of instances of the first problem in which there is a gap in the objective function values, and in which it is hard to determine whether a given instance has an objective function value on the low side or on the high side of the gap. The reductions used in these proofs, and the gadgets used in the reductions, must preserve the existence of this gap, and the strength of the inapproximability result derived from the reduction will depend on how well the gap is preserved. Trevisan et al. (2000) formalize the problem of finding gap-preserving gadgets for families of constraint satisfaction problems in which the goal is to maximize the number of satisfied constraints.[7] They give as an example a reduction from 3-satisfiability to 2-satisfiability by Garey, Johnson & Stockmeyer (1976), in which the gadget representing a 3-SAT clause consists of ten 2-SAT clauses, and in which a truth assignment that satisfies the 3-SAT clause can also satisfy at least seven clauses in the gadget, while a truth assignment that fails to satisfy the 3-SAT clause satisfies at most six clauses of the gadget.[8] Using this gadget, and the fact that (unless P = NP) there is no polynomial-time approximation scheme for maximizing the number of 3-SAT clauses that a truth assignment satisfies, it can be shown that there is similarly no approximation scheme for MAX 2-SAT. Trevisan et al.
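This 7-versus-6 gap is easy to check by brute force. In the sketch below, the ten clauses follow the construction usually credited to Garey, Johnson & Stockmeyer, with one auxiliary variable d per 3-SAT clause; the exact clause set is reconstructed from the literature rather than quoted from this article, so treat it as an assumption.

from itertools import product

# Gadget for the 3-SAT clause (a or b or c), with auxiliary variable d.
# Each inner list is a 2-SAT (or unit) clause of literals (variable, polarity).
GADGET = [
    [("a", True)], [("b", True)], [("c", True)], [("d", True)],
    [("a", False), ("b", False)],
    [("a", False), ("c", False)],
    [("b", False), ("c", False)],
    [("a", True), ("d", False)],
    [("b", True), ("d", False)],
    [("c", True), ("d", False)],
]

def satisfied(clause, assignment):
    """A clause is satisfied if some literal matches the assignment."""
    return any(assignment[v] == want for v, want in clause)

for a, b, c in product([False, True], repeat=3):
    # Take the best choice of the auxiliary variable d.
    best = max(
        sum(satisfied(cl, {"a": a, "b": b, "c": c, "d": d}) for cl in GADGET)
        for d in (False, True)
    )
    # Expect 7 satisfied clauses whenever (a or b or c) holds, else 6.
    assert best == (7 if (a or b or c) else 6)
    print(a, b, c, "->", best)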
show that, in many cases of the constraint satisfaction problems they study, the gadgets leading to the strongest possible inapproximability results may be constructed automatically, as the solution to a linear programming problem. The same gadget-based reductions may also be used in the other direction, to transfer approximation algorithms from easier problems to harder problems. For instance, Trevisan et al. provide an optimal gadget for reducing 3-SAT to a weighted variant of 2-SAT (consisting of seven weighted 2-SAT clauses) that is stronger than the one by Garey, Johnson & Stockmeyer (1976); using it, together with known semidefinite programming approximation algorithms for MAX 2-SAT, they provide an approximation algorithm for MAX 3-SAT with approximation ratio 0.801, better than previously known algorithms.
https://en.wikipedia.org/wiki/Gadget_(computer_science)
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984,[1] defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."[2] While the software is being tested, the tester learns things that, together with experience and creativity, generate new good tests to run. Exploratory testing is often thought of as a black box testing technique. Instead, those who have studied it consider it a test approach that can be applied to any test technique, at any stage in the development process. The key is not the test technique nor the item being tested or reviewed; the key is the cognitive engagement of the tester, and the tester's responsibility for managing his or her time.[3] Exploratory testing has always been performed by skilled testers. In the early 1990s, "ad hoc" was too often synonymous with sloppy and careless work. As a result, a group of test methodologists (now calling themselves the Context-Driven School) began using the term "exploratory", seeking to emphasize the dominant thought process involved in unscripted testing, and to begin to develop the practice into a teachable discipline. This new terminology was first published by Cem Kaner in his book Testing Computer Software[4] and expanded upon in Lessons Learned in Software Testing.[5] Exploratory testing can be as disciplined as any other intellectual activity. Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing depends on the tester's skill at inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be. To explain further, a comparison can be made between freestyle exploratory testing and its antithesis, scripted testing. In the latter activity, test cases are designed in advance, including both the individual steps and the expected results. These tests are later performed by a tester who compares the actual result with the expected one. When performing exploratory testing, expectations are open. Some results may be predicted and expected; others may not. The tester configures, operates, observes, and evaluates the product and its behaviour, critically investigating the result, and reporting information that seems likely to be a bug (which threatens the value of the product to some person) or an issue (which threatens the quality of the testing effort). In reality, testing almost always is a combination of exploratory and scripted testing, but with a tendency towards either one, depending on context. According to Kaner and James Marcus Bach, exploratory testing is more a mindset or "...a way of thinking about testing" than a methodology.[6] They also say that it crosses a continuum from slightly exploratory (slightly ambiguous or vaguely scripted testing) to highly exploratory (freestyle exploratory testing).[7] The documentation of exploratory testing ranges from documenting all tests performed to just documenting the bugs.
During pair testing, two persons create test cases together: one performs them and the other documents. Session-based testing is a method specifically designed to make exploratory testing auditable and measurable on a wider scale. Exploratory testers often use tools, including screen capture or video tools as a record of the exploratory session, or tools to quickly help generate situations of interest, e.g. James Bach's Perlclip. The main advantage of exploratory testing is that less preparation is needed, important bugs are found quickly, and at execution time the approach tends to be more intellectually stimulating than execution of scripted tests. Another major benefit is that testers can use deductive reasoning based on previous results to guide their future testing on the fly. They do not have to complete a current series of scripted tests before focusing in on, or moving on to explore, a more target-rich environment. This also accelerates bug detection when used intelligently. Another benefit is that, after initial testing, most bugs are discovered by some sort of exploratory testing. This can be demonstrated logically by stating, "Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored." Disadvantages are that tests invented and performed on the fly cannot be reviewed in advance (a review that would help prevent errors in code and test cases), and that it can be difficult to show exactly which tests have been run. Freestyle exploratory test ideas, when revisited, are unlikely to be performed in exactly the same manner, which can be an advantage if it is important to find new errors, or a disadvantage if it is more important to repeat specific details of the earlier tests. This can be controlled with specific instruction to the tester, or by preparing automated tests where feasible, appropriate, and necessary, and ideally as close to the unit level as possible. A replicated experiment has shown that, while scripted and exploratory testing result in similar defect-detection effectiveness (the total number of defects found), exploratory testing results in higher efficiency (the number of defects per time unit), as no effort is spent on pre-designing the test cases.[8] An observational study of exploratory testers proposed that the use of knowledge about the domain, the system under test, and customers is an important factor explaining the effectiveness of exploratory testing.[9] A case study of three companies found that the ability to provide rapid feedback was a benefit of exploratory testing, while managing test coverage was identified as a shortcoming.[10] A survey found that exploratory testing is also used in critical domains and that the exploratory testing approach places high demands on the person performing the testing.[11]
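To make the contrast between scripted and exploratory testing concrete, here is a minimal sketch added for illustration (the slugify function and the session fields are hypothetical, not from the sources above). The scripted test fixes steps and expected results in advance, while the session record keeps open-ended exploration auditable, in the spirit of session-based testing.

# Function under test (hypothetical example).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Scripted testing: steps and expected results are designed in advance;
# execution just compares actual output against the expectation.
def test_slugify_scripted():
    cases = [
        ("Hello World", "hello-world"),
        ("  Already   spaced ", "already-spaced"),
    ]
    for title, expected in cases:
        assert slugify(title) == expected

# Exploratory testing: expectations are open. A session record (charter,
# notes on what was tried, bugs found) keeps the work auditable.
session = {
    "charter": "Explore slugify with unusual input (unicode, punctuation)",
    "notes": [
        "slugify('Caf\u00e9 Menu') -> 'caf\u00e9-menu' (is unicode intended?)",
        "slugify('a/b') keeps the slash -- likely a bug for URL use",
    ],
    "bugs": ["Slashes are not stripped from slugs"],
}

test_slugify_scripted()
print(session["charter"])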
https://en.wikipedia.org/wiki/Exploratory_testing