This is a list of topics around Boolean algebra and propositional logic.
https://en.wikipedia.org/wiki/List_of_Boolean_algebra_topics
An existential graph is a type of diagrammatic or visual notation for logical expressions, created by Charles Sanders Peirce, who wrote on graphical logic as early as 1882,[1] and continued to develop the method until his death in 1914. They include both a separate graphical notation for logical statements and a logical calculus, a formal system of rules of inference that can be used to derive theorems. Peirce found the algebraic notation (i.e. symbolic notation) of logic, especially that of predicate logic,[2] which was still very new during his lifetime and which he himself played a major role in developing, to be philosophically unsatisfactory, because the symbols had their meaning by mere convention. In contrast, he strove for a style of writing in which the signs literally carry their meaning within them[3] – in the terminology of his theory of signs: a system of iconic signs that resemble the represented objects and relations.[4] Thus, the development of an iconic, graphic and – as he intended – intuitive and easy-to-learn logical system was a project that Peirce worked on throughout his life. After at least one aborted approach – the "Entitative Graphs" – the closed system of "Existential Graphs" finally emerged from 1896 onwards. Although considered by their creator to be a clearly superior and more intuitive system, as a mode of writing and as a calculus, they had no major influence on the history of logic. This has been attributed to the facts that, for one, Peirce published little on this topic, and that the published texts were not written in a very understandable way;[5] and, for two, that the linear formula notation in the hands of experts is actually the less complex tool.[6] Hence, the existential graphs received little attention[7] or were seen as unwieldy.[8] From 1963 onwards, works by Don D. Roberts and J. 
Jay Zeman, in which Peirce's graphic systems were systematically examined and presented, led to a better understanding; even so, they have today found practical use within only one modern application – the conceptual graphs introduced by John F. Sowa in 1976, which are used in computer science to represent knowledge. However, existential graphs are increasingly reappearing as a subject of research in connection with a growing interest in graphical logic,[9] which is also expressed in attempts to replace the rules of inference given by Peirce with more intuitive ones.[10] The overall system of existential graphs is composed of three subsystems that build on each other: the alpha graphs, the beta graphs and the gamma graphs. The alpha graphs are a purely propositional logical system. Building on this, the beta graphs are a first-order logical calculus. The gamma graphs, which have not yet been fully researched and were not completed by Peirce, are understood as a further development of the alpha and beta graphs. When interpreted appropriately, the gamma graphs cover higher-order predicate logic as well as modal logic. As late as 1903, Peirce began a new approach, the "Tinctured Existential Graphs," with which he wanted to replace the previous systems of alpha, beta and gamma graphs and combine their expressiveness and performance in a single new system. Like the gamma graphs, the "Tinctured Existential Graphs" remained unfinished. As calculi, the alpha, beta and gamma graphs are sound (i.e., all expressions derived as graphs are semantically valid). The alpha and beta graphs are also complete (i.e., all propositionally or predicate-logically semantically valid expressions can be derived as alpha or beta graphs).[11] Peirce proposed three systems of existential graphs: Alpha nests in beta and gamma. Beta does not nest in gamma, quantified modal logic being more general than put forth by Peirce. The syntax is: Any well-formed part of a graph is a subgraph. 
The semantics are: Hence the alpha graphs are a minimalist notation for sentential logic, grounded in the expressive adequacy of And and Not. The alpha graphs constitute a radical simplification of the two-element Boolean algebra and the truth functors. The depth of an object is the number of cuts that enclose it. Rules of inference: Rules of equivalence: A proof manipulates a graph by a series of steps, with each step justified by one of the above rules. If a graph can be reduced by steps to the blank page or an empty cut, it is what is now called a tautology (or the complement thereof, a contradiction). Graphs that cannot be simplified beyond a certain point are analogues of the satisfiable formulas of first-order logic. In the case of beta graphs, the atomic expressions are no longer propositional letters (P, Q, R, ...) or statements ("It rains," "Peirce died in poverty"), but predicates in the sense of predicate logic (see there for more details), possibly abbreviated to predicate letters (F, G, H, ...). A predicate in the sense of predicate logic is a sequence of words with clearly defined spaces that becomes a propositional sentence if you insert a proper noun into each space. For example, the word sequence "_ is a human" is a predicate because it gives rise to the declarative sentence "Peirce is a human" if you enter the proper name "Peirce" in the blank space. Likewise, the word sequence "_1 is richer than _2" is a predicate, because it results in the statement "Socrates is richer than Plato" if the proper names "Socrates" and "Plato" are inserted into the respective spaces. The basic language device is the line of identity, a thickly drawn line of any form. The identity line docks onto the blank space of a predicate to show that the predicate applies to at least one individual. In order to express that the predicate "_ is a human being" applies to at least one individual – i.e. 
to say that there is (at least) one human being – one writes an identity line in the blank space of the predicate "_ is a human being": The beta graphs can be read as a system in which all formulas are to be taken as closed, because all variables are implicitly quantified. If the "shallowest" part of a line of identity has even (odd) depth, the associated variable is tacitly existentially (universally) quantified. Zeman (1964) was the first to note that the beta graphs are isomorphic to first-order logic with equality (also see Zeman 1967). However, the secondary literature, especially Roberts (1973) and Shin (2002), does not agree on just how this is so. Peirce's writings do not address this question, because first-order logic was first clearly articulated only after his death, in the 1928 first edition of David Hilbert and Wilhelm Ackermann's Principles of Mathematical Logic. Add to the syntax of alpha a second kind of simple closed curve, written using a dashed rather than a solid line. Peirce proposed rules for this second style of cut, which can be read as the primitive unary operator of modal logic. Zeman (1964) was the first to note that the gamma graphs are equivalent to the well-known modal logics S4 and S5. Hence the gamma graphs can be read as a peculiar form of normal modal logic. This finding of Zeman's has received little attention to this day, but is nonetheless included here as a point of interest. The existential graphs are a curious offspring of Peirce the logician/mathematician with Peirce the founder of a major strand of semiotics. Peirce's graphical logic is but one of his many accomplishments in logic and mathematics. In a series of papers beginning in 1867, and culminating with his classic paper in the 1885 American Journal of Mathematics, Peirce developed much of the two-element Boolean algebra, propositional calculus, quantification and the predicate calculus, and some rudimentary set theory. Model theorists consider Peirce the first of their kind. He also extended De Morgan's relation algebra. 
He stopped short of metalogic (which eluded even Principia Mathematica). But Peirce's evolving semiotic theory led him to doubt the value of logic formulated using conventional linear notation, and to prefer that logic and mathematics be notated in two (or even three) dimensions. His work went beyond Euler's diagrams and Venn's 1880 revision thereof. Frege's 1879 work Begriffsschrift also employed a two-dimensional notation for logic, but one very different from Peirce's. Peirce's first published paper on graphical logic (reprinted in Vol. 3 of his Collected Papers) proposed a system dual (in effect) to the alpha existential graphs, called the entitative graphs. He very soon abandoned this formalism in favor of the existential graphs. In 1911 Victoria, Lady Welby showed the existential graphs to C. K. Ogden, who felt they could usefully be combined with Welby's thoughts in a "less abstruse form."[12] Otherwise they attracted little attention during his life and were invariably denigrated or ignored after his death, until the PhD theses by Roberts (1964) and Zeman (1964). Currently, the chronological critical edition of Peirce's works, the Writings, extends only to 1892. Much of Peirce's work on logical graphs consists of manuscripts written after that date and still unpublished. Hence our understanding of Peirce's graphical logic is likely to change as the remaining 23 volumes of the chronological edition appear.
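The alpha-graph semantics sketched above (juxtaposition on the sheet means conjunction, a cut means negation of its contents) can be illustrated with a toy evaluator. This is only a sketch of the semantics, not Peirce's graphical notation: a graph is modeled as a Python list whose string elements are propositional letters and whose nested-list elements are cuts.

```python
# Toy model of alpha graphs (an illustrative assumption, not Peirce's
# own formalism): strings are propositional letters, nested lists are
# cuts. A level of the graph is the conjunction of its items; a cut
# negates the conjunction of its contents.

def eval_alpha(graph, valuation):
    """Evaluate an alpha graph under a truth-value assignment."""
    result = True
    for item in graph:
        if isinstance(item, str):          # a letter written on the sheet
            value = valuation[item]
        else:                              # a cut: negate its contents
            value = not eval_alpha(item, valuation)
        result = result and value
    return result

# "P implies Q" is classically drawn as a cut enclosing P together with
# a cut enclosing Q: not(P and not Q).
implication = [["P", ["Q"]]]
print(eval_alpha(implication, {"P": True, "Q": False}))  # False
print(eval_alpha(implication, {"P": True, "Q": True}))   # True
print(eval_alpha([[]], {}))  # the empty cut is a contradiction: False
```

The blank sheet (an empty list) evaluates to true, matching the reading of the blank page as a tautology.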
https://en.wikipedia.org/wiki/Logical_graph
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth, which in classical logic has only two possible values (true or false).[1][2] Truth values are used in computing as well as various types of logic. In some programming languages, any expression can be evaluated in a context that expects a Boolean data type. Typically (though this varies by programming language) expressions like the number zero, the empty string, empty lists, and null are treated as false, and strings with content (like "abc"), other numbers, and objects evaluate to true. Sometimes these classes of expressions are called falsy and truthy. For example, in Lisp, nil, the empty list, is treated as false, and all other values are treated as true. In C, the number 0 or 0.0 is false, and all other values are treated as true. In JavaScript, the empty string (""), null, undefined, NaN, +0, −0 and false[3] are sometimes called falsy (of which the complement is truthy) to distinguish between strictly type-checked and coerced Booleans (see also: JavaScript syntax#Type conversion).[4] As opposed to Python, empty containers (Arrays, Maps, Sets) are considered truthy. Languages such as PHP also use this approach. In classical logic, with its intended semantics, the truth values are true (denoted by 1 or the verum ⊤), and untrue or false (denoted by 0 or the falsum ⊥); that is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. Corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. Logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws: ¬(p ∧ q) ⇔ ¬p ∨ ¬q and ¬(p ∨ q) ⇔ ¬p ∧ ¬q. Propositional variables become variables in the Boolean domain. Assigning values for propositional variables is referred to as valuation. 
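The falsy/truthy coercion described above can be checked directly. A quick Python sketch (Python is one of the languages mentioned above; note that, unlike JavaScript, Python treats empty containers as falsy):

```python
# Python's truth-value testing: zero, empty strings, empty containers
# and None coerce to False in a Boolean context; most other values
# coerce to True.
falsy_examples = [0, 0.0, "", [], {}, set(), None]
truthy_examples = ["abc", 42, [0], " "]

print([bool(v) for v in falsy_examples])   # all False
print([bool(v) for v in truthy_examples])  # all True
```

In JavaScript the same experiment would classify `[]` and `{}` as truthy, which is the contrast the text draws.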
Whereas in classical logic truth values form a Boolean algebra, in intuitionistic logic, and more generally in constructive mathematics, the truth values form a Heyting algebra. Such truth values may express various aspects of validity, including locality, temporality, or computational content. For example, one may use the open sets of a topological space as intuitionistic truth values, in which case the truth value of a formula expresses where the formula holds, not whether it holds. In realizability, truth values are sets of programs, which can be understood as computational evidence of validity of a formula. For example, the truth value of the statement "for every number there is a prime larger than it" is the set of all programs that take as input a number n and output a prime larger than n. In category theory, truth values appear as the elements of the subobject classifier. In particular, in a topos every formula of higher-order logic may be assigned a truth value in the subobject classifier. Even though a Heyting algebra may have many elements, this should not be understood as there being truth values that are neither true nor false, because intuitionistic logic proves ¬(p ≠ ⊤ ∧ p ≠ ⊥) ("it is not the case that p is neither true nor false").[5] In intuitionistic type theory, the Curry–Howard correspondence exhibits an equivalence of propositions and types, according to which validity is equivalent to inhabitation of a type. For other notions of intuitionistic truth values, see the Brouwer–Heyting–Kolmogorov interpretation and Intuitionistic logic § Semantics. Multi-valued logics (such as fuzzy logic and relevance logic) allow for more than two truth values, possibly containing some internal structure. For example, on the unit interval [0,1] such structure is a total order; this may be expressed as the existence of various degrees of truth. 
Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions. For example, intuitionistic logic lacks a complete set of truth values because its semantics, the Brouwer–Heyting–Kolmogorov interpretation, is specified in terms of provability conditions, and not directly in terms of the necessary truth of formulae. But even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, compared to the Boolean algebra semantics of classical propositional calculus.
https://en.wikipedia.org/wiki/Logical_value
In the mathematical study of cellular automata, Rule 90 is an elementary cellular automaton based on the exclusive or function. It consists of a one-dimensional array of cells, each of which can hold either a 0 or a 1 value. In each time step all values are simultaneously replaced by the XOR of their two neighboring values.[1] Martin, Odlyzko & Wolfram (1984) call it "the simplest non-trivial cellular automaton",[2] and it is described extensively in Stephen Wolfram's 2002 book A New Kind of Science.[3] When started from a single live cell, Rule 90 has a time-space diagram in the form of a Sierpiński triangle. The behavior of any other configuration can be explained as a superposition of copies of this pattern, combined using the exclusive or function. Any configuration with only finitely many nonzero cells becomes a replicator that eventually fills the array with copies of itself. When Rule 90 is started from a random initial configuration, its configuration remains random at each time step. Its time-space diagram forms many triangular "windows" of different sizes, patterns that form when a consecutive row of cells becomes simultaneously zero and then cells with value 1 gradually move into this row from both ends. Some of the earliest studies of Rule 90 were made in connection with an unsolved problem in number theory, Gilbreath's conjecture, on the differences of consecutive prime numbers. This rule is also connected to number theory in a different way, via Gould's sequence. This sequence counts the number of nonzero cells in each time step after starting Rule 90 with a single live cell. Its values are powers of two, with exponents equal to the number of nonzero digits in the binary representation of the step number. Other applications of Rule 90 have included the design of tapestries. Every configuration of Rule 90 has exactly four predecessors, other configurations that form the given configuration after a single step. 
Therefore, in contrast to many other cellular automata such as Conway's Game of Life, Rule 90 has no Garden of Eden, a configuration with no predecessors. It provides an example of a cellular automaton that is surjective (each configuration has a predecessor) but not injective (it has sets of more than one configuration with the same successor). It follows from the Garden of Eden theorem that Rule 90 is locally injective (all configurations with the same successor vary at an infinite number of cells). Rule 90 is an elementary cellular automaton. That means that it consists of a one-dimensional array of cells, each of which holds a single binary value, either 0 or 1. An assignment of values to all of the cells is called a configuration. The automaton is given an initial configuration, and then progresses through other configurations in a sequence of discrete time steps. At each step, all cells are updated simultaneously. A pre-specified rule determines the new value of each cell as a function of its previous value and of the values in its two neighboring cells. All cells obey the same rule, which may be given either as a formula or as a rule table that specifies the new value for each possible combination of neighboring values.[1] In the case of Rule 90, each cell's new value is the exclusive or of the two neighboring values. Equivalently, the next state of this particular automaton is governed by the following rule table:[1] The name of Rule 90 comes from Stephen Wolfram's binary-decimal notation for one-dimensional cellular automaton rules. To calculate the notation for the rule, concatenate the new states in the rule table into a single binary number, and convert the number into decimal: 01011010₂ = 90₁₀.[1] Rule 90 has also been called the Sierpiński automaton, due to the characteristic Sierpiński triangle shape it generates,[4] and the Martin–Odlyzko–Wolfram cellular automaton after the early research of Olivier Martin, Andrew M. 
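The update rule just described (each new value is the XOR of the two old neighbors) fits in a few lines of Python. This is a minimal sketch on a finite row, assuming cells beyond the array boundary are permanently 0:

```python
def rule90_step(row):
    """One Rule 90 update on a finite row, treating cells outside as 0."""
    padded = [0] + list(row) + [0]
    # each new cell is the XOR of its two neighbors in the old row
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

# Starting from a single live cell, successive rows trace out the
# Sierpinski triangle (equivalently, Pascal's triangle mod 2).
row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(4):
    print("".join("#" if c else "." for c in row))
    row = rule90_step(row)
```

Running this prints the familiar nested-triangle pattern row by row.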
Odlyzko, and Stephen Wolfram (1984) on this automaton.[5] A configuration in Rule 90 can be partitioned into two subsets of cells that do not interact with each other. One of these two subsets consists of the cells in even positions at even time steps and the cells in odd positions at odd time steps. The other subset consists of the cells in even positions at odd time steps and the cells in odd positions at even time steps. Each of these two subsets can be viewed as a cellular automaton with only its half of the cells.[6] The rule for the automaton within each of these subsets is equivalent (except for a shift by half a cell per time step) to another elementary cellular automaton, Rule 102, in which the new state of each cell is the exclusive or of its old state and its right neighbor. That is, the behavior of Rule 90 is essentially the same as the behavior of two interleaved copies of Rule 102.[7] Rule 90 and Rule 102 are called additive cellular automata. This means that, if two initial states are combined by computing the exclusive or of each of their states, then their subsequent configurations will be combined in the same way. More generally, one can partition any configuration of Rule 90 into two subsets with disjoint nonzero cells, evolve the two subsets separately, and compute each successive configuration of the original automaton as the exclusive or of the configurations on the same time steps of the two subsets.[2] The Rule 90 automaton (in its equivalent form on one of the two independent subsets of alternating cells) was investigated in the early 1970s, in an attempt to gain additional insight into Gilbreath's conjecture on the differences of consecutive prime numbers. In the triangle of numbers generated from the primes by repeatedly applying the forward difference operator, it appears that most values are either 0 or 2. In particular, Gilbreath's conjecture asserts that the leftmost values in each row of this triangle are all 0 or 2. 
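The additivity property described above can be demonstrated directly: evolving the cellwise XOR of two initial states gives the cellwise XOR of their separate evolutions. A small sketch (the particular rows and step count are arbitrary choices):

```python
# Additivity of Rule 90: the update is linear over GF(2), so the
# evolution of a XOR b equals the XOR of the evolutions of a and b.
def rule90_step(row):
    padded = [0] + list(row) + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

a = [0, 1, 0, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 1, 0]
combined = [x ^ y for x, y in zip(a, b)]

for _ in range(3):
    a, b, combined = rule90_step(a), rule90_step(b), rule90_step(combined)

print(combined == [x ^ y for x, y in zip(a, b)])  # True
```

Because the step function is linear, the superposition holds at every time step, not just the last one checked here.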
When a contiguous subsequence of values in one row of the triangle are all 0 or 2, then Rule 90 can be used to determine the corresponding subsequence in the next row. Miller (1970) explained the rule by a metaphor of tree growth in a forest, entitling his paper on the subject "Periodic forests of stunted trees". In this metaphor, a tree begins growing at each position of the initial configuration whose value is 1, and this forest of trees then grows simultaneously, to a new height above the ground at each time step. Each nonzero cell at each time step represents a position that is occupied by a growing tree branch. At each successive time step, a branch can grow into one of the two cells above it to its left and right only when there is no other branch competing for the same cell. A forest of trees growing according to these rules has exactly the same behavior as Rule 90.[8] From any initial configuration of Rule 90, one may form a mathematical forest, a directed acyclic graph in which every vertex has at most one outgoing edge, whose trees are the same as the trees in Miller's metaphor. The forest has a vertex for each pair (x, i) such that cell x is nonzero at time i. The vertices at time 0 have no outgoing edges; each one forms the root of a tree in the forest. For each vertex (x, i) with i nonzero, its outgoing edge goes to (x ± 1, i − 1), the unique nonzero neighbor of x in time step i − 1. Miller observed that these forests develop triangular "clearings", regions of the time-space diagram with no nonzero cells bounded by a flat bottom edge and diagonal sides. Such a clearing is formed when a consecutive sequence of cells becomes zero simultaneously in one time step, and then (in the tree metaphor) branches grow inwards, eventually re-covering the cells of the sequence.[8] For random initial conditions, the boundaries between the trees formed in this way themselves shift in a seemingly random pattern, and trees frequently die out altogether. 
But by means of the theory of shift registers, he and others were able to find initial conditions in which the trees all remain alive forever, the pattern of growth repeats periodically, and all of the clearings can be guaranteed to remain bounded in size.[8][9] Miller used these repeating patterns to form the designs of tapestries. Some of Miller's tapestries depict physical trees; others visualize the Rule 90 automaton using abstract patterns of triangles.[8] The time-space diagram of Rule 90 is a plot in which the ith row records the configuration of the automaton at step i. When the initial state has a single nonzero cell, this diagram has the appearance of the Sierpiński triangle, a fractal formed by combining triangles into larger triangles. Rules 18, 22, 26, 82, 146, 154, 210 and 218 also generate Sierpiński triangles from a single cell, although not all of them do so in exactly the same way. One way to explain this structure uses the fact that, in Rule 90, each cell is the exclusive or of its two neighbors. Because this is equivalent to modulo-2 addition, the rule generates the modulo-2 version of Pascal's triangle. The diagram has a 1 wherever Pascal's triangle has an odd number, and a 0 wherever Pascal's triangle has an even number. This is a discrete version of the Sierpiński triangle.[1][10] The number of live cells in each row of this pattern is a power of two. In the ith row, it equals 2^k, where k is the number of nonzero digits in the binary representation of the number i. The sequence of these numbers of live cells is known as Gould's sequence. Starting from a single live cell, this count of live cells follows a sawtooth pattern: in some time steps the number of live cells grows arbitrarily large, while in others it returns, infinitely often, to only two live cells. 
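The 2^k formula for Gould's sequence can be checked by simulation. A sketch, assuming an array wide enough that the pattern never reaches the boundary (the width and the 16-step horizon are arbitrary choices):

```python
# Verify Gould's sequence: the number of live cells in row i of Rule 90,
# started from a single live cell, is 2**popcount(i).
def rule90_step(row):
    padded = [0] + list(row) + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

row = [0] * 31 + [1] + [0] * 32   # single live cell, wide enough for 16 steps
counts = []
for i in range(16):
    counts.append(sum(row))
    row = rule90_step(row)

gould = [2 ** bin(i).count("1") for i in range(16)]
print(counts == gould)  # True
```

The sequence begins 1, 2, 2, 4, 2, 4, 4, 8, ..., matching the powers of two described in the text.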
The growth rate of this pattern has a characteristic growing sawtooth wave shape that can be used to recognize physical processes that behave similarly to Rule 90.[4] The Sierpiński triangle also occurs in a more subtle way in the evolution of any configuration in Rule 90. At any time step i in the rule's evolution, the state of any cell can be calculated as the exclusive or of a subset of the cells in the initial configuration. That subset has the same shape as the ith row of the Sierpiński triangle.[11] In the Sierpiński triangle, for any integer i, the rows numbered by multiples of 2^i have nonzero cells spaced at least 2^i units apart. Therefore, because of the additive property of Rule 90, if an initial configuration consists of a finite pattern P of nonzero cells with width less than 2^i, then in steps that are multiples of 2^i, the configuration will consist of copies of P spaced at least 2^i units from start to start. This spacing is wide enough to prevent the copies from interfering with each other. The number of copies is the same as the number of nonzero cells in the corresponding row of the Sierpiński triangle. Thus, in this rule, every pattern is a replicator: it generates multiple copies of itself that spread out across the configuration, eventually filling the whole array. Other rules including the von Neumann universal constructor, Codd's cellular automaton, and Langton's loops also have replicators that work by carrying and copying a sequence of instructions for building themselves. In contrast, the replication in Rule 90 is trivial and automatic.[12] In Rule 90, on an infinite one-dimensional lattice, every configuration has exactly four predecessor configurations. This is because, in a predecessor, any two consecutive cells may have any combination of states, but once those two cells' states are chosen, there is only one consistent choice for the states of the remaining cells. Therefore, there is no Garden of Eden in Rule 90, a configuration with no predecessors. 
The Rule 90 configuration consisting of a single nonzero cell (with all other cells zero) has no predecessors that have finitely many nonzeros. However, this configuration is not a Garden of Eden because it does have predecessors with infinitely many nonzeros.[13] The fact that every configuration has a predecessor may be summarized by saying that Rule 90 is surjective. The function that maps each configuration to its successor is, mathematically, a surjective function. Rule 90 is also not injective. In an injective rule, every two different configurations have different successors, but Rule 90 has pairs of configurations with the same successor. Rule 90 provides an example of a cellular automaton that is surjective but not injective. The Garden of Eden theorem of Moore and Myhill implies that every injective cellular automaton must be surjective, but this example shows that the converse is not true.[13][14] Because each configuration has only a bounded number of predecessors, the evolution of Rule 90 preserves the entropy of any configuration. In particular, if an infinite initial configuration is selected by choosing the state of each cell independently at random, with each of the two states being equally likely to be selected, then each subsequent configuration can be described by exactly the same probability distribution.[2] Many other cellular automata and other computational systems are capable of emulating the behavior of Rule 90. For instance, a configuration in Rule 90 may be translated into a configuration of the different elementary cellular automaton Rule 22. The translation replaces each Rule 90 cell by three consecutive Rule 22 cells. These cells are all zero if the Rule 90 cell is itself zero. A nonzero Rule 90 cell is translated into a one followed by two zeros. With this transformation, every six steps of the Rule 22 automaton simulate a single step of the Rule 90 automaton. 
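The four-predecessors argument above can be made concrete on a finite window: once the first two cells of a predecessor are fixed, every further cell is forced by the relation c[j] = p[j] XOR p[j+2], and the two free seed bits give exactly four choices. A sketch:

```python
# Construct the predecessors of a Rule 90 configuration on a window.
# Since c[j] = p[j] ^ p[j+2], choosing the first two cells p0, p1
# determines the rest; the 2 x 2 seed choices give four predecessors.
def predecessor(config, p0, p1):
    p = [p0, p1]
    for c in config:
        p.append(c ^ p[-2])
    return p

def rule90_interior(row):
    # Rule 90 applied to the interior cells of a finite row
    return [row[i - 1] ^ row[i + 1] for i in range(1, len(row) - 1)]

config = [1, 0, 1, 1, 0]
for p0 in (0, 1):
    for p1 in (0, 1):
        p = predecessor(config, p0, p1)
        print(rule90_interior(p) == config)  # True for all four seeds
```

All four reconstructed rows are distinct and each steps forward to the given configuration, mirroring the counting argument for the infinite lattice.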
Similar direct simulations of Rule 90 are also possible for the elementary cellular automata Rule 45 and Rule 126, for certain string rewriting systems and tag systems, and in some two-dimensional cellular automata including Wireworld. Rule 90 can also simulate itself in the same way. If each cell of a Rule 90 configuration is replaced by a pair of consecutive cells, the first containing the original cell's value and the second containing zero, then this doubled configuration has the same behavior as the original configuration at half the speed.[15] Various other cellular automata are known to support replicators, patterns that make copies of themselves, and most share the same behavior as in the tree growth model for Rule 90. A new copy is placed to either side of the replicator pattern, as long as the space there is empty. However, if two replicators both attempt to copy themselves into the same position, then the space remains blank. In either case the replicators themselves vanish, leaving their copies to carry on the replication. A standard example of this behavior is the "bowtie pasta" pattern in the two-dimensional HighLife rule. This rule behaves in many ways like Conway's Game of Life, but such a small replicator does not exist in Life. Whenever an automaton supports replicators with the same growth pattern, one-dimensional arrays of replicators can be used to simulate Rule 90.[16] Rule 90 (on finite rows of cells) can also be simulated by the block oscillators of the two-dimensional Life-like cellular automaton B36/S125, also called "2x2", and the behavior of Rule 90 can be used to characterize the possible periods of these oscillators.[17]
https://en.wikipedia.org/wiki/Rule_90
An XOR linked list is a type of data structure used in computer programming. It takes advantage of the bitwise XOR operation to decrease storage requirements for doubly linked lists by storing the composition of both addresses in one field. While the composed address is not meaningful on its own, during traversal it can be combined with knowledge of the last-visited node address to deduce the address of the following node. An ordinary doubly linked list stores addresses of the previous and next list items in each list node, requiring two address fields. An XOR linked list compresses the same information into one address field by storing the bitwise XOR (here denoted by ⊕) of the address for previous and the address for next in one field. When traversing the list from left to right: supposing the cursor is at C, the previous item, B, may be XORed with the value in the link field (B⊕D). The address for D will then be obtained and list traversal may resume. The same pattern applies in the other direction, i.e. addr(D) = link(C) ⊕ addr(B), where link(C) = addr(B) ⊕ addr(D), so addr(D) = addr(B) ⊕ addr(D) ⊕ addr(B), since addr(B) ⊕ addr(B) = 0 and addr(D) ⊕ 0 = addr(D). The XOR operation cancels addr(B), which appears twice in the equation, and all we are left with is addr(D). To start traversing the list in either direction from some point, the address of two consecutive items is required. If the addresses of the two consecutive items are reversed, list traversal will occur in the opposite direction.[1] The key is the first operation, and the properties of XOR: The R2 register always contains the XOR of the address of current item C with the address of the predecessor item P: C⊕P. The link fields in the records contain the XOR of the left and right successor addresses, say L⊕R. XOR of R2 (C⊕P) with the current link field (L⊕R) yields C⊕P⊕L⊕R. In each case, the result is the XOR of the current address with the next address. XOR of this with the current address in R1 leaves the next address. R2 is left with the requisite XOR pair of the (now) current address and the predecessor. 
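The traversal scheme above can be sketched without raw pointers by simulating memory as a mapping from made-up integer addresses to nodes. This is an illustrative model only; a real implementation needs pointer-to-integer casts (e.g. C's `uintptr_t`):

```python
# XOR-linked-list traversal on simulated memory: a dict from made-up
# integer addresses to (value, link) pairs, where link is the XOR of
# the previous and next node addresses (0 stands for null).
memory = {}

def add_node(addr, value, prev_addr, next_addr):
    memory[addr] = (value, prev_addr ^ next_addr)

# Build A(100) <-> B(200) <-> C(300) <-> D(400)
add_node(100, "A", 0, 200)
add_node(200, "B", 100, 300)
add_node(300, "C", 200, 400)
add_node(400, "D", 300, 0)

def traverse(start):
    """Walk the list from an end node, deducing each next address."""
    values, prev, cur = [], 0, start
    while cur:
        value, link = memory[cur]
        values.append(value)
        # link = prev ^ next, so next = link ^ prev
        prev, cur = cur, link ^ prev
    return values

print(traverse(100))  # ['A', 'B', 'C', 'D']
print(traverse(400))  # ['D', 'C', 'B', 'A']
```

Starting from the other end node reverses the traversal direction, exactly as described in the text.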
Computer systems have increasingly cheap and plentiful memory, so storage overhead is not generally an overriding issue outside specialized embedded systems. Where it is still desirable to reduce the overhead of a linked list, unrolling provides a more practical approach (as well as other advantages, such as increasing cache performance and speeding random access). The underlying principle of the XOR linked list can be applied to any reversible binary operation. Replacing XOR by addition or subtraction gives slightly different, but largely equivalent, formulations: The addition linked list has exactly the same properties as the XOR linked list, except that a zero link field is not a "mirror". The address of the next node in the list is given by subtracting the previous node's address from the current node's link field. The subtraction linked list differs from the standard "traditional" XOR linked list in that the instruction sequence needed to traverse the list forwards is different from the sequence needed to traverse the list in reverse. The address of the next node, going forwards, is given by adding the link field to the previous node's address; the address of the preceding node is given by subtracting the link field from the next node's address. The subtraction linked list is also special in that the entire list can be relocated in memory without needing any patching of pointer values, since adding a constant offset to each address in the list will not require any changes to the values stored in the link fields. (See also serialization.) This is an advantage over both XOR linked lists and traditional linked lists. The XOR linked list concept can be generalized to XOR binary search trees.[3]
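The relocation property of the subtraction linked list follows from simple arithmetic: an interior node's link field stores addr(next) − addr(prev), and adding the same offset to both addresses leaves the difference unchanged. A sketch with made-up integer addresses:

```python
# Subtraction linked list relocation: interior link fields store
# addr(next) - addr(prev), which is invariant under shifting every
# address by a constant offset. Addresses here are made-up integers.
def interior_links(addrs):
    """Link fields for the nodes that have both neighbors."""
    return [addrs[i + 1] - addrs[i - 1] for i in range(1, len(addrs) - 1)]

addrs = [1000, 1040, 1100, 1160]   # four nodes in list order
moved = [a + 4096 for a in addrs]  # relocate the whole list

print(interior_links(addrs) == interior_links(moved))  # True: no patching

# Forward traversal: addr(next) = link(current) + addr(prev)
prev, cur = addrs[0], addrs[1]
visited = [prev, cur]
for link in interior_links(addrs):
    prev, cur = cur, link + prev
    visited.append(cur)
print(visited == addrs)  # True
```

Note that, as in the XOR variant, traversal needs two consecutive addresses to get started; the asymmetry is that the reverse direction uses subtraction instead of addition.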
https://en.wikipedia.org/wiki/XOR_linked_list
Roger Frontenac (fl. 1950) was a French navy officer and a scholar of Nostradamus' prophecies. He proposed an interpretation system for the text of Les Prophéties, based upon a form of cryptography known as the Vigenère table.

As a navy officer, Roger Frontenac was in charge of military ciphers. After World War II, he began to study the work of Nostradamus, treating it as any other message from an enemy. He searched for any hint about decoding methods. The name of Nostradamus' son César led Frontenac to suspect the use of a Caesar cipher.[citation needed]

He published his treatise about Nostradamus' letters and works, La clef secrète de Nostradamus ('The Secret Key of Nostradamus'), in 1950. In the book, Frontenac professed his belief in Nostradamus as a true prophet, who made correct foretellings, and held that the centuries (French: Les Prophéties) contained true predictions about future events until the year 3797. However, Frontenac contended that those predictions were hidden, mixed, and not understandable before the events occurred. His conclusions were based on a combination of several cryptographic methods, including a systematic alteration of the metrical order of the quatrains' texts.
This process was inspired by Nostradamus' use of the expression rabouter obscurément ('to mix in order to make them obscure') in a letter.[1]

The systematic reordering of quatrains, according to Frontenac, could be achieved using a pair of combined keys, and he stated that he had managed to find the first key (a typical Vigenère text, easy to hold in memory), the Latin phrase:

    Flamen fidele coegi id vulgo a Kabalò opplevi in viva acta tam latenter densa ex HDMP fata hac cult sunt ob gratiae fidos Nostradamus fas obturavit a saxo

In translation:

    Loyal and inspired by the flame (of the Flamini priests), I conceived and gathered what ordinary people call Kabbalò. I had hidden it in living documents (Magical Actas) that are extremely condensed. The facts of destiny are in this way obscured, using the "HDMP" [perhaps the number 841216[citation needed]]. For those who believe in Divine Grace, Nostradamus has enclosed it in (or behind) a stone.
https://en.wikipedia.org/wiki/Roger_Frontenac
Michel de Nostredame (December 1503 – July 1566[1]), usually Latinised as Nostradamus,[a] was a French astrologer, apothecary, physician, and reputed seer, who is best known for his book Les Prophéties (published in 1555), a collection of 942[b] poetic quatrains allegedly predicting future events.

Nostradamus's father's family had originally been Jewish, but had converted to Catholic Christianity a generation before Nostradamus was born. He studied at the University of Avignon, but was forced to leave after just over a year when the university closed due to an outbreak of the plague. He worked as an apothecary for several years before entering the University of Montpellier, hoping to earn a doctorate, but was almost immediately expelled after his work as an apothecary (a manual trade forbidden by university statutes) was discovered. He first married in 1531, but his wife and two children died in 1534 during another plague outbreak. He worked against the plague alongside other doctors before remarrying, to Anne Ponsarde, with whom he had six children. He wrote an almanac for 1550 and, as a result of its success, continued writing them for future years as he began working as an astrologer for various wealthy patrons. Catherine de' Medici became one of his foremost supporters. His Les Prophéties, published in 1555, relied heavily on historical and literary precedent, and initially received mixed reception. He suffered from severe gout toward the end of his life, which eventually developed into edema. He died on 1 or 2 July 1566. Many popular authors have retold apocryphal legends about his life.
In the years since the publication of his Les Prophéties, Nostradamus has attracted many supporters, who, along with some of the popular press, credit him with having accurately predicted many major world events.[7][8] Academic sources reject the notion that Nostradamus had any genuine supernatural prophetic abilities and maintain that the associations made between world events and Nostradamus's quatrains are the result of (sometimes deliberate) misinterpretations or mistranslations.[9] These academics also argue that Nostradamus's predictions are characteristically vague, meaning they could be applied to virtually anything, and are useless for determining whether their author had any real prophetic powers.

Nostradamus was born on either 14 or 21 December 1503 in Saint-Rémy-de-Provence, Provence, France,[10] where his claimed birthplace still exists, and baptized Michel.[10] He was one of at least nine children of notary Jaume (or Jacques) de Nostredame and Reynière, granddaughter of Pierre de Saint-Rémy, who worked as a physician in Saint-Rémy.[10] Jaume's family had originally been Jewish, but his father, Cresquas, a grain and money dealer based in Avignon, had converted to Catholicism around 1459–60, taking the Christian name "Pierre" and the surname "Nostredame" (Our Lady), the saint on whose day his conversion was solemnised.[10] The earliest ancestor who can be identified on the paternal side is Astruge of Carcassonne, who died about 1420. Michel's known siblings included Delphine, Jean (c. 1507–1577), Pierre, Hector, Louis, Bertrand, Jean II (born 1522) and Antoine (born 1523).[11][12][13] Little else is known about his childhood, although there is a persistent tradition that he was educated by his maternal great-grandfather Jean de St. Rémy[14]—a tradition which is somewhat undermined by the fact that the latter disappears from the historical record after 1504, when the child was only one year old.[15]

At the age of 14,[7] Nostradamus entered the University of Avignon to study for his baccalaureate. After little more than a year (when he would have studied the regular trivium of grammar, rhetoric and logic rather than the more advanced quadrivium of geometry, arithmetic, music, and astronomy/astrology), he was forced to leave Avignon when the university closed its doors during an outbreak of the plague. After leaving Avignon, Nostradamus, by his own account, traveled the countryside for eight years from 1521, researching herbal remedies. In 1529, after some years as an apothecary, he entered the University of Montpellier to study for a doctorate in medicine. He was expelled shortly afterwards by the student procurator, Guillaume Rondelet, when it was discovered that he had been an apothecary, a "manual trade" expressly banned by the university statutes, and had been slandering doctors.[16] The expulsion document, BIU Montpellier, Register S 2 folio 87, still exists in the faculty library.[17] Some of his publishers and correspondents would later call him "Doctor". After his expulsion, Nostradamus continued working, presumably still as an apothecary, and became famous for creating a "rose pill" that purportedly protected against the plague.[18]

In 1531 Nostradamus was invited by Jules-César Scaliger, a leading Renaissance scholar, to come to Agen.[19] There he married a woman of uncertain name (possibly Henriette d'Encausse), with whom he had two children.[20] In 1534 his wife and children died, presumably from the plague.
After their deaths, he continued to travel, passing through France and possibly Italy.[21]

On his return in 1545, he assisted the prominent physician Louis Serre in his fight against a major plague outbreak in Marseille, and then tackled further outbreaks of disease on his own in Salon-de-Provence and in the regional capital, Aix-en-Provence. Finally, in 1547, he settled in Salon-de-Provence in the house which exists today, where he married a rich widow named Anne Ponsarde, with whom he had six children—three daughters and three sons.[22] Between 1556 and 1567 he and his wife acquired a one-thirteenth share in a huge canal project, organised by Adam de Craponne, to create the Canal de Craponne to irrigate the largely waterless Salon-de-Provence and the nearby Désert de la Crau from the river Durance.[23]

After another visit to Italy, Nostradamus began to move away from medicine and toward the "occult". Following popular trends, he wrote an almanac for 1550, for the first time in print Latinising his name to Nostradamus. He was so encouraged by the almanac's success that he decided to write one or more annually. Taken together, they are known to have contained at least 6,338 prophecies,[24][25] as well as at least eleven annual calendars, all of them starting on 1 January and not, as is sometimes supposed, in March. It was mainly in response to the almanacs that the nobility and other prominent people from far away soon started asking for horoscopes and "psychic" advice from him, though he generally expected his clients to supply the birth charts on which these would be based, rather than calculating them himself as a professional astrologer would have done.
When obliged to attempt this himself on the basis of the published tables of the day, he frequently made errors and failed to adjust the figures for his clients' place or time of birth.[26][27][c][28]

He then began his project of writing a book of one thousand mainly French quatrains, which constitute the largely undated prophecies for which he is most famous today. Feeling vulnerable to opposition on religious grounds,[29] he devised a method of obscuring his meaning by using "Virgilianised" syntax, word games and a mixture of other languages such as Greek, Italian, Latin, and Provençal.[30] For technical reasons connected with their publication in three instalments (the publisher of the third and last instalment seems to have been unwilling to start it in the middle of a "Century", or book of 100 verses), the last fifty-eight quatrains of the seventh "Century" have not survived in any extant edition.

The quatrains, published in a book titled Les Prophéties (The Prophecies), received a mixed reaction when they were published. Some people thought Nostradamus was a servant of evil, a fake, or insane, while many of the elite evidently thought otherwise. Catherine de' Medici, wife of King Henry II of France, was one of Nostradamus's greatest admirers. After reading his almanacs for 1555, which hinted at unnamed threats to the royal family, she summoned him to Paris to explain them and to draw up horoscopes for her children. At the time, he feared that he would be beheaded,[31] but by the time of his death in 1566, Queen Catherine had made him Counselor and Physician-in-Ordinary to her son, the young King Charles IX of France.

Some accounts of Nostradamus's life state that he was afraid of being persecuted for heresy by the Inquisition, but neither prophecy nor astrology fell in this bracket, and he would have been in danger only if he had practised magic to support them.
In 1538 he came into conflict with the Church in Agen after an Inquisitor visited the area looking for anti-Catholic views.[32] His brief imprisonment at Marignane in late 1561 was because he had violated a recent royal decree by publishing his 1562 almanac without the prior permission of a bishop.[33]

By 1566, Nostradamus's gout, which had plagued him painfully for many years and made movement very difficult, turned into edema. In late June he summoned his lawyer to draw up an extensive will bequeathing his property plus 3,444 crowns (around US$300,000 today), minus a few debts, to his wife pending her remarriage, in trust for her sons pending their twenty-fifth birthdays and for her daughters pending their marriages. This was followed by a much shorter codicil.[34] On the evening of 1 July, he is alleged to have told his secretary Jean de Chavigny, "You will not find me alive at sunrise." The next morning he was reportedly found dead, lying on the floor next to his bed and a bench (Presage 141 [originally 152] for November 1567, as posthumously edited by Chavigny to fit what happened).[35][25] He was buried in the local Franciscan chapel in Salon (part of it now incorporated into the restaurant La Brocherie), but re-interred during the French Revolution in the Collégiale Saint-Laurent, where his tomb remains to this day.[36]

In The Prophecies Nostradamus compiled his collection of major, long-term predictions. The first installment was published in 1555 and contained 353 quatrains. The third edition, with three hundred new quatrains, was reportedly printed in 1558, but now survives only as part of the omnibus edition that was published after his death in 1568. This version contains one unrhymed and 941 rhymed quatrains, grouped into nine sets of 100 and one of 42, called "Centuries". Given printing practices at the time (which included typesetting from dictation), no two editions turned out to be identical, and it is relatively rare to find even two copies that are exactly the same.
Certainly there is no warrant for assuming—as would-be "code-breakers" are prone to do—that either the spellings or the punctuation of any edition are Nostradamus's originals.[6]

The Almanacs, by far the most popular of his works,[37] were published annually from 1550 until his death. He often published two or three in a year, entitled either Almanachs (detailed predictions), Prognostications or Presages (more generalised predictions).

Nostradamus was not only a diviner, but a professional healer. It is known that he wrote at least two books on medical science. One was an extremely free translation (or rather a paraphrase) of The Protreptic of Galen (Paraphrase de C. GALIEN, sus l'Exhortation de Menodote aux estudes des bonnes Artz, mesmement Medicine), and in his so-called Traité des fardemens (basically a medical cookbook containing, once again, materials borrowed mainly from others), he included a description of the methods he used to treat the plague, including bloodletting, none of which apparently worked.[38] The same book also describes the preparation of cosmetics.

A manuscript normally known as the Orus Apollo also exists in the Lyon municipal library, where upwards of 2,000 original documents relating to Nostradamus are stored under the aegis of Michel Chomarat. It is a purported translation of an ancient Greek work on Egyptian hieroglyphs, based on later Latin versions, all of them unfortunately ignorant of the true meanings of the ancient Egyptian script, which was not correctly deciphered until Champollion in the 19th century.[39]

Since his death, only the Prophecies have continued to be popular, but in this case they have been quite extraordinarily so. Over two hundred editions of them have appeared in that time, together with over 2,000 commentaries.
Their persistence in popular culture seems to be partly because their vagueness and lack of dating make it easy to quote them selectively after every major dramatic event and retrospectively claim them as "hits".[40]

Nostradamus claimed to base his published predictions on judicial astrology—the astrological 'judgment', or assessment, of the 'quality' (and thus potential) of events such as births, weddings, coronations etc.—but was heavily criticised by professional astrologers of the day such as Laurens Videl[42] for incompetence and for assuming that "comparative horoscopy" (the comparison of future planetary configurations with those accompanying known past events) could actually predict what would happen in the future.[43]

Research suggests that much of his prophetic work paraphrases collections of ancient end-of-the-world prophecies (mainly Bible-based), supplemented with references to historical events and anthologies of omen reports, and then projects those into the future in part with the aid of comparative horoscopy. Hence the many predictions involving ancient figures such as Sulla, Gaius Marius, Nero, and others, as well as his descriptions of "battles in the clouds" and "frogs falling from the sky".[44] Astrology itself is mentioned only twice in Nostradamus's Preface and 41 times in the Centuries themselves, but more frequently in his dedicatory Letter to King Henry II. In the last quatrain of his sixth century he specifically attacks astrologers.

His historical sources include easily identifiable passages from Livy, Suetonius' The Twelve Caesars, Plutarch and other classical historians, as well as from medieval chroniclers such as Geoffrey of Villehardouin and Jean Froissart. Many of his astrological references are taken almost word for word from Richard Roussat's Livre de l'estat et mutations des temps of 1549–50.
One of his major prophetic sources was evidently the Mirabilis Liber of 1522, which contained a range of prophecies by Pseudo-Methodius, the Tiburtine Sibyl, Joachim of Fiore, Savonarola and others (his Preface contains 24 biblical quotations, all but two in the order used by Savonarola). This book had enjoyed considerable success in the 1520s, when it went through half a dozen editions, but did not sustain its influence, perhaps owing to its mostly Latin text (mixed with ancient Greek and modern French and Provençal),[45] Gothic script and many difficult abbreviations. Nostradamus was one of the first to re-paraphrase these prophecies in French, which may explain why they are credited to him. Modern views of plagiarism did not apply in the 16th century; authors frequently copied and paraphrased passages without acknowledgement, especially from the classics. The latest research suggests that he may in fact have used bibliomancy for this—randomly selecting a book of history or prophecy and taking his cue from whatever page it happened to fall open at.[7]

Further material was gleaned from the De honesta disciplina of 1504 by Petrus Crinitus,[46] which included extracts from Michael Psellos's De daemonibus, and the De Mysteriis Aegyptiorum (Concerning the mysteries of Egypt), a book on Chaldean and Assyrian magic by Iamblichus, a 4th-century Neo-Platonist. Latin versions of both had recently been published in Lyon, and extracts from both are paraphrased (in the second case almost literally) in his first two verses, the first of which is appended to this article. While it is true that Nostradamus claimed in 1555 to have burned all of the occult works in his library, no one can say exactly what books were destroyed in this fire. Only in the 17th century did people start to notice his reliance on earlier, mainly classical sources.[d]

Nostradamus's reliance on historical precedent is reflected in the fact that he explicitly rejected the label "prophet" (i.e.
a person having prophetic powers of his own) on several occasions:[47]

    Although, my son, I have used the word prophet, I would not attribute to myself a title of such lofty sublimity.

    Not that I would attribute to myself either the name or the role of a prophet.

    [S]ome of [the prophets] predicted great and marvelous things to come: [though] for me, I in no way attribute to myself such a title here.

    Not that I am foolish enough to claim to be a prophet.

Given this reliance on literary sources, it is unlikely that Nostradamus used any particular methods for entering a trance state, other than contemplation, meditation and incubation.[50] His sole description of this process is contained in 'letter 41' of his collected Latin correspondence.[51] The popular legend that he attempted the ancient methods of flame gazing, water gazing or both simultaneously is based on a naive reading of his first two verses, which merely liken his efforts to those of the Delphic and Branchidic oracles. The first of these is reproduced at the bottom of this article and the second can be seen by visiting the relevant facsimile site (see External Links). In his dedication to King Henry II, Nostradamus describes "emptying my soul, mind and heart of all care, worry and unease through mental calm and tranquility", but his frequent references to the "bronze tripod" of the Delphic rite are usually preceded by the words "as though" (compare, once again, External References to the original texts).

Most of the quatrains deal with disasters, such as plagues, earthquakes, wars, floods, invasions, murders, droughts, and battles—all undated and based on foreshadowings by the Mirabilis Liber. Some quatrains cover these disasters in overall terms; others concern a single person or small group of people.
Some cover a single town, others several towns in several countries.[52] A major, underlying theme is an impending invasion of Europe by Muslim forces from farther east and south, headed by the expected Antichrist, directly reflecting the then-current Ottoman invasions and the earlier Saracen equivalents, as well as the prior expectations of the Mirabilis Liber.[53] All of this is presented in the context of the supposedly imminent end of the world—even though this is not in fact mentioned[54]—a conviction that sparked numerous collections of end-time prophecies at the time, including an unpublished collection by Christopher Columbus.[55][56]

Views on Nostradamus have varied widely throughout history.[57] Academic views, such as those of Jacques Halbronn, regard Nostradamus's Prophecies as antedated forgeries written by later authors for political reasons.[57] Many of Nostradamus's supporters believe his prophecies are genuine.[57] Owing to the subjective nature of these interpretations, no two of them completely agree on what Nostradamus predicted, whether for the past or for the future.[57] Many supporters do agree, for example, that he predicted the Great Fire of London, the French Revolution, the rise of Napoleon and of Adolf Hitler,[58][e] both world wars, and the nuclear destruction of Hiroshima and Nagasaki.[57][28] Popular authors frequently claim that he predicted whatever major event had just happened at the time of each of their books' publication, such as the Apollo Moon landing in 1969, the Space Shuttle Challenger disaster in 1986, the death of Diana, Princess of Wales, in 1997, and the September 11 attacks on the World Trade Center in 2001.[28][59] This 'movable feast' aspect appears to be characteristic of the genre.[57]

Possibly the first of these books to become popular in English was Henry C. Roberts' The Complete Prophecies of Nostradamus of 1947, reprinted at least seven times during the next forty years, which contained both transcriptions and translations, with brief commentaries.
This was followed in 1961 (reprinted in 1982) by Edgar Leoni's Nostradamus and His Prophecies. After that came Erika Cheetham's The Prophecies of Nostradamus, incorporating a reprint of the posthumous 1568 edition, which was reprinted, revised and republished several times from 1973 onwards, latterly as The Final Prophecies of Nostradamus. This served as the basis for the documentary The Man Who Saw Tomorrow, and both did indeed mention possible generalised future attacks on New York (via nuclear weapons), though not specifically on the World Trade Center or on any particular date.[60]

A two-part translation of Jean-Charles de Fontbrune's Nostradamus: historien et prophète was published in 1980, and John Hogue has published a number of books on Nostradamus from about 1987, including Nostradamus and the Millennium: Predictions of the Future, Nostradamus: The Complete Prophecies (1999) and Nostradamus: A Life and Myth (2003). In 1992 one commentator who claimed to be able to contact Nostradamus under hypnosis even had him "interpreting" his own verse X.6 (a prediction specifically about floods in southern France around the city of Nîmes and people taking refuge in its collosse, or Colosseum, a Roman amphitheatre now known as the Arènes) as a prediction of an undated attack on the Pentagon, despite the historical seer's clear statement in his dedicatory letter to King Henri II that his prophecies were about Europe, North Africa and part of Asia Minor.[61]

With the exception of Roberts, these books and their many popular imitators were almost unanimous not merely about Nostradamus's powers of prophecy but also in inventing intriguing aspects of his purported biography: that he had been a descendant of the Israelite tribe of Issachar; that he had been educated by his grandfathers, who had both been physicians to the court of Good King René of Provence; that he had attended Montpellier University in 1525 to gain his first degree; that, after returning there in 1529, he had successfully taken his medical doctorate; that he had gone on to lecture in the Medical Faculty there, until his views became too unpopular; that he had supported the heliocentric view of the universe; that he had travelled to the Habsburg Netherlands, where he had composed prophecies at the abbey of Orval; that, in the course of his travels, he had performed a variety of prodigies, including identifying a future pope, Sixtus V, who was then only a seminary monk; that he had successfully cured the Plague at Aix-en-Provence and elsewhere; that he had engaged in scrying, using either a magic mirror or a bowl of water; that he had been joined by his secretary Chavigny at Easter 1554; that, having published the first installment of his Prophéties, he had been summoned by Queen Catherine de' Medici to Paris in 1556 specifically in order to discuss with her his prophecy at quatrain I.35 that her husband King Henri II would be killed in a duel; that he had examined the royal children at Blois; that he had bequeathed to his son a "lost book" of his own prophetic paintings;[f] that he had been buried standing up; and that he had been found, when dug up at the French Revolution, to be wearing a medallion bearing the exact date of his disinterment.[62]

This last legend was first recorded by Samuel Pepys as early as 1667, long before the French Revolution. Pepys records in his celebrated diary a legend that, before his death, Nostradamus made the townsfolk swear that his grave would never be disturbed; but that 60 years later his body was exhumed, whereupon a brass plaque was found on his chest correctly stating the date and time when his grave would be opened and cursing the exhumers.[63]

In 2000, Li Hongzhi claimed that the 1999 prophecy at X.72 was a prediction of the Chinese Falun Gong persecution, which began in July 1999, leading to an increased interest in Nostradamus among Falun Gong members.[64]

From the 1980s onward, an academic reaction set in, especially in France.
The publication in 1983 of Nostradamus's private correspondence[65] and, during succeeding years, of the original editions of 1555 and 1557 discovered by Chomarat and Benazra, together with the unearthing of much original archival material,[36][66] revealed that much that was claimed about Nostradamus did not fit the documented facts. The academics[36][62][66][67] revealed that not one of the claims just listed was backed up by any known contemporary documentary evidence. Most of them had evidently been based on unsourced rumours relayed as fact by much later commentators, such as Jaubert (1656), Guynaud (1693) and Bareste (1840); on modern misunderstandings of the 16th-century French texts; or on pure invention. Even the often-advanced suggestion that quatrain I.35 had successfully prophesied King Henry II's death did not actually appear in print for the first time until 1614, 55 years after the event.[68][69]

Skeptics such as James Randi suggest that his reputation as a prophet is largely manufactured by modern-day supporters who fit his words to events that have either already occurred or are so imminent as to be inevitable, a process sometimes known as "retroactive clairvoyance" (postdiction).
No Nostradamus quatrain is known to have been interpreted as predicting a specific event before it occurred, other than in vague, general terms that could equally apply to any number of other events.[70] This even applies to quatrains that contain specific dates, such as III.77, which predicts "in 1727, in October, the king of Persia [shall be] captured by those of Egypt"—a prophecy that has, as ever, been interpreted retrospectively in the light of later events, in this case as though it presaged the known peace treaty between the Ottoman Empire and Persia of that year;[71] Egypt was also an important Ottoman territory at this time.[72] Similarly, Nostradamus's notorious "1999" prophecy at X.72 (see Nostradamus in popular culture) describes no event that commentators have succeeded in identifying either before or since, other than by twisting the words to fit whichever of the many contradictory happenings they claim as "hits".[73] Moreover, no quatrain suggests, as is often claimed by books and films on the alleged Mayan Prophecy, that the world would end in December 2012.[74] In his preface to the Prophecies, Nostradamus himself stated that his prophecies extend "from now to the year 3797"[75]—an extraordinary date which, given that the preface was written in 1555, may have more than a little to do with the fact that 2242 (i.e. 3797 − 1555) had recently been proposed by his major astrological source Richard Roussat as a possible date for the end of the world.[76][77]

Additionally, scholars have pointed out that almost all English translations of Nostradamus's quatrains are of extremely poor quality: they seem to display little or no knowledge of 16th-century French, are tendentious, and are sometimes intentionally altered in order to make them fit whatever events the translator believed they were supposed to refer to (or vice versa).[78][67][79] None of them were based on the original editions: Roberts had based his writings on that of 1672, Cheetham and Hogue on the posthumous edition of 1568. Even Leoni accepted on page 115 that he had never seen an original edition, and on earlier pages he indicated that much of his biographical material was unsourced.[80]

None of this research and criticism was originally known to most of the English-language commentators, by dint of the dates when they were writing and, to some extent, the language in which it was written.[81] Hogue was in a position to take advantage of it, but it was only in 2003 that he accepted that some of his earlier biographical material had in fact been apocryphal. Meanwhile, some of the more recent sources listed (Lemesurier, Gruber, Wilson) have been particularly scathing about later attempts by some lesser-known authors and Internet enthusiasts to extract alleged hidden meanings from the texts, whether with the aid of anagrams, numerical codes, graphs or otherwise.[57]

An additional indictment is found in a connection to Nazi propaganda. Goebbels reportedly presented some of Nostradamus's work as containing Third Reich references. This allegedly was done to make it look as though the 1,000-year triumphant reign of the German people expected under Nazism had been prophesied by Nostradamus. In particular, a line referring to "that people which stands under the sign of the crooked cross" was added as an allusion to the German people standing under the Nazi flag with its swastika. Goebbels reportedly had that line inserted into leather-bound original volumes of Nostradamus's work, volumes that were then seeded in libraries across Nazi-occupied Europe so that the line would seem credible.[82]

The prophecies retold and expanded by Nostradamus figured largely in popular culture in the 20th and 21st centuries. As well as being the subject of hundreds of books (both fiction and nonfiction), Nostradamus's life has been depicted in several films and videos, and his life and writings continue to be a subject of media interest.
https://en.wikipedia.org/wiki/Nostradamus
Title 21 CFR Part 11 is the part of Title 21 of the Code of Federal Regulations that establishes the United States Food and Drug Administration (FDA) regulations on electronic records and electronic signatures (ERES). Part 11, as it is commonly called, defines the criteria under which electronic records and electronic signatures are considered trustworthy, reliable, and equivalent to paper records (Title 21 CFR Part 11 Section 11.1(a)).[1]

21 CFR Part 11 applies to drug makers, medical device manufacturers, biotech companies, biologics developers, CROs, and other FDA-regulated industries, with some specific exceptions.[2] It requires that they implement controls, including audits, system validations, audit trails, electronic signatures, and documentation for software and systems involved in processing the electronic data that FDA predicate rules require them to maintain. A predicate rule is any requirement set forth in the Federal Food, Drug and Cosmetic Act, the Public Health Service Act, or any FDA regulation other than Part 11.[3]

The rule also applies to submissions made to the FDA in electronic format (e.g., a New Drug Application) but not to paper submissions by electronic methods (i.e., faxes). It specifically does not apply the Part 11 record-retention requirements to trackbacks by food manufacturers. Most food manufacturers are not otherwise explicitly required to keep detailed records, but electronic documentation kept for HACCP and similar requirements must meet these requirements.

Broad sections of the regulation have been challenged as "very expensive and for some applications almost impractical",[4] and the FDA has stated in guidance that it will exercise enforcement discretion on many parts of the rule. This has led to confusion about exactly what is required, and the rule is being revised. In practice, the requirements on access controls are the only part routinely enforced.[citation needed]
The "predicate rules", which required organizations to keep records in the first place, are still in effect. If electronic records are illegible, inaccessible, or corrupted, manufacturers are still subject to those requirements. If a regulated firm keeps "hard copies" of all required records, those paper documents can be considered the authoritative document for regulatory purposes, and the computer system is not in scope for electronic records requirements—though systems that control processes subject to predicate rules still require validation.[5] Firms should be careful when claiming that the "hard copy" of required records is the authoritative document. For the "hard copy" produced from an electronic source to be the authoritative document, it must be a complete and accurate copy of the electronic source, and the manufacturer must use the hard copy (rather than electronic versions stored in the system) of the records for regulated activities. The technical architecture of current computer systems makes it increasingly difficult to satisfy this complete and accurate copy requirement.[6] Various keynote speeches by FDA insiders early in the 21st century (in addition to high-profile audit findings focusing on computer system compliance) resulted in many companies scrambling to mount a defense against rule enforcement that they were procedurally and technologically unprepared for. Many software and instrumentation vendors released Part 11 "compliant" updates that were either incomplete or insufficient to fully comply with the rule. Complaints about the wasting of critical resources and non-value-added aspects, in addition to confusion within the drug, medical device, biotech/biologic and other industries about the true scope and enforcement aspects of Part 11, resulted in the FDA's 2003 release of the guidance Part 11, Electronic Records; Electronic Signatures — Scope and Application. This document was intended to clarify how Part 11 should be implemented and would be enforced.
But, as with all FDA guidance, it was not intended to convey the full force of law; rather, it expressed the FDA's "current thinking" on Part 11 compliance. Many within the industry, while pleased with the more limited scope defined in the guidance, complained that, in some areas, the 2003 guidance contradicted requirements in the 1997 Final Rule. In May 2007, the FDA issued the final version of its guidance on computerized systems in clinical investigations.[7] This guidance supersedes the guidance of the same name dated April 1999, and supplements the guidance for industry on Part 11, Electronic Records; Electronic Signatures — Scope and Application and the Agency's international harmonization efforts when applying these guidances to source data generated at clinical study sites. The FDA had previously announced that a new Part 11 would be released in late 2006, but has since pushed that release date back and has not announced a revised time of release. John Murray, a member of the Part 11 Working Group (the team at the FDA developing the new Part 11), has publicly stated that the timetable for release is "flexible".
https://en.wikipedia.org/wiki/21_CFR_11
In cryptography, X.509 is an International Telecommunication Union (ITU) standard defining the format of public key certificates.[1] X.509 certificates are used in many Internet protocols, including TLS/SSL, which is the basis for HTTPS,[2] the secure protocol for browsing the web. They are also used in offline applications, like electronic signatures.[3] An X.509 certificate binds an identity to a public key using a digital signature. A certificate contains an identity (a hostname, or an organization, or an individual) and a public key (RSA, DSA, ECDSA, ed25519, etc.), and is either signed by a certificate authority or is self-signed. When a certificate is signed by a trusted certificate authority, or validated by other means, someone holding that certificate can use the public key it contains to establish secure communications with another party, or validate documents digitally signed by the corresponding private key. X.509 also defines certificate revocation lists, which are a means to distribute information about certificates that have been deemed invalid by a signing authority, as well as a certification path validation algorithm, which allows for certificates to be signed by intermediate CA certificates, which are, in turn, signed by other certificates, eventually reaching a trust anchor. X.509 is defined by the ITU Telecommunication Standardization Sector (ITU-T), in Study Group 17 (SG17), and is based on Abstract Syntax Notation One (ASN.1), another ITU-T standard. X.509 was initially issued on July 3, 1988, and was begun in association with the X.500 standard. Its first tasks were providing users with secure access to information resources and avoiding a cryptographic man-in-the-middle attack. It assumes a strict hierarchical system of certificate authorities (CAs) for issuing the certificates. This contrasts with web of trust models, like PGP, where anyone (not just special CAs) may sign and thus attest to the validity of others' key certificates.
Version 3 of X.509 includes the flexibility to support other topologies like bridges and meshes.[2] It can be used in a peer-to-peer, OpenPGP-like web of trust,[citation needed] but was rarely used that way as of 2004. The X.500 system has only been implemented by sovereign nations for state identity information sharing treaty fulfillment purposes, and the IETF's Public-Key Infrastructure (X.509) (PKIX) working group has adapted the standard to the more flexible organization of the Internet. In fact, the term X.509 certificate usually refers to the IETF's PKIX certificate and CRL profile of the X.509 v3 certificate standard, as specified in RFC 5280, commonly called PKIX for Public Key Infrastructure (X.509).[4] An early issue with Public Key Infrastructure (PKI) and X.509 certificates was the well-known "which directory" problem: the client does not know where to fetch missing intermediate certificates, because the global X.500 directory never materialized. For example, early web servers sent only the web server's certificate to the client. Clients that lacked an intermediate CA certificate, or did not know where to find one, failed to build a valid path from the CA to the server's certificate. To work around the problem, web servers now send all the intermediate certificates along with the web server's certificate.[5] While PKIX refers to the IETF's or Internet's PKI standard, there are many other PKIs with different policies. For example, the US Government has its own PKI with its own policies, and the CA/Browser Forum has its own PKI with its own policies. The US Government's PKI specification is a massive book of over 2,500 pages. If an organization's PKI diverges too much from that of the IETF or CA/Browser Forum, then the organization risks losing interoperability with common tools like web browsers, cURL, and Wget.
For example, if a PKI has a policy of only issuing certificates on Monday, then common tools like cURL and Wget will not enforce the policy, and will allow a certificate issued on a Tuesday.[5] X.509 certificates bind an identity to a public key using a digital signature. In the X.509 system, there are two types of certificates: CA certificates and end-entity certificates. A CA certificate can issue other certificates. The top-level, self-signed CA certificate is sometimes called the root CA certificate. Other CA certificates are called intermediate CA or subordinate CA certificates. An end-entity certificate identifies a user, like a person, organization or business. An end-entity certificate cannot issue other certificates. An end-entity certificate is sometimes called a leaf certificate, since no other certificates can be issued below it. An organization that wants a signed certificate requests one from a CA using a protocol like Certificate Signing Request (CSR), Simple Certificate Enrollment Protocol (SCEP) or Certificate Management Protocol (CMP). The organization first generates a key pair, keeping the private key secret and using it to sign the CSR. The CSR contains information identifying the applicant, the applicant's public key that is used to verify the signature of the CSR, and the Distinguished Name (DN) that is unique for the person, organization or business. The CSR may be accompanied by other credentials or proofs of identity required by the certificate authority. The CSR is validated by a Registration Authority (RA), and then the certification authority issues a certificate binding a public key to a particular distinguished name. The registration authority and certification authority roles are usually handled by separate business units under separation of duties to reduce the risk of fraud. An organization's trusted root certificates can be distributed to all employees so that they can use the company PKI system.
Browsers such as Internet Explorer, Firefox, Opera, Safari and Chrome come with a predetermined set of root certificates pre-installed, so SSL certificates from major certificate authorities will work instantly; in effect the browsers' developers determine which CAs are trusted third parties for the browsers' users. For example, Firefox provides a CSV and/or HTML file containing a list of Included CAs.[8] X.509 and RFC 5280 also include standards for certificate revocation list (CRL) implementations. Another IETF-approved way of checking a certificate's validity is the Online Certificate Status Protocol (OCSP). Firefox 3.0 enabled OCSP checking by default, as did versions of Windows from at least Vista and later.[9] The structure foreseen by the standards is expressed in a formal language, Abstract Syntax Notation One (ASN.1). An X.509 v3 digital certificate consists of a to-be-signed certificate body, a signature algorithm identifier, and the signature value; the body in turn carries the version, serial number, signature algorithm, issuer, validity period, subject, subject public key info, optional issuer and subject unique identifiers, and optional extensions. The Extensions field, if present, is a sequence of one or more certificate extensions.[10]: §4.1.2.9 Each extension has its own unique ID, expressed as an object identifier (OID), together with either a critical or non-critical indication. A certificate-using system must reject the certificate if it encounters a critical extension that it does not recognize, or a critical extension that contains information that it cannot process. A non-critical extension may be ignored if it is not recognized, but must be processed if it is recognized.[10]: §4.2 The structure of version 1 is given in RFC 1422. The inner format of issuer and subject unique identifiers is specified in the X.520 The Directory: Selected attribute types recommendation. ITU-T introduced issuer and subject unique identifiers in version 2 to permit the reuse of issuer or subject names after some time. An example of reuse is when a CA goes bankrupt and its name is deleted from the country's public list.
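The certificate layout described above can be sketched in ASN.1, the notation the standard itself uses; this excerpt is abridged from RFC 5280 §4.1 (supporting type definitions omitted):

```
Certificate  ::=  SEQUENCE  {
     tbsCertificate       TBSCertificate,
     signatureAlgorithm   AlgorithmIdentifier,
     signatureValue       BIT STRING  }

TBSCertificate  ::=  SEQUENCE  {
     version         [0]  EXPLICIT Version DEFAULT v1,
     serialNumber         CertificateSerialNumber,
     signature            AlgorithmIdentifier,
     issuer               Name,
     validity             Validity,
     subject              Name,
     subjectPublicKeyInfo SubjectPublicKeyInfo,
     issuerUniqueID  [1]  IMPLICIT UniqueIdentifier OPTIONAL,
                          -- If present, version MUST be v2 or v3
     subjectUniqueID [2]  IMPLICIT UniqueIdentifier OPTIONAL,
                          -- If present, version MUST be v2 or v3
     extensions      [3]  EXPLICIT Extensions OPTIONAL
                          -- If present, version MUST be v3
     }
```

Note that the CA's signature covers only the tbsCertificate field; the outer signatureAlgorithm must match the signature algorithm named inside the signed body.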
After some time another CA with the same name may register itself, even though it is unrelated to the first one. However, the IETF recommends that no issuer and subject names be reused. Therefore, version 2 is not widely deployed in the Internet.[citation needed] Extensions were introduced in version 3. A CA can use extensions to issue a certificate only for a specific purpose (e.g. only for signing digital objects). In all versions, the serial number must be unique for each certificate issued by a specific CA (as mentioned in RFC 5280). RFC 5280 (and its predecessors) defines a number of certificate extensions which indicate how the certificate should be used. Most of them are arcs from the joint-iso-ccitt(2) ds(5) id-ce(29) OID. Some of the most common, defined in section 4.2.1, are Basic Constraints, Key Usage, Extended Key Usage, Subject Alternative Name, and the Authority and Subject Key Identifiers. In general, when using RFC 5280, if a certificate has several extensions restricting its use, all restrictions must be satisfied for a given use to be appropriate. The RFC gives the specific example of a certificate containing both keyUsage and extendedKeyUsage: in this case, both must be processed and the certificate can only be used if both extensions are coherent in specifying the usage of a certificate. For example, NSS uses both extensions to specify certificate usage.[11] Certification authorities operating under the CA/Browser Forum's PKI issue certificates with varying levels of validation. The different validation levels provide different degrees of assurance that a certificate represents what it is supposed to. For example, a web server can be validated at the lowest level of assurance using an email exchange, called Domain Validation (DV), or at a higher level using more detailed methods, called Extended Validation (EV). In practice, a DV certificate means a certificate was issued for a domain like example.com after someone responded to an email sent to webmaster@example.com.
An EV certificate means a certificate was issued for a domain like example.com, and a company like Example, LLC is the owner of the domain, and the owner was verified by, for example, its articles of incorporation. Extended validation does not add any additional security controls, so the secure channel setup using an EV certificate is not "stronger" than a channel setup using a different level of validation like DV. Extended validation is signaled in a certificate using an X.509 v3 extension. Each CA uses a different object identifier (OID) to assert extended validation. There is no single OID to indicate extended validation, which complicates user agent programming: each user agent must have a list of OIDs that indicate extended validation. The CA/Browser Forum's PKI recognizes extended validation, and many browsers provide visual feedback to the user to indicate a site provides an EV certificate. Other PKIs, like the Internet's PKI (PKIX), do not place any special emphasis on extended validation. Tools using PKIX policies, like cURL and Wget, simply treat an EV certificate like any other certificate. Security expert Peter Gutmann states that CAs created EV certificates to restore profit levels after a race to the bottom cut into profits. During the race to the bottom, CAs cut prices to lure consumers into purchasing their certificates. As a result, profits were reduced and CAs dropped the level of validation they were performing, to the point that there were nearly no assurances on a certificate.[5] There are several commonly used filename extensions for X.509 certificates. Some of these extensions are also used for other data such as private keys. PKCS#7 is a standard for signing or encrypting (officially called "enveloping") data. Since a certificate is needed to verify signed data, it is possible to include certificates in the SignedData structure.
A certificate chain (see the equivalent concept of "certification path" defined by RFC 5280 section 3.2) is a list of certificates (usually starting with an end-entity certificate) followed by one or more CA certificates (usually the last one being a self-signed certificate), with the following properties: each certificate except the last is signed by the entity identified by the next certificate in the chain, and the last certificate is the trust anchor, conveyed by some trustworthy procedure. Certificate chains are used in order to check that the public key (PK) contained in a target certificate (the first certificate in the chain) and other data contained in it effectively belong to its subject. In order to ascertain this, the signature on the target certificate is verified by using the PK contained in the following certificate, whose signature is verified using the next certificate, and so on until the last certificate in the chain is reached. As the last certificate is a trust anchor, successfully reaching it will prove that the target certificate can be trusted. The description in the preceding paragraph is a simplified view of the certification path validation process as defined by RFC 5280 section 6, which involves additional checks, such as verifying validity dates on certificates, looking up CRLs, etc. When examining how certificate chains are built and validated, it is important to note that a concrete certificate can be part of very different certificate chains (all of them valid). This is because several CA certificates can be generated for the same subject and public key, but be signed with different private keys (from different CAs, or different private keys from the same CA). So, although a single X.509 certificate can have only one issuer and one CA signature, it can be validly linked to more than one certificate, building completely different certificate chains.
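The chain-walking procedure just described can be sketched with a toy model, in which a hash keyed by the issuer's "public key" stands in for a real signature. This illustrates only the control flow of path validation (verify each certificate with the key carried in the next one, ending at the trust anchor); it is not real cryptography, and the certificate fields and key strings are made up for the example.

```python
import hashlib

def toy_sign(data: str, issuer_key: str) -> str:
    # Stand-in for a real signature: a hash binding the data to the issuer key.
    return hashlib.sha256(f"{issuer_key}|{data}".encode()).hexdigest()

def toy_verify(cert: dict, issuer_key: str) -> bool:
    # A certificate "verifies" if its signature matches what the issuer key produces.
    return cert["sig"] == toy_sign(cert["subject"] + cert["pubkey"], issuer_key)

def validate_chain(chain: list, trust_anchor_key: str) -> bool:
    # chain[0] is the end-entity certificate. Each certificate is verified
    # with the public key carried in the next one; the last certificate in
    # the chain is verified against the trust anchor's key.
    issuer_keys = [c["pubkey"] for c in chain[1:]] + [trust_anchor_key]
    return all(toy_verify(cert, key) for cert, key in zip(chain, issuer_keys))

root_key = "root-pub"
intermediate = {"subject": "CA2", "pubkey": "ca2-pub",
                "sig": toy_sign("CA2" + "ca2-pub", root_key)}
leaf = {"subject": "example.com", "pubkey": "leaf-pub",
        "sig": toy_sign("example.com" + "leaf-pub", intermediate["pubkey"])}

assert validate_chain([leaf, intermediate], root_key)         # valid path
assert not validate_chain([leaf, intermediate], "other-key")  # wrong anchor fails
```

The same leaf certificate would also validate through any other intermediate carrying the same subject and public key, which is exactly the multiple-chain situation the paragraph above describes.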
This is crucial for cross-certification between PKIs and other applications.[13] For example, to ensure that user certificates existing in PKI 2 (like "User 2") are trusted by PKI 1, CA1 generates a certificate (cert2.1) containing the public key of CA2.[14] Now both cert2 and cert2.1 have the same subject and public key, so there are two valid chains for cert2.2 (User 2): "cert2.2 → cert2" and "cert2.2 → cert2.1 → cert1". Similarly, CA2 can generate a certificate (cert1.1) containing the public key of CA1 so that user certificates existing in PKI 1 (like "User 1") are trusted by PKI 2. As Understanding Certification Path Construction (PDF, PKI Forum, September 2002) explains: "To allow for graceful transition from the old signing key pair to the new signing key pair, the CA should issue a certificate that contains the old public key signed by the new private signing key and a certificate that contains the new public key signed by the old private signing key. Both of these certificates are self-issued, but neither is self-signed. Note that these are in addition to the two self-signed certificates (one old, one new)." Since both cert1 and cert3 contain the same public key (the old one), there are two valid certificate chains for cert5: "cert5 → cert1" and "cert5 → cert3 → cert2", and analogously for cert6. This allows old user certificates (such as cert5) and new certificates (such as cert6) to be trusted indifferently by a party holding either the new root CA certificate or the old one as trust anchor during the transition to the new CA keys.[15] This is an example of a decoded X.509 certificate that was used in the past by wikipedia.org and several other Wikipedia websites. It was issued by GlobalSign, as stated in the Issuer field. Its Subject field describes Wikipedia as an organization, and its Subject Alternative Name (SAN) field for DNS describes the hostnames for which it could be used.
The Subject Public Key Info field contains an ECDSA public key, while the signature at the bottom was generated by GlobalSign's RSA private key. (The signatures in these examples are truncated.) To validate this end-entity certificate, one needs an intermediate certificate that matches its Issuer and Authority Key Identifier. In a TLS connection, a properly configured server would provide the intermediate as part of the handshake. However, it is also possible to retrieve the intermediate certificate by fetching the "CA Issuers" URL from the end-entity certificate. This is an example of an intermediate certificate belonging to a certificate authority. This certificate signed the end-entity certificate above, and was signed by the root certificate below. Note that the subject field of this intermediate certificate matches the issuer field of the end-entity certificate that it signed. Also, the "subject key identifier" field in the intermediate matches the "authority key identifier" field in the end-entity certificate. This is an example of a self-signed root certificate representing a certificate authority. Its issuer and subject fields are the same, and its signature can be validated with its own public key. Validation of the trust chain has to end here. If the validating program has this root certificate in its trust store, the end-entity certificate can be considered trusted for use in a TLS connection. Otherwise, the end-entity certificate is considered untrusted. There are a number of publications about PKI problems by Bruce Schneier, Peter Gutmann and other security experts.[17][18][19] Implementations suffer from design flaws, bugs, differing interpretations of standards, and lack of interoperability between different standards. Digital signature systems depend on secure cryptographic hash functions to work.
When a public key infrastructure allows the use of a hash function that is no longer secure, an attacker can exploit weaknesses in the hash function to forge certificates. Specifically, if an attacker is able to produce a hash collision, they can convince a CA to sign a certificate with innocuous contents, where the hash of those contents is identical to the hash of another, malicious set of certificate contents, created by the attacker with values of their choosing. The attacker can then append the CA-provided signature to their malicious certificate contents, resulting in a malicious certificate that appears to be signed by the CA. Because the malicious certificate contents are chosen solely by the attacker, they can have different validity dates or hostnames than the innocuous certificate. The malicious certificate can even contain a "CA: true" field making it able to issue further trusted certificates. Exploiting a hash collision to forge X.509 signatures requires that the attacker be able to predict the data that the certificate authority will sign. This can be somewhat mitigated by the CA generating a random component in the certificates it signs, typically the serial number. The CA/Browser Forum has required serial number entropy in its Baseline Requirements Section 7.1 since 2011.[38] As of January 1, 2016, the Baseline Requirements forbid issuance of certificates using SHA-1. As of early 2017, Chrome[39] and Firefox[40] reject certificates that use SHA-1. As of May 2017, both Edge[41] and Safari[42] also reject SHA-1 certificates. OpenSSL began rejecting SHA-1 certificates by default in version 3.0, released September 2021.[43] In 1995, the Internet Engineering Task Force in conjunction with the National Institute of Standards and Technology[48] formed the Public-Key Infrastructure (X.509) working group. The working group, concluded in June 2014,[49] is commonly referred to as "PKIX".
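The serial-number mitigation amounts to a few lines of code: drawing the serial from a cryptographically secure random source denies the attacker the predictable signed prefix a collision attack needs. The helper below is an illustrative sketch, not any CA's actual code; the 64-bit figure follows the Baseline Requirements minimum.

```python
import secrets

def random_serial(bits: int = 64) -> int:
    # CA/Browser Forum Baseline Requirements (Section 7.1, since 2011):
    # certificate serial numbers must contain at least 64 bits of output
    # from a CSPRNG, and must be positive.
    serial = secrets.randbits(bits)
    return serial | 1  # keep the value non-zero (zero serials are forbidden)

s = random_serial()
assert 0 < s < 2**64
```

Because the serial number sits near the start of the to-be-signed certificate body, this randomness lands inside the data being hashed and spoils the attacker's precomputed collision.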
It produced RFCs and other standards documentation on using and deploying X.509 in practice. In particular it produced RFC 3280 and its successor RFC 5280, which define how to use X.509 in Internet protocols. TLS/SSL and HTTPS use the RFC 5280 profile of X.509, as do S/MIME (Secure Multipurpose Internet Mail Extensions) and the EAP-TLS method for WiFi authentication. Any protocol that uses TLS, such as SMTP, POP, IMAP, LDAP, XMPP, and many more, inherently uses X.509. IPsec can use the RFC 4945 profile for authenticating peers. The OpenCable security specification defines its own profile of X.509 for use in the cable industry. Devices like smart cards and TPMs often carry certificates to identify themselves or their owners. These certificates are in X.509 form. The WS-Security standard defines authentication either through TLS or through its own certificate profile.[16] Both methods use X.509. The Microsoft Authenticode code signing system uses X.509 to identify authors of computer programs. The Secure Boot feature of UEFI uses X.509 to authenticate UEFI drivers or bootloaders during booting, and to disallow blocklisted drivers or bootloaders (by using the forbidden signature, or dbx, database).[50] The OPC UA industrial automation communication standard uses X.509. SSH generally uses a Trust On First Use security model and does not need certificates. However, the popular OpenSSH implementation does support a CA-signed identity model based on its own non-X.509 certificate format.[51]
https://en.wikipedia.org/wiki/X.509
An advanced electronic signature (AES or AdES) is an electronic signature that has met the requirements set forth under EU Regulation No 910/2014 (eIDAS regulation) on electronic identification and trust services for electronic transactions in the European Single Market.[1] eIDAS created standards for the use of electronic signatures so that they could be used securely when conducting business online, such as an electronic fund transfer or official business across borders with EU Member States.[2] The advanced electronic signature is one of the standards outlined in eIDAS. For an electronic signature to be considered advanced, it must meet several requirements.[3][4] Advanced electronic signatures that are compliant with eIDAS may be technically implemented through the AdES Baseline Profiles that have been developed by the European Telecommunications Standards Institute (ETSI).[3] The implementation of advanced electronic signatures under the specification of eIDAS serves several purposes. Business and public-service processes, even those that cross borders, can be safely expedited by using electronic signing. With eIDAS, EU States are required to establish "points of single contact" (PSCs) for trust services that ensure the electronic ID schemes can be used in public sector transactions that occur across borders, including access to healthcare information across borders.[3] In the past, when signing a document or message, the signatory would sign it and then return it to its intended recipient through the postal service, via facsimile service, or by scanning and attaching it to an email. This could lead to delays and, of course, the possibility that signatures could be forged and documents altered, especially when multiple signatures from different people located in different locations are required.
The process of using an advanced electronic signature saves time, is legally binding, and assures a high level of technical security.[3][7] Following Article 25 (1) of the eIDAS regulation,[3] an advanced electronic signature shall "not be denied legal effect and admissibility as evidence in legal proceedings". However, it will reach a higher probative value when enhanced to the level of a qualified electronic signature. By adding a certificate that has been issued by a qualified trust service provider that attests to the authenticity of the qualified signature, the upgraded advanced signature then carries, according to Article 24 (2) of the eIDAS Regulation,[3] the same legal value as a handwritten signature.[1] However, this is only regulated in the European Union, and similarly through ZertES in Switzerland. A qualified electronic signature is not defined in the United States.[8][9]
https://en.wikipedia.org/wiki/Advanced_electronic_signature
In cryptography a blind signature, as introduced by David Chaum,[1] is a form of digital signature in which the content of a message is disguised (blinded) before it is signed. The resulting blind signature can be publicly verified against the original, unblinded message in the manner of a regular digital signature. Blind signatures are typically employed in privacy-related protocols where the signer and message author are different parties. Examples include cryptographic election systems and digital cash schemes. An often-used analogy to the cryptographic blind signature is the physical act of a voter enclosing a completed anonymous ballot in a special carbon-paper-lined envelope that has the voter's credentials pre-printed on the outside. An official verifies the credentials and signs the envelope, thereby transferring his signature to the ballot inside via the carbon paper. Once signed, the package is given back to the voter, who transfers the now-signed ballot to a new unmarked normal envelope. Thus, the signer does not view the message content, but a third party can later verify the signature and know that the signature is valid within the limitations of the underlying signature scheme. Blind signatures can also be used to provide unlinkability, which prevents the signer from linking the blinded message it signs to a later un-blinded version that it may be called upon to verify. In this case, the signer's response is first "un-blinded" prior to verification in such a way that the signature remains valid for the un-blinded message. This can be useful in schemes where anonymity is required. Blind signature schemes can be implemented using a number of common public key signing schemes, for instance RSA and DSA. To perform such a signature, the message is first "blinded", typically by combining it in some way with a random "blinding factor". The blinded message is passed to a signer, who then signs it using a standard signing algorithm.
The resulting message, along with the blinding factor, can be later verified against the signer's public key. In some blind signature schemes, such as RSA, it is even possible to remove the blinding factor from the signature before it is verified. In these schemes, the final output (message/signature) of the blind signature scheme is identical to that of the normal signing protocol. Blind signature schemes see a great deal of use in applications where sender privacy is important. This includes various "digital cash" schemes and voting protocols. For example, the integrity of some electronic voting system may require that each ballot be certified by an election authority before it can be accepted for counting; this allows the authority to check the credentials of the voter to ensure that they are allowed to vote, and that they are not submitting more than one ballot. Simultaneously, it is important that this authority does not learn the voter's selections. An unlinkable blind signature provides this guarantee, as the authority will not see the contents of any ballot it signs, and will be unable to link the blinded ballots it signs back to the un-blinded ballots it receives for counting. Blind signature schemes exist for many public key signing protocols. More formally, a blind signature scheme is a cryptographic protocol that involves two parties, a user Alice who wants to obtain signatures on her messages, and a signer Bob who is in possession of his secret signing key. At the end of the protocol Alice obtains Bob's signature on m without Bob learning anything about the message. This intuition of not learning anything is hard to capture in mathematical terms. The usual approach is to show that for every (adversarial) signer, there exists a simulator that can output the same information as the signer. This is similar to the way zero-knowledge is defined in zero-knowledge proof systems.[2]: 235 One of the simplest blind signature schemes is based on RSA signing.
A traditional RSA signature is computed by raising the message m to the secret exponent d modulo the public modulus N. The blind version uses a random value r, such that r is relatively prime to N (i.e. gcd(r, N) = 1). r is raised to the public exponent e modulo N, and the resulting value r^e mod N is used as a blinding factor. The author of the message computes the product of the message and blinding factor, i.e. m′ = m · r^e (mod N), and sends the resulting value m′ to the signing authority. Because r is a random value and the mapping r ↦ r^e mod N is a permutation, it follows that r^e mod N is random too. This implies that m′ does not leak any information about m. The signing authority then calculates the blinded signature s′ as s′ = (m′)^d (mod N). s′ is sent back to the author of the message, who can then remove the blinding factor to reveal s, the valid RSA signature of m: s = s′ · r^(−1) (mod N). This works because RSA keys satisfy the equation r^(ed) ≡ r (mod N), and thus s ≡ s′ · r^(−1) ≡ (m · r^e)^d · r^(−1) ≡ m^d · r^(ed) · r^(−1) ≡ m^d (mod N); hence s is indeed the signature of m. In practice, the property that signing one blinded message produces at most one valid signed message is usually desired. This means one vote per signed ballot in elections, for example. This property does not hold for the simple scheme described above: the original message and the unblinded signature is valid, but so is the blinded message and the blind signature, and possibly other combinations given a clever attacker. A solution to this is to blind sign a cryptographic hash of the message, not the message itself.[3] RSA is subject to the RSA blinding attack, through which it is possible to be tricked into decrypting a message by blind signing another message. Since the signing process is equivalent to decrypting with the signer's secret key, an attacker can provide a blinded version of a message m encrypted with the signer's public key, m′, for them to sign.
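The full blind/sign/unblind round trip can be checked with toy RSA numbers. This is a textbook illustration under stated assumptions: the tiny key (p = 61, q = 53) and the integer message are for demonstration only; real RSA uses moduli of 2048 bits or more, and (as noted above) one would blind-sign a hash of the message, not the message itself.

```python
import math
import random

# Toy RSA key (illustration only)
p, q = 61, 53
N = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # secret exponent (modular inverse, Python 3.8+)

m = 65                         # message, as an integer < N

# Author blinds: pick r coprime to N, compute m' = m * r^e mod N
while True:
    r = random.randrange(2, N)
    if math.gcd(r, N) == 1:
        break
m_blind = (m * pow(r, e, N)) % N

# Signer signs the blinded value without ever seeing m: s' = (m')^d mod N
s_blind = pow(m_blind, d, N)

# Author unblinds: s = s' * r^(-1) mod N is a valid signature on m
s = (s_blind * pow(r, -1, N)) % N

assert s == pow(m, d, N)       # identical to signing m directly
assert pow(s, e, N) == m       # verifies under the public key
```

The two final assertions restate the algebra above: s′ = (m·r^e)^d ≡ m^d·r (mod N), so multiplying by r^(−1) leaves exactly m^d, the ordinary RSA signature.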
The encrypted message would usually be some secret information which the attacker observed being sent encrypted under the signer's public key, and which the attacker wants to learn more about. When the attacker removes the blinding, the signed version will have the clear text:

m′ ≡ m^e (mod N)

where m′ is the encrypted version of the message. When the message is signed, the cleartext m is easily extracted:

s ≡ (m′)^d ≡ (m^e)^d ≡ m^{ed} ≡ m (mod N), since e·d ≡ 1 (mod φ(N)).

Note that φ(N) refers to Euler's totient function. The message is now easily obtained. This attack works because in this blind signature scheme the signer signs the message directly. By contrast, in an unblinded signature scheme the signer would typically use a padding scheme (e.g. by signing the result of a cryptographic hash function applied to the message, instead of signing the message itself); since the signer does not know the actual message, any padding scheme would produce an incorrect value when unblinded. Due to this multiplicative property of RSA, the same key should never be used for both encryption and signing purposes.
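The attack can be demonstrated with the same toy parameters. Variable names such as `secret` and `blinded` are illustrative only; a real attack would target a production-sized key that is used for both signing and decryption.

```python
# Demonstration of the RSA blinding attack described above
# (hypothetical tiny key, for illustration only).
import math

p, q = 61, 53
N = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

secret = 42
ciphertext = pow(secret, e, N)   # m' : secret encrypted under the signer's public key

# The attacker blinds the ciphertext so the signer cannot recognise it...
r = 99
assert math.gcd(r, N) == 1
blinded = (ciphertext * pow(r, e, N)) % N

# ...the signer naively "signs" it (raising to d is also RSA decryption)...
signed = pow(blinded, d, N)      # equals secret * r mod N

# ...and the attacker unblinds, recovering the plaintext:
recovered = (signed * pow(r, -1, N)) % N
assert recovered == secret
```

Signing a hash of the submitted value, rather than the raw value, defeats this particular trick, as the paragraph above explains.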
https://en.wikipedia.org/wiki/Blind_signature
A detached signature is a type of digital signature that is kept separate from its signed data, as opposed to being bundled together into a single file. This approach offers several advantages, such as preventing unauthorized modifications to the original data objects. However, there is a risk that the detached signature could become separated from its associated data, leaving the data without a verifiable signature.
https://en.wikipedia.org/wiki/Detached_signature
In cryptography, a public key certificate, also known as a digital certificate or identity certificate, is an electronic document used to prove the validity of a public key.[1][2] The certificate includes the public key and information about it, information about the identity of its owner (called the subject), and the digital signature of an entity that has verified the certificate's contents (called the issuer). If the device examining the certificate trusts the issuer and finds the signature to be a valid signature of that issuer, then it can use the included public key to communicate securely with the certificate's subject. In email encryption, code signing, and e-signature systems, a certificate's subject is typically a person or organization. However, in Transport Layer Security (TLS) a certificate's subject is typically a computer or other device, though TLS certificates may identify organizations or individuals in addition to their core role in identifying devices. TLS, sometimes called by its older name Secure Sockets Layer (SSL), is notable for being a part of HTTPS, a protocol for securely browsing the web. In a typical public-key infrastructure (PKI) scheme, the certificate issuer is a certificate authority (CA),[3] usually a company that charges customers a fee to issue certificates for them. By contrast, in a web of trust scheme, individuals sign each other's keys directly, in a format that performs a similar function to a public key certificate. In case of key compromise, a certificate may need to be revoked. The most common format for public key certificates is defined by X.509. Because X.509 is very general, the format is further constrained by profiles defined for certain use cases, such as Public Key Infrastructure (X.509) as defined in RFC 5280. The Transport Layer Security (TLS) protocol – as well as its outdated predecessor, the Secure Sockets Layer (SSL) protocol – ensures that the communication between a client computer and a server is secure.
The protocol requires the server to present a digital certificate, proving that it is the intended destination. The connecting client conducts certification path validation, ensuring, among other things, that the Subject field of the certificate identifies the primary hostname of the server as the Common Name. The hostname must be publicly accessible, not using private addresses or reserved domains.[4] A certificate may be valid for multiple hostnames (e.g., a domain and its subdomains). Such certificates are commonly called Subject Alternative Name (SAN) certificates or Unified Communications Certificates (UCC). These certificates contain the Subject Alternative Name field, though many CAs also put the names into the Subject Common Name field for backward compatibility. If some of the hostnames contain an asterisk (*), a certificate may also be called a wildcard certificate. Once the certification path validation is successful, the client can establish an encrypted connection with the server. Internet-facing servers, such as public web servers, must obtain their certificates from a trusted, public certificate authority (CA). Client certificates authenticate the client connecting to a TLS service, for instance to provide access control. Because most services provide access to individuals rather than devices, most client certificates contain an email address or personal name rather than a hostname. In addition, the certificate authority that issues the client certificate is usually the service provider to which the client connects, because it is the provider that needs to perform authentication. Some service providers even offer free SSL certificates as part of their packages.[5] While most web browsers support client certificates, the most common form of authentication on the Internet is a username and password pair. Client certificates are more common in virtual private networks (VPN) and Remote Desktop Services, where they authenticate devices.
In accordance with the S/MIME protocol, email certificates can both establish the message integrity and encrypt messages. To establish encrypted email communication, the communicating parties must have their digital certificates in advance. Each must send the other one digitally signed email and opt to import the sender's certificate. Some publicly trusted certificate authorities provide email certificates, but more commonly S/MIME is used when communicating within a given organization, and that organization runs its own CA, which is trusted by participants in that email system. A self-signed certificate is a certificate with a subject that matches its issuer, and a signature that can be verified by its own public key. Self-signed certificates have their own limited uses. They have full trust value when the issuer and the sole user are the same entity. For example, the Encrypting File System on Microsoft Windows issues a self-signed certificate on behalf of the encrypting user and uses it to transparently decrypt data on the fly. The digital certificate chain of trust starts with a self-signed certificate, called a root certificate, trust anchor, or trust root. A certificate authority self-signs a root certificate to be able to sign other certificates. An intermediate certificate has a similar purpose to the root certificate – its only use is to sign other certificates. However, an intermediate certificate is not self-signed; a root certificate or another intermediate certificate needs to sign it. An end-entity or leaf certificate is any certificate that cannot sign other certificates. For instance, TLS/SSL server and client certificates, email certificates, code signing certificates, and qualified certificates are all end-entity certificates. Subject Alternative Name (SAN) certificates are an extension to X.509 that allows various values to be associated with a security certificate using a subjectAltName field.[6] These values are called Subject Alternative Names (SANs).
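The root/intermediate/leaf hierarchy described above can be illustrated by the path-building step of validation: walking issuer links from a leaf certificate up to a trusted root. The sketch below is purely structural and uses hypothetical certificate data; real path validation additionally checks signatures, validity periods, and constraints.

```python
# Structural sketch of certification path building.
# Certificates are modelled as plain dicts (hypothetical data).

certs = {
    "leaf":         {"subject": "www.example.com",         "issuer": "Example Intermediate CA"},
    "intermediate": {"subject": "Example Intermediate CA", "issuer": "Example Root CA"},
    "root":         {"subject": "Example Root CA",         "issuer": "Example Root CA"},  # self-signed
}
by_subject = {c["subject"]: c for c in certs.values()}
trusted_roots = {"Example Root CA"}      # the client's trust anchors

def build_chain(leaf):
    """Follow issuer links until a self-signed cert, then check it is trusted."""
    chain = [leaf]
    current = leaf
    while current["issuer"] != current["subject"]:   # not yet at a self-signed cert
        issuer = by_subject.get(current["issuer"])
        if issuer is None:
            raise ValueError("chain is incomplete")
        chain.append(issuer)
        current = issuer
    if current["subject"] not in trusted_roots:
        raise ValueError("chain does not end at a trusted root")
    return chain

chain = build_chain(certs["leaf"])
assert [c["subject"] for c in chain] == [
    "www.example.com", "Example Intermediate CA", "Example Root CA"]
```

The loop terminates at the self-signed root precisely because, as noted above, a root certificate is the only certificate whose issuer equals its subject.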
Names include DNS names, email addresses, IP addresses, and URIs.[7] RFC 2818 (May 2000) specifies Subject Alternative Names as the preferred method of adding DNS names to certificates, deprecating the previous method of putting DNS names in the commonName field.[8] Google Chrome version 58 (March 2017) removed support for checking the commonName field at all, instead only looking at the SANs.[8] The SAN field can itself contain wildcards.[9] Not all vendors support or endorse mixing wildcards into SAN certificates.[10] A public key certificate which uses an asterisk * (the wildcard) in its domain name fragment is called a wildcard certificate. Through the use of *, a single certificate may be used for multiple sub-domains. It is commonly used for transport layer security in computer networking. For example, a single wildcard certificate for *.example.com will secure any first-level subdomain of example.com, such as www.example.com or mail.example.com. Instead of getting separate certificates for each subdomain, a single certificate can cover them all, reducing cost.[11] Because the wildcard only covers one level of subdomains (the asterisk does not match full stops),[12] a name such as test.login.example.com would not be covered by such a certificate.[13] Note possible exceptions by CAs; for example, the wildcard-plus cert by DigiCert contains an automatic "Plus" property for the naked domain example.com.[citation needed] Only a single level of subdomain matching is supported, in accordance with RFC 2818.[14] It is not possible to get a wildcard for an Extended Validation Certificate.[15] A workaround could be to add every virtual host name in the Subject Alternative Name (SAN) extension,[16][17] the major problem being that the certificate needs to be reissued whenever a new virtual server is added. (See Transport Layer Security § Support for name-based virtual servers for more information.) Wildcards can be added as domains in multi-domain certificates or Unified Communications Certificates (UCC).
In addition, wildcards themselves can have subjectAltName extensions, including other wildcards. For example, the wildcard certificate *.wikipedia.org has *.m.wikimedia.org as a Subject Alternative Name. Thus it secures www.wikipedia.org as well as the completely different website name meta.m.wikimedia.org.[18] RFC 6125 argues against wildcard certificates on security grounds, in particular "partial wildcards".[19] The wildcard applies only to one level of the domain name: *.example.com matches sub1.example.com but not example.com and not sub2.sub1.example.com. The wildcard may appear anywhere inside a label as a "partial wildcard" according to early specifications.[20] However, use of "partial-wildcard" certs is not recommended. As of 2011, partial wildcard support is optional, and is explicitly disallowed in SubjectAltName headers that are required for multi-name certificates.[21] All major browsers have deliberately removed support for partial-wildcard certificates;[22][23] they will result in an "SSL_ERROR_BAD_CERT_DOMAIN" error. Similarly, it is typical for standard libraries in programming languages not to support "partial-wildcard" certificates. For example, any "partial-wildcard" certificate will not work with the latest versions of both Python[24] and Go. Thus: a label that consists entirely of a wildcard is allowed only as the left-most label; a certificate with multiple wildcards in a name is not allowed; and a certificate with * plus a top-level domain (e.g. *.com) is too general and not allowed. International domain names encoded in ASCII (A-label) are labels that are ASCII-encoded and begin with xn--. URLs with international labels cannot contain wildcards.[25] A certificate contains a number of common fields, and most certificates contain many more. Note that in terms of a certificate's X.509 representation, a certificate is not "flat" but contains these fields nested in various structures within the certificate.
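The naming rules above — the wildcard only as the entire left-most label, matching exactly one level, and no partial wildcards — can be captured in a short matcher. This is a simplified sketch of the rules, not a replacement for a TLS library's hostname verification.

```python
# Simplified wildcard hostname matcher following the rules described above.

def hostname_matches(pattern: str, hostname: str) -> bool:
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if "*" in pattern:
        if p[0] != "*":
            return False       # partial or non-left-most wildcard: rejected
        if len(p) < 3:
            return False       # bare "*" or "*.<tld>" is too general
        # The wildcard stands for exactly one label, so label counts must match
        # and every label after the wildcard must match literally.
        return len(h) == len(p) and p[1:] == h[1:]
    return p == h              # no wildcard: exact match required

assert hostname_matches("*.example.com", "sub1.example.com")
assert not hostname_matches("*.example.com", "example.com")           # base domain not covered
assert not hostname_matches("*.example.com", "sub2.sub1.example.com") # only one level
assert not hostname_matches("f*.example.com", "foo.example.com")      # partial wildcard
assert not hostname_matches("*.com", "example.com")                   # too general
```

The check rejecting `f*.example.com` mirrors the browser behavior described above, where partial-wildcard certificates produce a bad-certificate-domain error.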
A decoded SSL/TLS certificate retrieved from SSL.com's website serves as an example. The issuer's common name (CN) is shown as SSL.com EV SSL Intermediate CA RSA R3, identifying it as an Extended Validation (EV) certificate. Validated information about the website's owner (SSL Corp) is located in the Subject field. The X509v3 Subject Alternative Name field contains a list of domain names covered by the certificate. The X509v3 Extended Key Usage and X509v3 Key Usage fields show all appropriate uses. In the European Union, (advanced) electronic signatures on legal documents are commonly performed using digital signatures with accompanying identity certificates. However, only qualified electronic signatures (which require using a qualified trust service provider and signature creation device) are given the same power as a physical signature. In the X.509 trust model, a certificate authority (CA) is responsible for signing certificates. These certificates act as an introduction between two parties, which means that a CA acts as a trusted third party. A CA processes requests from people or organizations requesting certificates (called subscribers), verifies the information, and potentially signs an end-entity certificate based on that information. To perform this role effectively, a CA needs to have one or more broadly trusted root certificates or intermediate certificates and the corresponding private keys. CAs may achieve this broad trust by having their root certificates included in popular software, or by obtaining a cross-signature from another CA delegating trust. Other CAs are trusted within a relatively small community, like a business, and are distributed by other mechanisms like Windows Group Policy. Certificate authorities are also responsible for maintaining up-to-date revocation information about certificates they have issued, indicating whether certificates are still valid.
They provide this information through Online Certificate Status Protocol (OCSP) and/or Certificate Revocation Lists (CRLs). Some of the larger certificate authorities in the market include IdenTrust, DigiCert, and Sectigo.[29] Some major software contains a list of certificate authorities that are trusted by default.[citation needed] This makes it easier for end-users to validate certificates, and easier for people or organizations that request certificates to know which certificate authorities can issue a certificate that will be broadly trusted. This is particularly important in HTTPS, where a web site operator generally wants to get a certificate that is trusted by nearly all potential visitors to their web site. The policies and processes a provider uses to decide which certificate authorities their software should trust are called root programs. The most influential root programs are those of Microsoft, Apple, Mozilla, and Google.[citation needed] Browsers other than Firefox generally use the operating system's facilities to decide which certificate authorities are trusted. So, for instance, Chrome on Windows trusts the certificate authorities included in the Microsoft Root Program, while on macOS or iOS, Chrome trusts the certificate authorities in the Apple Root Program.[30] Edge and Safari use their respective operating system trust stores as well, but each is only available on a single OS. Firefox uses the Mozilla Root Program trust store on all platforms. The Mozilla Root Program is operated publicly, and its certificate list is part of the open source Firefox web browser, so it is broadly used outside Firefox.[citation needed] For instance, while there is no common Linux Root Program, many Linux distributions, like Debian,[31] include a package that periodically copies the contents of the Firefox trust list, which is then used by applications. Root programs generally provide a set of valid purposes with the certificates they include.
For instance, some CAs may be considered trusted for issuing TLS server certificates, but not for code signing certificates. This is indicated with a set of trust bits in a root certificate storage system. A certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker would be able to exploit such a compromised or misissued certificate until expiry.[32] Hence, revocation is an important part of a public key infrastructure.[33] Revocation is performed by the issuing certificate authority, which produces a cryptographically authenticated statement of revocation.[34] For distributing revocation information to clients, the timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns.[35] If revocation information is unavailable (either due to accident or an attack), clients must decide whether to fail-hard and treat the certificate as if it is revoked (and so degrade availability) or to fail-soft and treat it as unrevoked (and allow attackers to sidestep revocation).[36] Due to the cost of revocation checks and the availability impact of potentially-unreliable remote services, web browsers limit the revocation checks they will perform, and will fail-soft where they do.[37] Certificate revocation lists are too bandwidth-costly for routine use, and the Online Certificate Status Protocol presents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.[33] The most common use of certificates is for HTTPS-based web sites. A web browser validates that an HTTPS web server is authentic, so that the user can feel secure that their interaction with the web site has no eavesdroppers and that the web site is who it claims to be. This security is important for electronic commerce.
In practice, a web site operator obtains a certificate by applying to a certificate authority with a certificate signing request. The certificate request is an electronic document that contains the web site name, company information and the public key. The certificate provider signs the request, thus producing a public certificate. During web browsing, this public certificate is served to any web browser that connects to the web site and proves to the web browser that the provider believes it has issued a certificate to the owner of the web site. As an example, when a user connects to https://www.example.com/ with their browser, if the browser does not give any certificate warning message, then the user can be theoretically sure that interacting with https://www.example.com/ is equivalent to interacting with the entity in contact with the email address listed in the public registrar under "example.com", even though that email address may not be displayed anywhere on the web site.[citation needed] No other surety of any kind is implied. Further, the relationship between the purchaser of the certificate, the operator of the web site, and the generator of the web site content may be tenuous and is not guaranteed.[citation needed] At best, the certificate guarantees uniqueness of the web site, provided that the web site itself has not been compromised (hacked) or the certificate issuing process subverted. A certificate provider can opt to issue three types of certificates, each requiring its own degree of vetting rigor. In order of increasing rigor (and naturally, cost) they are: Domain Validation, Organization Validation and Extended Validation. These rigors are loosely agreed upon by voluntary participants in the CA/Browser Forum.[citation needed] A certificate provider will issue a domain-validated (DV) certificate to a purchaser if the purchaser can demonstrate one vetting criterion: the right to administratively manage the affected DNS domain(s).
A certificate provider will issue an organization validation (OV) class certificate to a purchaser if the purchaser can meet two criteria: the right to administratively manage the domain name in question, and the organization's actual existence as a legal entity. A certificate provider publishes its OV vetting criteria through its certificate policy. To acquire an Extended Validation (EV) certificate, the purchaser must persuade the certificate provider of its legal identity, including manual verification checks by a human. As with OV certificates, a certificate provider publishes its EV vetting criteria through its certificate policy. Until 2019, major browsers such as Chrome and Firefox generally offered users a visual indication of the legal identity when a site presented an EV certificate. This was done by showing the legal name before the domain, with a bright green color to highlight the change. Most browsers have since deprecated this feature,[38][39] providing no visual difference to the user on the type of certificate used. This change followed security concerns raised by forensic experts and successful attempts to purchase EV certificates to impersonate famous organizations, proving the inefficacy of these visual indicators and highlighting potential abuses.[40] A web browser will give no warning to the user if a web site suddenly presents a different certificate, even if that certificate has a lower number of key bits, even if it has a different provider, and even if the previous certificate had an expiry date far into the future.[citation needed] Where certificate providers are under the jurisdiction of governments, those governments may have the freedom to order the provider to generate any certificate, such as for the purposes of law enforcement. Subsidiary wholesale certificate providers also have the freedom to generate any certificate.
All web browsers come with an extensive built-in list of trusted root certificates, many of which are controlled by organizations that may be unfamiliar to the user.[1] Each of these organizations is free to issue any certificate for any web site and has the guarantee that web browsers that include its root certificates will accept it as genuine. In this instance, end users must rely on the developer of the browser software to manage its built-in list of certificates and on the certificate providers to behave correctly and to inform the browser developer of problematic certificates. While uncommon, there have been incidents in which fraudulent certificates have been issued: in some cases, the browsers have detected the fraud; in others, some time passed before browser developers removed these certificates from their software.[41][42] The list of built-in certificates is also not limited to those provided by the browser developer: users (and to a degree applications) are free to extend the list for special purposes such as company intranets.[43] This means that if someone gains access to a machine and can install a new root certificate in the browser, that browser will recognize websites that use the inserted certificate as legitimate. For provable security, this reliance on something external to the system has the consequence that any public key certification scheme has to rely on some special setup assumption, such as the existence of a certificate authority.[44] In spite of the limitations described above, certificate-authenticated TLS is considered mandatory by all security guidelines whenever a web site hosts confidential information or performs material transactions.
This is because, in practice, in spite of the weaknesses described above, web sites secured by public key certificates are still more secure than unsecured http:// web sites.[45] The National Institute of Standards and Technology (NIST) Computer Security Division[46] provides guidance documents for public key certificates.
https://en.wikipedia.org/wiki/Public_key_certificate
Electronic signature allows users to electronically perform the actions for which they previously had to give a signature on paper. Estonia's digital signature system is the foundation for some of its most popular e-services, including registering a company online, e-banks, the e-voting system and electronic tax filing – essentially any services that require signatures to prove their validity.[1][2] The first digital signature was given in 2002. A number of freeware programs were released to end users and system integrators. All of the components of the software processed the same document format – the DigiDoc format.[3] As of October 2013, over 130 million digital signatures had been given in Estonia.[4] In September 2013 the European Commissioner for Digital Agenda, Neelie Kroes, gave her first digital signature with an Estonian test ID-card issued to her as a present.[5][6] In October 2014 the Estonian parliament passed a bill which gives any person, regardless of their citizenship or residency, the possibility to apply for an Estonian digital identity (e-Residency of Estonia) in order to give digital signatures and use Estonian government online services.[7] The law came into force on December 1, 2014. The nature and use of the digital signature in Estonia is regulated by the Digital Signature Act. The Estonian parliament, the Riigikogu, passed the Digital Signature Act on March 8, 2000, and it entered into force on December 15, 2000.[8] According to this legislation, a digital signature is equal to a hand-written signature. Pursuant to the Act it is also necessary to distinguish between valid and void digital signatures; any signatures given with a void or suspended certificate are null and void. The Digital Signature Act has been superseded by the EU-wide eIDAS Regulation since 2016.[9] eIDAS also mandates that the other EU member states accept Estonian e-signatures, as they do e-signatures from other member states.
The eIDAS Regulation also specifies that member states should use and accept signatures in the Associated Signature Containers (ASiC) format. All Estonian authorities are obliged to accept digitally signed documents. Users can create digitally signed documents with their ID-card, digital identity card or Mobile-ID using either the DigiDoc3 program, which is installed on the computer along with the ID-card software, the signing section of the State Portal www.eesti.ee, or the DigiDoc Portal. Digital signature support can be added to all applications and programs where it is required. Estonian digital signatures conform to the EU eIDAS Regulation (910/2014) with the strictest requirements (advanced electronic signature, secure signature creation device, qualified certificate, certification service provider (CSP) issuing qualified certificates).[10] Upon the issuance of ID-cards or Mobile-IDs, every user receives two certificates: one for authentication, the other for digital signing. The certificate may be compared to the specimen signature of a person – it is public and it can be used by anyone to examine whether the signature given by the person is authentic. The certificate also holds the personal data: name and personal identification code.[11] All certificates are different and correspond to the private keys of specific persons. The certificate can be used to examine digital signatures – if the certificate and the signature match mathematically (all the necessary calculations are performed by the computer on behalf of the user), it can be claimed that the signature has been given by the person named in the certificate.
https://en.wikipedia.org/wiki/Digital_signature_in_Estonia
An electronic lab notebook or electronic laboratory notebook (ELN) is a computer program designed to replace paper laboratory notebooks. Lab notebooks in general are used by scientists, engineers, and technicians to document research, experiments, and procedures performed in a laboratory. A lab notebook is often maintained to be a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation. Electronic lab notebooks offer many benefits to the user as well as organizations; they are easier to search, simplify data copying and backups, and support collaboration amongst many users.[1] ELNs can have fine-grained access controls, and can be more secure than their paper counterparts.[2] They also allow the direct incorporation of data from instruments, replacing the practice of printing out data to be stapled into a paper notebook.[3] ELNs can be divided into two categories: purpose-built and general-purpose. Solutions range from specialized programs designed from the ground up for use as an ELN, to modifications or direct use of more general programs. Examples of using more general software as an ELN include using OpenWetWare, a MediaWiki install (running the same software that Wikipedia uses), WordPress,[4] or the use of general note-taking software such as OneNote as an ELN.[5][3] ELNs come in many different forms. They can be standalone programs, use a client-server model, or be entirely web-based. Some use a lab-notebook approach, others resemble a blog. ELNs are embracing artificial intelligence and LLM technology to provide scientific AI chat assistants. A good many variations on the "ELN" acronym have appeared.[6] Differences between systems with different names are often subtle, with considerable functional overlap between them.
Examples include "ERN" (Electronic Research Notebook), "ERMS" (Electronic Resource (or Research or Records) Management System (or Software)), and "SDMS" (Scientific Data (or Document) Management System (or Software)). Ultimately, these types of systems all strive to do the same thing: capture, record, centralize and protect scientific data in a way that is highly searchable, historically accurate, and legally stringent, and which also promotes secure collaboration, greater efficiency, reduced mistakes and lowered total research costs. A good electronic laboratory notebook should offer a secure environment to protect the integrity of both data and process, whilst also affording the flexibility to adopt new processes or changes to existing processes without recourse to further software development. The package architecture should be of modular design, so as to minimize the validation costs of any subsequent changes that may be needed as requirements evolve. A good electronic laboratory notebook should be an "out of the box" solution that, as standard, has fully configurable forms to comply with the requirements of regulated analytical groups, through to a sophisticated ELN for the inclusion of structures, spectra, chromatograms, pictures, text, etc. where a preconfigured form is less appropriate. All data within the system may be stored in a database (e.g. MySQL, MS-SQL, Oracle) and be fully searchable. The system should enable data to be collected, stored and retrieved through any combination of forms or ELN that best meets the requirements of the user. The application should enable secure forms to be generated that accept laboratory data input via PCs and/or laptops/palmtops, and should be directly linked to electronic devices such as laboratory balances, pH meters, etc.
Networked or wireless communications should be accommodated by the package, allowing data to be interrogated, tabulated, checked, approved, stored and archived in compliance with the latest regulatory guidance and legislation. A system should also include a scheduling option for routine procedures such as equipment qualification and study-related timelines. It should include configurable qualification requirements to automatically verify that instruments have been cleaned and calibrated within a specified time period, that reagents have been quality-checked and have not expired, and that workers are trained and authorized to use the equipment and perform the procedures. The laboratory accreditation criteria found in the ISO 17025 standard need to be considered for the protection and computer backup of electronic records. These criteria can be found specifically in clause 4.13.1.4 of the standard.[7] Electronic lab notebooks used for development or research in regulated industries, such as medical devices or pharmaceuticals, are expected to comply with FDA regulations related to software validation. The purpose of the regulations is to ensure the integrity of the entries in terms of time, authorship, and content. Unlike ELNs for patent protection, the FDA is not concerned with patent interference proceedings, but with the avoidance of falsification. Typical provisions related to software validation are included in the medical device regulations at 21 CFR 820 (et seq.)[8] and Title 21 CFR Part 11.[9] Essentially, the requirements are that the software has been designed and implemented to be suitable for its intended purposes.
Evidence to show that this is the case is often provided by a Software Requirements Specification (SRS) setting forth the intended uses and the needs that the ELN will meet, together with one or more testing protocols that, when followed, demonstrate that the ELN meets the requirements of the specification and that the requirements are satisfied under worst-case conditions. Security, audit trails, prevention of unauthorized changes without substantial collusion of otherwise independent personnel (i.e., those having no interest in the content of the ELN, such as independent quality unit personnel) and similar tests are fundamental. Finally, one or more reports demonstrating the results of the testing in accordance with the predefined protocols are required prior to release of the ELN software for use. If the reports show that the software failed to satisfy any of the SRS requirements, then corrective and preventive action ("CAPA") must be undertaken and documented. Such CAPA may range from minor software revisions to changes in architecture or major revisions, and the CAPA activities themselves need to be documented. Aside from the requirement to follow such steps in regulated industry, this approach is generally good practice for the development and release of any software, to assure its quality and fitness for use. There are standards related to software development and testing that can be applied (see ref.).
https://en.wikipedia.org/wiki/Electronic_lab_notebook
Worldwide, legislation concerning the effect and validity of electronic signatures, including, but not limited to, cryptographic digital signatures, includes: In the EU, electronic signatures and related trust services are regulated by Regulation (EU) N°910/2014 on electronic identification and trust services for electronic transactions in the internal market (eIDAS Regulation). This regulation was adopted by the Council of the European Union on 23 July 2014. It became effective on 1 July 2016 and repealed the Electronic Signatures Directive 1999/93/EC. At the same date, any laws of EU member states that were inconsistent with eIDAS were also automatically repealed, replaced or modified. In contrast to the aforementioned directive (which allowed the EU member states to interpret it and transpose it into their own law), the eIDAS Regulation is directly effective in all member states. European Union Directive establishing the framework for electronic signatures: For an overview of the New Zealand law refer: - The Laws of New Zealand, Electronic Transactions, paras 16-18; or - Commercial Law, paras 8A.7.1-8A.7.4 (these sources are available on the LexisNexis subscription-only website). Court decisions discussing the effect and validity of digital signatures or digital signature-related legislation: Uruguay's laws cover both electronic and digital signatures: Turkey has had an Electronic Signature Law (TBMM.gov.tr) since 2004, modelled on European Union Directive 1999/93/EC. Turkey has a Government Certificate Authority (Kamu SM) for the internal use of all government agents, and three independent certificate authorities, all of which issue qualified digital signatures.
https://en.wikipedia.org/wiki/Electronic_signatures_and_law
Aadhaar eSign is an online electronic signature service in India that facilitates an Aadhaar holder digitally signing a document.[1] The signature service is facilitated by authenticating the Aadhaar holder via the Aadhaar-based e-KYC (electronic Know Your Customer) service.[2] To eSign a document, one has to have an Aadhaar card and a mobile number registered with Aadhaar. With these two things, an Indian citizen can sign a document remotely without being physically present. The notification[2] issued by the Government of India in this regard stipulates the procedure for the authentication of an electronic record by e-authentication technique using the Aadhaar e-KYC services. Organisations and individuals seeking to obtain the eSigning service can utilize the services of various service providers. There are empanelled service providers with whom organisations can register as an Application Service Provider after submitting the requisite documents, getting UAT access, building the application around the service and going through an IT audit by a CERT-In empanelled auditor.[4] However, the process of registering as an Application Service Provider is cumbersome, and requires large investments of time, money and resources to comply with the regulations and build a suitable application. Most organisations prefer using the services of plug-and-play gateway providers, who take responsibility for complying with the regulations, simplifying the process for the market.
https://en.wikipedia.org/wiki/ESign_(India)
In cryptography, server-based signatures are digital signatures in which a publicly available server participates in the signature creation process. This is in contrast to conventional digital signatures, based on public-key cryptography and public-key infrastructure, which assume that signers use their personal trusted computing bases to generate signatures without any communication with servers. Four different classes of server-based signatures have been proposed: 1. Lamport One-Time Signatures. Proposed in 1979 by Leslie Lamport.[1] Lamport one-time signatures are based on cryptographic hash functions. For signing a message, the signer just sends a list of hash values (outputs of a hash function) to a publishing server, so the signature process is very fast, though the size of the signature is many times larger than that of ordinary public-key signature schemes. 2. On-line/off-line Digital Signatures. First proposed in 1989 by Even, Goldreich and Micali[2][3][4] in order to speed up the signature creation procedure, which is usually much more time-consuming than verification. In the case of RSA, it may be one thousand times slower than verification. On-line/off-line digital signatures are created in two phases. The first phase is performed off-line, possibly even before the message to be signed is known. The second (message-dependent) phase is performed on-line and involves communication with a server. In the first (off-line) phase, the signer uses a conventional public-key digital signature scheme to sign a public key of the Lamport one-time signature scheme. In the second phase, a message is signed by using the Lamport signature scheme. Some later works[5][6][7][8][9][10][11] have improved the efficiency of the original solution by Even et al. 3. Server-Supported Signatures (SSS). Proposed in 1996 by Asokan, Tsudik and Waidner[12][13] in order to delegate the time-consuming operations of asymmetric cryptography from clients (ordinary users) to a server.
For ordinary users, the use of asymmetric cryptography is limited to signature verification, i.e. there is no pre-computation phase as in the case of on-line/off-line signatures. The main motivation was the use of low-performance mobile devices for creating digital signatures, considering that such devices could be too slow for creating ordinary public-key digital signatures, such as RSA. Clients use hash-chain-based authentication[14] to send their messages to a signature server in an authenticated way, and the server then creates a digital signature by using an ordinary public-key digital signature scheme. In SSS, signature servers are not assumed to be Trusted Third Parties (TTPs), because the transcript of the hash chain authentication phase can be used for non-repudiation purposes. In SSS, servers cannot create signatures in the name of their clients. 4. Delegate Servers (DS). Proposed in 2002 by Perrin, Bruns, Moreh and Olkin[15] in order to reduce the problems and costs related to individual private keys. In their solution, clients (ordinary users) delegate their private cryptographic operations to a Delegation Server (DS). Users authenticate to the DS and request it to sign messages on their behalf by using the server's own private key. The main motivation behind DS was that private keys are difficult for ordinary users to use and easy for attackers to abuse. Private keys are not memorable like passwords or derivable from a person like biometrics, and cannot be entered from keyboards like passwords. Private keys are mostly stored as files in computers or on smart-cards, which may be stolen by attackers and abused off-line. In 2003, Buldas and Saarepera[16] proposed a two-level architecture of delegation servers that addresses the trust issue by replacing trust with threshold trust via the use of threshold cryptosystems.
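The Lamport construction described in class 1 above can be sketched in a few lines. The following is a minimal illustration (not a production implementation) using SHA-256; the key layout, one pair of random preimages per message-digest bit, follows the textbook scheme:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(bits: int = 256):
    # Private key: two random 32-byte preimages per digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    # Public key: the hashes of those preimages, published in advance.
    pk = [(H(x0), H(x1)) for x0, x1 in sk]
    return sk, pk

def _digest_bits(msg: bytes, n: int):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def sign(sk, msg: bytes):
    # Reveal one preimage per digest bit; the key must never be reused.
    return [sk[i][b] for i, b in enumerate(_digest_bits(msg, len(sk)))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, _digest_bits(msg, len(pk)))))

sk, pk = keygen()
sig = sign(sk, b"one-time message")
ok = verify(pk, b"one-time message", sig)
```

Note the size asymmetry mentioned above: the signature reveals 256 preimages of 32 bytes each (8 KiB), orders of magnitude larger than an ordinary public-key signature.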
https://en.wikipedia.org/wiki/Server-based_signatures
Probabilistic Signature Scheme (PSS) is a cryptographic signature scheme designed by Mihir Bellare and Phillip Rogaway.[1] RSA-PSS is an adaptation of their work and is standardized as part of PKCS#1 v2.1. In general, RSA-PSS should be used as a replacement for RSA-PKCS#1 v1.5. PSS was specifically developed to allow modern methods of security analysis to prove that its security directly relates to that of the RSA problem. There is no such proof for the traditional PKCS#1 v1.5 scheme.
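The "probabilistic" part of the name can be seen directly: because a fresh random salt is folded into each signature, signing the same message twice yields different signatures that both verify. The sketch below assumes the third-party Python `cryptography` package is available; it is one common way to use RSA-PSS, not the only one:

```python
# Assumes the third-party "cryptography" package is installed.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"sign me"

# PSS padding: MGF1 mask generation and a maximal random salt.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

sig1 = key.sign(message, pss, hashes.SHA256())
sig2 = key.sign(message, pss, hashes.SHA256())
# sig1 != sig2 because of the random salt, yet both verify;
# verify() raises InvalidSignature on failure.
key.public_key().verify(sig1, message, pss, hashes.SHA256())
```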
https://en.wikipedia.org/wiki/Probabilistic_signature_scheme
The Cryptographic Message Syntax (CMS) is the IETF's standard for cryptographically protected messages. It can be used by cryptographic schemes and protocols to digitally sign, digest, authenticate or encrypt any form of digital data. CMS is based on the syntax of PKCS #7, which in turn is based on the Privacy-Enhanced Mail standard. The newest version of CMS (as of 2024) is specified in RFC 5652 (but also see RFC 5911 for updated ASN.1 modules conforming to ASN.1 2002, and RFC 8933 and RFC 9629 for updates to the standard). The architecture of CMS is built around certificate-based key management, such as the profile defined by the PKIX working group. CMS is used as the key cryptographic component of many other cryptographic standards, such as S/MIME, PKCS #12 and the RFC 3161 digital timestamping protocol. OpenSSL is open source software that can encrypt, decrypt, sign and verify, compress and uncompress CMS documents, using the openssl-cms command. CMS is regularly updated to address evolving security needs and emerging cryptographic algorithms.
https://en.wikipedia.org/wiki/Cryptographic_Message_Syntax
Forward anonymity is a property of a cryptographic system which prevents an attacker who has recorded past encrypted communications from discovering their contents and participants in the future. This property is analogous to forward secrecy. An example of a system which uses forward anonymity is a public-key cryptography system, where the public key is well-known and used to encrypt a message, and an unknown private key is used to decrypt it. In this system, one of the keys (the public key) is compromised by design, but messages and their participants remain unknown to anyone without the corresponding private key. In contrast, a system satisfies the perfect forward secrecy property if the compromise of one key by an attacker (and the consequent decryption of messages encrypted with that key) does not undermine the security of previously used keys. Forward secrecy does not refer to protecting the content of the message, but rather to the protection of keys used to decrypt messages. The term was originally introduced by Whitfield Diffie, Paul van Oorschot, and Michael James Wiener to describe a property of the STS (station-to-station) protocol involving a long-term secret, either a private key or a shared password.[1] Public-key cryptography is a common form of forward-anonymous system. It is used to pass encrypted messages, preventing any information about the message from being discovered if the message is intercepted by an attacker. It uses two keys, a public key and a private key. The public key is published, and is used by anyone to encrypt a plaintext message. The private key is not well known, and is used to decrypt ciphertext. Public-key cryptography is known as an asymmetric algorithm because different keys are used to perform opposing functions. Public-key cryptography is popular because, while it is computationally easy to create a pair of keys, it is extremely difficult to determine the private key knowing only the public key.
Therefore, the public key being well known does not allow intercepted messages to be decrypted. This is a forward-anonymous system because one compromised key (the public key) does not compromise the anonymity of the system. A variation of the public-key cryptography system is a web of trust, where each user has both a public and a private key. Messages sent are encrypted using the intended recipient's public key, and only this recipient's private key will decrypt the message. They are also signed with the sender's private key. This creates added security, as it becomes more difficult for an attacker to pretend to be a user: the lack of a private-key signature indicates a non-trusted user. A forward-anonymous system does not necessarily mean a wholly secure system. A successful cryptanalysis of a message or sequence of messages can still decode the information without the use of a private key or long-term secret. Forward anonymity, along with other privacy-protecting measures, received a burst of media attention after the leak of classified information by Edward Snowden, beginning in June 2013, which indicated that the NSA and FBI, through specially crafted backdoors in software and computer systems, were conducting mass surveillance over large parts of the population of the United States (see Mass surveillance in the United States), Europe, Asia, and other parts of the world. They justified this practice as an aid to catching predatory pedophiles.[2] Opponents of this practice argue that leaving a back door for law enforcement increases the risk of attackers being able to decrypt information, and question its legality under the US Constitution, specifically as a form of illegal search and seizure.[3]
https://en.wikipedia.org/wiki/Forward_anonymity
Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel.[1][2][3] This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography. The following example illustrates how a shared key is established. Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is, (p, a, b, G, n, h) in the prime case or (m, f(x), a, b, G, n, h) in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key d (a randomly selected integer in the interval [1, n−1]) and a public key represented by a point Q (where Q = d·G, that is, the result of adding G to itself d times). Let Alice's key pair be (d_A, Q_A) and Bob's key pair be (d_B, Q_B). Each party must know the other party's public key prior to execution of the protocol. Alice computes the point (x_k, y_k) = d_A·Q_B. Bob computes the point (x_k, y_k) = d_B·Q_A. The shared secret is x_k (the x coordinate of the point). Most standardized protocols based on ECDH derive a symmetric key from x_k using some hash-based key derivation function.
The shared secret calculated by both parties is equal, because d_A·Q_B = d_A·d_B·G = d_B·d_A·G = d_B·Q_A. The only information about her key that Alice initially exposes is her public key. So, no party except Alice can determine Alice's private key (Alice of course knows it by having selected it), unless that party can solve the elliptic-curve discrete logarithm problem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the elliptic-curve Diffie–Hellman problem. The public keys are either static (and trusted, say via a certificate) or ephemeral (also known as ECDHE, where the final 'E' stands for "ephemeral"). Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of either Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key. For schemes with other security properties, see MQV. If Alice maliciously chooses invalid curve points for her key and Bob does not validate that Alice's points are part of the selected group, she can collect enough residues of Bob's key to derive his private key. Several TLS libraries were found to be vulnerable to this attack.[4] The shared secret is uniformly distributed on a subset of [0, p) of size (n + 1)/2.
For this reason, the secret should not be used directly as a symmetric key, but it can be used as entropy for a key derivation function. Let A, B ∈ F_p be such that B(A² − 4) ≠ 0. The Montgomery form elliptic curve E_{M,A,B} is the set of all (x, y) ∈ F_p × F_p satisfying the equation By² = x(x² + Ax + 1), together with the point at infinity, denoted ∞. This is called the affine form of the curve. The set of all F_p-rational points of E_{M,A,B}, denoted E_{M,A,B}(F_p), is the set of all (x, y) ∈ F_p × F_p satisfying By² = x(x² + Ax + 1), along with ∞. Under a suitably defined addition operation, E_{M,A,B}(F_p) is a group with ∞ as the identity element. It is known that the order of this group is a multiple of 4. In fact, it is usually possible to obtain A and B such that the order of E_{M,A,B} is 4q for a prime q. For more extensive discussions of Montgomery curves and their arithmetic one may follow.[5][6][7] For computational efficiency, it is preferable to work with projective coordinates. The projective form of the Montgomery curve E_{M,A,B} is BY²Z = X(X² + AXZ + Z²).
For a point P = [X : Y : Z] on E_{M,A,B}, the x-coordinate map x is the following:[7] x(P) = [X : Z] if Z ≠ 0, and x(P) = [1 : 0] if P = [0 : 1 : 0]. Bernstein[8][9] introduced the map x₀ as follows: x₀(X : Z) = X·Z^(p−2), which is defined for all values of X and Z in F_p. Following Miller,[10] Montgomery[5] and Bernstein,[9] the Diffie–Hellman key agreement can be carried out on a Montgomery curve as follows. Let Q be a generator of a prime-order subgroup of E_{M,A,B}(F_p). Alice chooses a secret key s and has public key x₀(sQ); Bob chooses a secret key t and has public key x₀(tQ). The shared secret key of Alice and Bob is x₀(stQ). Using classical computers, the best known method of obtaining x₀(stQ) from Q, x₀(sQ) and x₀(tQ) requires about O(p^(1/2)) time using Pollard's rho algorithm.[11] The most famous example of a Montgomery curve is Curve25519, which was introduced by Bernstein.[9] For Curve25519, p = 2^255 − 19, A = 486662 and B = 1. The other Montgomery curve which is part of TLS 1.3 is Curve448, which was introduced by Hamburg.[12] For Curve448, p = 2^448 − 2^224 − 1, A = 156326 and B = 1. A couple of Montgomery curves named M[4698] and M[4058], competitive with Curve25519 and Curve448 respectively, have been proposed in.[13] For M[4698], p = 2^251 − 9, A = 4698, B = 1, and for M[4058], p = 2^444 − 17, A = 4058, B = 1.
At the 256-bit security level, three Montgomery curves named M[996558], M[952902] and M[1504058] have been proposed in.[14] For M[996558], p = 2^506 − 45, A = 996558, B = 1; for M[952902], p = 2^510 − 75, A = 952902, B = 1; and for M[1504058], p = 2^521 − 1, A = 1504058, B = 1. Apart from these, other proposals of Montgomery curves can be found at.[15]
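The Alice-and-Bob exchange described above can be sketched end to end on a deliberately tiny curve. The parameters below (y² = x³ + 2x + 2 over GF(17), base point G = (5, 1) of order 19) are a common textbook toy and far too small for real use; the point arithmetic is the standard short-Weierstrass group law:

```python
# Toy domain parameters for illustration only -- never use such a small curve.
p, a, b = 17, 2, 2        # curve y^2 = x^3 + 2x + 2 over GF(17)
G, n = (5, 1), 19         # base point and its (prime) order

def add(P, Q):
    # Group law on the curve; None represents the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

d_A, d_B = 3, 7                       # private keys (random in [1, n-1] in practice)
Q_A, Q_B = mul(d_A, G), mul(d_B, G)   # public keys exchanged over the open channel
shared_A = mul(d_A, Q_B)              # Alice's computation
shared_B = mul(d_B, Q_A)              # Bob's computation: the same point
```

A real deployment replaces the toy parameters with a standardized curve and, as noted above, feeds the resulting x-coordinate into a key derivation function rather than using it directly.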
https://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman
Harvest now, decrypt later[a] is a surveillance strategy that relies on the acquisition and long-term storage of currently unreadable encrypted data in anticipation of possible breakthroughs in decryption technology that would render it readable in the future – a hypothetical date referred to as Y2Q (a reference to Y2K) or Q-Day.[1][2] The most common concern is the prospect of developments in quantum computing which would allow current strong encryption algorithms to be broken at some time in the future, making it possible to decrypt any stored material that had been encrypted using those algorithms.[3] However, the improvement in decryption technology need not be due to a quantum-cryptographic advance; any other form of attack capable of enabling decryption would be sufficient. The existence of this strategy has led to concerns about the need to urgently deploy post-quantum cryptography, even though no practical quantum attacks yet exist, as some data stored now may still remain sensitive even decades into the future.[1][4][5] As of 2022, the U.S. federal government has proposed a roadmap for organizations to start migrating toward quantum-cryptography-resistant algorithms to mitigate these threats.[5][6] On January 16, 2025, before the end of his term, Joe Biden issued Executive Order 14144, formally ordering governmental departments to start post-quantum cryptography transitions within a specified timeframe (ranging from 60 to 270 days). Some National Defense departments must complete this transition by January 2, 2030.[7]
https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later
The hartley (symbol Hart), also called a ban or a dit (short for "decimal digit"),[1][2][3] is a logarithmic unit that measures information or entropy, based on base-10 logarithms and powers of 10. One hartley is the information content of an event if the probability of that event occurring is 1⁄10.[4] It is therefore equal to the information contained in one decimal digit (or dit), assuming a priori equiprobability of each possible value. It is named after Ralph Hartley. If base-2 logarithms and powers of 2 are used instead, then the unit of information is the shannon or bit, which is the information content of an event if the probability of that event occurring is 1⁄2. Natural logarithms and powers of e define the nat. One ban corresponds to ln(10) nat = log₂(10) Sh, or approximately 2.303 nat or 3.322 bit (3.322 Sh).[a] A deciban is one tenth of a ban (or about 0.332 Sh); the name is formed from ban by the SI prefix deci-. Though there is no associated SI unit, information entropy is part of the International System of Quantities, defined by International Standard IEC 80000-13 of the International Electrotechnical Commission. The term hartley is named after Ralph Hartley, who suggested in 1928 measuring information using a logarithmic base equal to the number of distinguishable states in its representation, which would be base 10 for a decimal digit.[5][6] The ban and the deciban were invented by Alan Turing with Irving John "Jack" Good in 1940, to measure the amount of information that could be deduced by the codebreakers at Bletchley Park using the Banburismus procedure, towards determining each day's unknown setting of the German naval Enigma cipher machine. The name was inspired by the enormous sheets of card, printed in the town of Banbury about 30 miles away, that were used in the process.[7] Good argued that the sequential summation of decibans to build up a measure of the weight of evidence in favour of a hypothesis is essentially Bayesian inference.[7] Donald A. 
Gillies, however, argued the ban is, in effect, the same as Karl Popper's measure of the severity of a test.[8] The deciban is a particularly useful unit for log-odds, notably as a measure of information in Bayes factors, odds ratios (ratios of odds, so the log is a difference of log-odds), or weights of evidence. 10 decibans corresponds to odds of 10:1; 20 decibans to 100:1 odds, etc. According to Good, a change in a weight of evidence of 1 deciban (i.e., a change in the odds from evens to about 5:4) is about as finely as humans can reasonably be expected to quantify their degree of belief in a hypothesis.[9] Odds corresponding to integer decibans can often be well-approximated by simple integer ratios; these are collated below, giving the value to two decimal places, a simple approximation (to within about 5%), and a more accurate approximation (to within 1%) where the simple one is inaccurate.
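The conversions above are easy to check numerically; the short sketch below only restates the definitions in the text (no new claims):

```python
import math

# One ban (hartley) expressed in shannons (bits) and in nats.
ban_in_sh = math.log2(10)    # ≈ 3.3219 Sh
ban_in_nat = math.log(10)    # ≈ 2.3026 nat

def decibans_to_odds(db: float) -> float:
    # A weight of evidence of db decibans corresponds to odds of 10**(db/10) : 1.
    return 10 ** (db / 10)

# 10 db -> 10:1 odds, 20 db -> 100:1 odds, and 1 db -> about 1.26:1,
# close to the 5:4 change Good cites as the limit of human discrimination.
```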
https://en.wikipedia.org/wiki/Ban_(unit)
The Copiale cipher is an encrypted manuscript consisting of 75,000 handwritten characters filling 105 pages in a bound volume.[1] Undeciphered for more than 260 years, the document was decrypted in 2011 with computer assistance. An international team consisting of Kevin Knight of the University of Southern California Information Sciences Institute and Viterbi School of Engineering, along with Beáta Megyesi and Christiane Schaefer of Uppsala University in Sweden, found the cipher to be an encrypted German text. The manuscript is a homophonic cipher that uses a complex substitution code, including symbols and letters, for its text and spaces.[2] Previously examined by scientists at the German Academy of Sciences at Berlin in the 1970s, the cipher was thought to date from between 1760 and 1780.[3] Decipherment revealed that the document had been created in the 1730s by a secret society[1][2][4] called the "high enlightened (Hocherleuchtete) oculist order" of Wolfenbüttel,[5] or Oculists.[6][7] The Oculists used sight as a metaphor for knowledge.[6] The manuscript is in a private collection.[1] A parallel manuscript is kept at the Staatsarchiv Wolfenbüttel.[8] The Copiale cipher includes abstract symbols, as well as letters from Greek and most of the Roman alphabet. The only plain text in the book is "Copiales 3" at the end and "Philipp 1866" on the flyleaf. Philipp is thought to have been an owner of the manuscript.[8] The plain-text letters of the message were found to be encoded by accented Roman letters, Greek letters and symbols, with unaccented Roman letters serving only to represent spaces.[4] The researchers found that the initial 16 pages describe an Oculist initiation ceremony. 
The manuscript portrays, among other things, an initiation ritual in which the candidate is asked to read a blank piece of paper and, on confessing inability to do so, is given eyeglasses and asked to try again, and then again after washing the eyes with a cloth, followed by an "operation" in which a single eyebrow hair is plucked.[9] The Copiale cipher is a substitution cipher. It is not a 1-for-1 substitution but rather a homophonic cipher: each ciphertext character stands for a particular plaintext character, but several ciphertext characters may encode the same plaintext character. For example, all the unaccented Roman characters encode a space. Seven ciphertext characters encode the single letter "e". In addition, some ciphertext characters stand for several characters or even a word. One ciphertext character ("†") encodes "sch", and another encodes the secret society's name.[8][9] As Warren Weaver once put it: "One naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say: 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.'" A machine translation expert, Knight approached language translation as if all languages were ciphers, effectively treating foreign words as symbols for English words. His approach, which tasked an expectation-maximization algorithm with generating every possible match of foreign and English words, enabled the algorithm to figure out a few words with each pass. A comparison with 80 languages confirmed that the original language was likely German, which the researchers had guessed based on the word "Philipp", a German spelling. Knight then used a combination of intuition and computing techniques to decipher most of the code in a few weeks. Megyesi later realized that a particular symbol meant "eye", and Schaefer connected that discovery to the Oculists. 
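The homophonic principle described above (several ciphertext symbols per plaintext letter, plus filler symbols that all decode to a space) can be illustrated with a hypothetical miniature table; the symbols and alphabet below are invented for the sketch and are not the Copiale symbol set:

```python
import random

# Hypothetical homophone table: common letters get several symbols,
# and several "filler" symbols all encode the space.
homophones = {
    "e": ["1", "2", "3"],
    "h": ["4"],
    "l": ["5", "6"],
    "o": ["7"],
    " ": ["x", "y", "z"],
}
# Invert the table for decryption: every symbol points back to one letter.
decode_table = {sym: letter for letter, syms in homophones.items() for sym in syms}

def encrypt(plaintext: str) -> str:
    # A random homophone is chosen each time, flattening letter frequencies.
    return "".join(random.choice(homophones[c]) for c in plaintext)

def decrypt(ciphertext: str) -> str:
    return "".join(decode_table[sym] for sym in ciphertext)

msg = "hello hello"
ct = encrypt(msg)          # usually differs between runs
recovered = decrypt(ct)    # always the original message
```

Because re-encrypting the same text usually yields a different ciphertext, simple frequency analysis of single symbols is much less effective than against a one-for-one substitution cipher.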
The Oculists were a group of ophthalmologists led by Count Friedrich August von Veltheim, who died in April 1775. The Philipp 1866 Copiales 3 document, however, appears to suggest that the Oculists, or at least Count Veltheim, were a group of Freemasons who created the Oculist society in order to pass along the Masonic rites[6][10] which had recently been banned by Pope Clement XII.
https://en.wikipedia.org/wiki/Copiale_cipher
A dictionary coder, also sometimes known as a substitution coder, is a class of lossless data compression algorithms which operate by searching for matches between the text to be compressed and a set of strings contained in a data structure (called the 'dictionary') maintained by the encoder. When the encoder finds such a match, it substitutes a reference to the string's position in the data structure. Some dictionary coders use a 'static dictionary', one whose full set of strings is determined before coding begins and does not change during the coding process. This approach is most often used when the message or set of messages to be encoded is fixed and large; for instance, an application that stores the contents of a book in the limited storage space of a PDA generally builds a static dictionary from a concordance of the text and then uses that dictionary to compress the verses. This scheme of using Huffman coding to represent indices into a concordance has been called "Huffword".[1] In a related and more general method, a dictionary is built from redundancy extracted from a data environment (various input streams), and that dictionary is then used statically to compress a further input stream. For example, a dictionary may be built from old English texts and then used to compress a book.[2] More common are methods where the dictionary starts in some predetermined state but the contents change during the encoding process, based on the data that has already been encoded. Both the LZ77 and LZ78 algorithms work on this principle. In LZ77, a circular buffer called the "sliding window" holds the last N bytes of data processed. This window serves as the dictionary, effectively storing every substring that has appeared in the past N bytes as a dictionary entry. 
Instead of a single index identifying a dictionary entry, two values are needed: the length, indicating the length of the matched text, and the offset (also called the distance), indicating that the match is found in the sliding window starting offset bytes before the current text.

LZ78 uses a more explicit dictionary structure; at the beginning of the encoding process, the dictionary is empty. An index value of zero is used to represent the end of a string, so the first index of the dictionary is one. At each step of the encoding process, if there is no match, then the last matching index (or zero) and character are both added to the dictionary and output to the compressed stream. If there is a match, then the working index is updated to the matching index, and nothing is output.

LZW is similar to LZ78, but the dictionary is initialized to all possible symbols. The typical implementation works with 8-bit symbols, so the dictionary "codes" for hex 00 to hex FF (decimal 255) are pre-defined. Dictionary entries would be added starting with code value hex 100. Unlike LZ78, if a match is not found (or at the end of the data), then only the dictionary code is output. This creates a potential issue since the decoder output is one step behind the dictionary. Refer to LZW for how this is handled. Enhancements to LZW include handling symbol sizes other than 8 bits and having reserved codes to reset the dictionary and to indicate the end of data.

Brotli is an example of a commonly used coder that is initialised with a pre-defined dictionary, but later goes on to use more sophisticated content modelling. The Brotli dictionary consists largely of natural-language words and HTML and JavaScript fragments, based on an analysis of web traffic.[3]
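The LZ78 scheme described above can be sketched in a few lines. This is a minimal illustrative encoder/decoder pair, not a production codec; the exact token format (here, (index, character) pairs with index 0 standing for the empty phrase) is a simplifying assumption.

```python
def lz78_encode(data: str):
    """Minimal LZ78-style encoder: emits (index, char) pairs, where
    index refers to a previously seen dictionary phrase (0 = empty)."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    next_index = 1
    output = []
    phrase = ""
    for ch in data:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate    # keep extending the current match
        else:
            # emit the longest known phrase plus the new character,
            # and register the extended phrase as a new dictionary entry
            output.append((dictionary[phrase], ch))
            dictionary[candidate] = next_index
            next_index += 1
            phrase = ""
    if phrase:                    # flush a trailing match at end of input
        output.append((dictionary[phrase], ""))
    return output

def lz78_decode(pairs):
    """Inverse of lz78_encode: rebuilds the text from (index, char) pairs."""
    phrases = [""]                # index 0 = empty phrase
    out = []
    for index, ch in pairs:
        phrase = phrases[index] + ch
        out.append(phrase)
        phrases.append(phrase)
    return "".join(out)
```

Note how each emitted pair grows the dictionary by exactly one phrase, so the decoder can rebuild the same dictionary in lockstep without it ever being transmitted.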
https://en.wikipedia.org/wiki/Dictionary_coder
Leet(or "1337"), also known aseleetorleetspeak, or simplyhacker speech, is a system of modified spellings used primarily on theInternet. It often uses character replacements in ways that play on the similarity of theirglyphsviareflectionor other resemblance. Additionally, it modifies certain words on the basis of a system ofsuffixesand alternative meanings. There are manydialectsorlinguistic varietiesin differentonline communities. The term "leet" is derived from the wordelite, used as an adjective to describe skill or accomplishment, especially in the fields ofonline gamingandcomputer hacking. The leet lexicon includes spellings of the word as1337orleet. Leet originated withinbulletin board systems(BBS) in the 1980s,[1][2]where having "elite" status on a BBS allowed a user access to file folders, games, and special chat rooms. TheCult of the Dead Cowhacker collective has been credited with the original coining of the term, in their text-files of that era.[3]One theory is that it was developed to defeattext filterscreated by BBS orInternet Relay Chatsystem operatorsfor message boards to discourage the discussion of forbidden topics, likecrackingandhacking.[1] Once reserved forhackers, crackers, andscript kiddies, leet later entered the mainstream.[1]Some consideremoticonsandASCII art, like smiley faces, to be leet, while others maintain that leet consists of only symbolic word obfuscation. More obscure forms of leet, involving the use of symbol combinations and almost no letters or numbers, continue to be used for its original purpose of obfuscated communication. It is also sometimes used as a scripting language. Variants of leet have been used to evade censorship for many years; for instance "@$$" (ass) and "$#!+" (shit) are frequently seen to make a word appear censored to the untrained eye but obvious to a person familiar with leet. This enables coders and programmers especially to circumvent filters and speak about topics that would usually get banned. 
"Hacker" would end up as "H4x0r", for example.[4]

Leet symbols, especially the number 1337, are Internet memes that have spilled over into wider culture. Signs that show the numbers "1337" are popular motifs for pictures and are shared widely across the Internet.[5]

Algospeak shares conceptual similarities with leet, albeit with the primary purpose of circumventing algorithmic censorship online, "algospeak" deriving from algo (of algorithm) and speak. These are euphemisms that aim to evade automated online moderation techniques, especially those that are considered unfair or that hinder free speech.[6][7][8][9][10] One prominent example is using the term "unalive" as opposed to the verb "kill" or even "suicide". Other examples include using "restarted" or "regarded" instead of "retarded" and "seggs" in place of "sex". These phrases are easily understandable to humans, providing either the same general meaning, pronunciation, or shape of the original word. It is furthermore often employed as a more contemporary alternative to leet. The approach gained popularity in 2023 and 2024 due to the rise in conflict between Israel and Gaza, given the topic's contentious nature on the Internet, especially on the Meta and TikTok platforms.[11][12]

One of the hallmarks of leet is its unique approach to orthography, using substitutions of other letters, or indeed of characters other than letters, to represent letters in a word.[13][14] For more casual use of leet, the primary strategy is to use quasi-homoglyphs, symbols that closely resemble (to varying degrees) the letters for which they stand. The choice of symbol is not fixed: anything the reader can make sense of is valid in leet-speak. Sometimes, a gamer would work around a nickname being already taken (and perhaps abandoned as well) by replacing a letter with a similar-looking digit.
Another use for leet orthographic substitutions is the creation of paraphrased passwords.[1] Limitations imposed by websites on password length (usually no more than 36 characters) and the characters permitted (e.g. alphanumeric and symbols)[15] require less extensive forms when used in this application.

Leetspeak should not be confused with SMS-speak, characterized by using "4" as "for", "2" as "to", "b&" as "ban'd" (e.g. "banned"), "gr8 b8, m8, appreci8, no h8" as "great bait, mate, appreciate, no hate", and so on.

Text rendered in leet is often characterized by distinctive, recurring forms. Leet can be pronounced as a single syllable, /ˈliːt/, rhyming with eat, by way of apheresis of the initial vowel of "elite". It may also be pronounced as two syllables, /ɛˈliːt/.

Like hacker slang, leet enjoys a looser grammar than standard English. The loose grammar, just like loose spelling, encodes some level of emphasis, ironic or otherwise. A reader must rely more on intuitive parsing of leet to determine the meaning of a sentence rather than the actual sentence structure. In particular, speakers of leet are fond of verbing nouns, turning nouns into verbs (and back again) as forms of emphasis, e.g. "Austin rocks" is weaker than "Austin roxxorz" (note spelling), which is weaker than "Au5t1N is t3h r0xx0rz" (note grammar), which is weaker than something like "0MFG D00D /\Ü571N 15 T3H l_l83Я 1337 Я0XX0ЯZ" (OMG, dude, Austin is the über-elite rocks-er!). In essence, all of these mean "Austin rocks," with the added words and misspellings adding to the speaker's enjoyment.

Leet, like hacker slang, employs analogy in the construction of new words. For example, if haxored is the past tense of the verb "to hack" (hack → haxor → haxored), then winzored would be easily understood to be the past tense conjugation of "to win," even if the reader had not seen that particular word before.
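The quasi-homoglyph substitutions described above are easy to mechanize. The mapping table below is illustrative only; leet has no standard orthography, so this is just one common set of replacements:

```python
# One common (but by no means standard) homoglyph table; real leet usage
# varies by community and is deliberately unfixed.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"}

def to_leet(text: str) -> str:
    """Replace letters with similar-looking digits, leaving other characters alone."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leet("elite"))  # -> 3l173
```

Because the substitution is not one-to-one in real usage (any glyph the reader can make sense of is valid), the reverse direction generally cannot be automated as cleanly.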
Leet has its own colloquialisms, many of which originated as jokes based on common typing errors, habits of new computer users, or knowledge ofcybercultureand history.[20]Leet is not solely based upon one language or character set. Greek, Russian, and other languages have leet forms, and leet in one language may use characters from another where they are available. As such, while it may be referred to as a "cipher", a "dialect", or a "language", leet does not fit squarely into any of these categories. The termleetitself is often written31337, or1337, and many other variations. After the meaning of these became widely familiar,10100111001came to be used in its place, because it is thebinaryform of1337decimal, making it more of a puzzle to interpret. An increasingly common characteristic of leet is the changing of grammatical usage so as to be deliberately incorrect. The widespread popularity of deliberate misspelling is similar to the cult following of the "All your base are belong to us" phrase. Indeed, the online and computer communities have been international from their inception, so spellings and phrases typical of non-native speakers are quite common. Many words originally derived from leet have now become part of modernInternet slang, such as "pwned".[1]The original driving forces of new vocabulary in leet were common misspellings and typing errors such as "teh" (generally considered lolspeak), and intentional misspellings,[21]especially the "z" at the end of words ("skillz").[1]Another prominent example of a surviving leet expression isw00t, an exclamation of joy.[2]w00t is sometimes used as abackronymfor "We owned the other team." New words (or corruptions thereof) may arise from a need to make one's username unique. As any given Internet service reaches more people, the number of names available to a given user is drastically reduced. 
While many users may wish to have the username "CatLover," for example, in many cases it is only possible for one user to have the moniker. As such, degradations of the name may evolve, such as "C@7L0vr." As the leet cipher is highly dynamic, there is a wider possibility for multiple users to share the "same" name, through combinations of spelling and transliterations. Additionally, leet—the word itself—can be found in the screen-names and gamertags of many Internet and video games. Use of the term in such a manner announces a high level of skill, though such an announcement may be seen as baseless hubris.[22]

Warez (nominally /wɛərz/) is a plural shortening of "software", typically referring to cracked and redistributed software.[22] Phreaking refers to the hacking of telephone systems and other non-Internet equipment.[1] Teh originated as a typographical error of "the", and is sometimes spelled t3h.[1][23] j00 takes the place of "you",[2] originating from the affricate sound that occurs in place of the palatal approximant, /j/, when you follows a word ending in an alveolar plosive consonant, such as /t/ or /d/. Also, from German, is über, which means "over" or "above"; it usually appears as a prefix attached to adjectives, and is frequently written without the umlaut over the u.[24]

Haxor, and derivations thereof, is leet for "hacker",[25] and it is one of the most commonplace examples of the use of the -xor suffix. Suxxor (pronounced suck-zor) is a derogatory term which originated in warez culture and is used in multi-user environments such as multiplayer video games and instant messaging; it, like haxor, is one of the early leet words to use the -xor suffix. Suxxor is a modified version of "sucks" (the phrase "to suck"), and the meaning is the same as the English slang. Suxxor can be mistaken for Succer/Succker if used in the wrong context. Its negative definition essentially makes it the opposite of roxxor, and both can be used as a verb or a noun.
The letters ck are often replaced with the Greek Χ (chi) in other words as well. Within leet, the term n00b (and derivations thereof) is used extensively. The term is derived from newbie (as in new and inexperienced, or uninformed),[21][24][26] and is used to differentiate "n00bs" from the "elite" (or even "normal") members of a group.

Owned and pwned (generally pronounced "poned"[27] [pʰo͡ʊnd]) both refer to the domination of a player in a video game or argument (rather than just a win), or the successful hacking of a website or computer.[28][29][30][1][24][31] It is a slang term derived from the verb own, meaning to appropriate or to conquer to gain ownership. As is a common characteristic of leet, the terms have also been adapted into noun and adjective forms,[24] ownage and pwnage, which can refer to the situation of pwning or to the superiority of its subject (e.g., "He is a very good player. He is pwnage."). The term was created accidentally by the misspelling of "own" due to the keyboard proximity of the "O" and "P" keys. It implies domination or humiliation of a rival,[32] used primarily in the Internet-based video game culture to taunt an opponent who has just been soundly defeated (e.g., "You just got pwned!").[33] In 2015 Scrabble added pwn to their Official Scrabble Words list.[34]

Pr0n is slang for pornography.[1] This is a deliberately inaccurate spelling/pronunciation for porn,[26] where a zero is often used to replace the letter O. It is sometimes used in legitimate communications (such as email discussion groups, Usenet, chat rooms, and Internet web pages) to circumvent language and content filters, which may reject messages as offensive or spam. The word also helps prevent search engines from associating commercial sites with pornography, which might result in unwelcome traffic. Pr0n is also sometimes spelled backwards (n0rp) to further obscure the meaning to potentially uninformed readers.
It can also refer toASCII artdepicting pornographic images, or to photos of the internals of consumer and industrial hardware.Prawn, a spoof of the misspelling, has started to come into use, as well; inGrand Theft Auto: Vice City, a pornographer films his movies on "Prawn Island". Conversely, in theRPGKingdom of Loathing,prawn, referring to a kind ofcrustacean, is spelledpr0n, leading to the creation of food items such as "pr0n chow mein". Also seeporm.
https://en.wikipedia.org/wiki/Leet
In cryptography, a music cipher is an algorithm for the encryption of a plaintext into musical symbols or sounds. Music-based ciphers are related to, but not the same as, musical cryptograms. The latter were systems used by composers to create musical themes or motifs to represent names based on similarities between letters of the alphabet and musical note names, such as the BACH motif, whereas music ciphers were systems typically used by cryptographers to hide or encode messages for reasons of secrecy or espionage.

There are a variety of different types of music ciphers as distinguished by both the method of encryption and the musical symbols used. Regarding the former, most are simple substitution ciphers with a one-to-one correspondence between individual letters of the alphabet and a specific musical note. There are also historical music ciphers that utilize homophonic substitution (one-to-many), polyphonic substitution (many-to-one), compound cipher symbols, and/or cipher keys; all of which can make the enciphered message more difficult to break.[1] Regarding the type of symbol used for substitution, most music ciphers utilize the pitch of a musical note as the primary cipher symbol. Since there are fewer notes in a standard musical scale (e.g., seven for diatonic scales and twelve for chromatic scales) than there are letters of the alphabet, cryptographers would often combine the note name with additional characteristics––such as octave register, rhythmic duration, or clef––to create a complete set of cipher symbols to match every letter. However, there are some music ciphers which rely exclusively on rhythm instead of pitch[2] or on relative scale degree names instead of absolute pitches.[3][4][5]

Music ciphers often have both cryptographic and steganographic elements. Simply put, encryption is scrambling a message so that it is unreadable; steganography is hiding a message so no one knows it is even there.
Most practitioners of music ciphers believed that encrypting text into musical symbols gave it added security because, if intercepted, most people would not even suspect that the sheet music contained a message. However, as Francesco Lana de Terzi notes, this is usually not because the resulting cipher melody appears to be a normal piece of music, but rather because so few people know enough about music to realize it is not ("ma gl'intelligenti di musica sono poci").[6] A message can also be visually hidden within a page of music without actually being a music cipher. William F. Friedman embedded a secret message based on Francis Bacon's cipher into a sheet music arrangement of Stephen Foster's "My Old Kentucky Home" by visually altering the appearance of the note stems.[7] Another steganographic strategy is to musically encrypt a plaintext, but hide the message-bearing notes within a larger musical score that requires some visual marker distinguishing them from the meaningless null-symbol notes (e.g., the cipher melody is only in the tenor line or only the notes with stems pointing down).[8][9]

In the cipher manuscript of Agostino Amadi, there is a musical score on folio 41v with a pseudo-letter enciphered in it: an imaginary letter from Venice to Charles V. The Italian historian Paolo Preto recounts: "...The emperor sent to prince Gritti, with whom he had been familiar for a long time, a music score that looked like a madrigal....The prince summoned Willaert and the other musicians and asked them to play the melody sent to them by emperor Charles V. When Willaert and the others carefully studied the score, they were unable to play it and confessed they could not understand it."[10]

Diatonic music ciphers utilize only the seven basic note names of the diatonic scale: A, B, C, D, E, F, and G.
While some systems reuse the same seven pitches for multiple letters (e.g., the pitchAcan represent the lettersA,H,O, orV),[11]most algorithms combine these pitches with other musical attributes to achieve a one-to-one mapping. Perhaps the earliest documented music cipher is found in a manuscript from 1432 called "The Sermon Booklets of Friar Nicholas Philip." Philip's cipher uses only five pitches, but each note can appear with one of four different rhythmic durations, thus providing twenty distinct symbols.[12]A similar cipher appears in a 15th-century British anonymous manuscript[13]as well as in a much later treatise byGiambattista della Porta.[14] In editions of the same treatise (De Furtivis Literarum Notis), Porta also presents a simpler cipher which is much more well-known.Porta's music ciphermaps the lettersAthroughM(omittingJandK) onto a stepwise, ascending, octave-and-a-half scale ofwhole notes(semibreves); with the remainder of the alphabet (omittingVandW) onto a descending scale ofhalf notes(minims).[15]Since alphabetic and scalar sequences are in such close step with each other, this is not a very strong method of encryption, nor are the melodies it produces very natural. Nevertheless, one finds slight variations of this same method employed throughout the 17th and 18th centuries byDaniel Schwenter(1602),[16]John Wilkins(1641),[17]Athanasius Kircher(1650),[18]Kaspar Schott(1655),[19]Philip Thicknesse(1722),[20]and even the British Foreign Office (ca. 1750).[21] Music ciphers based on thechromatic scaleprovide a larger pool of note names to match with letters of the alphabet. Applyingsharpsandflatsto the seven diatonic pitches yields twenty-one unique cipher symbols. Since this is obviously still less than a standard alphabet, chromatic ciphers also require either a reduced letter set or additional features (e.g., octave register or duration). 
Most chromatic ciphers were developed by composers in the 20th century, when fully chromatic music itself was more common. A notable exception is a cipher attributed to the composer Michael Haydn (brother of the more famous Joseph Haydn).[22] Haydn's algorithm is one of the most comprehensive, with symbols for thirty-one letters of the German alphabet, punctuation (using rest signs), parentheses (using clefs), and word segmentation (using bar lines). However, because many of the pitches are enharmonic equivalents, this cipher can only be transmitted as visual steganography, not via musical sound. For example, the notes C-sharp and D-flat are spelled differently, but they sound the same on a piano. As such, if one were listening to an enciphered melody, it would not be possible to hear the difference between the letters K and L. Furthermore, the purpose of this cipher was clearly not to generate musical themes that could pass for normal music. The use of such an extreme chromatic scale produces wildly dissonant, atonal melodies that would have been obviously atypical for Haydn's time.

Although chromatic ciphers did not seem to be favored by cryptographers, there are several 20th-century composers who developed systems for use in their own music: Arthur Honegger,[23] Maurice Duruflé,[24] Norman Cazden,[25] Olivier Messiaen,[26] and Jacques Chailley.[27] Similar to Haydn's cipher, most likewise match the alphabet sequentially onto a chromatic scale and rely on octave register to extend to twenty-six letters. Only Messiaen's appears to have been thoughtfully constructed to meet the composer's aesthetic goals. Although he also utilized different octave registers, the letters of the alphabet are not mapped in scalar order and also have distinct rhythmic values. Messiaen called his musical alphabet the langage communicable, and used it to embed extra-musical text throughout his organ work Méditations sur le Mystère de la Sainte Trinité.
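A sequential letter-to-scale mapping of the kind Porta used (and which many of the later systems above share) can be sketched as follows. The concrete note names and the 22-letter alphabet are assumptions for illustration; Porta's treatise fixes its own scale and letter set.

```python
# Sketch of a Porta-style sequential music cipher: letters A..M (skipping J, K)
# map onto an ascending scale of whole notes (semibreves), and the rest of the
# alphabet (skipping V, W) onto a descending scale of half notes (minims).
# The specific note names here are an illustrative choice, not Porta's own.
ASCENDING = ["G3", "A3", "B3", "C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
DESCENDING = list(reversed(ASCENDING))

first_half = list("ABCDEFGHILM")    # A-M without J, K (11 letters)
second_half = list("NOPQRSTUXYZ")   # N-Z without V, W (11 letters)

CIPHER = {}
for letter, note in zip(first_half, ASCENDING):
    CIPHER[letter] = (note, "whole")
for letter, note in zip(second_half, DESCENDING):
    CIPHER[letter] = (note, "half")

def encipher(plaintext: str):
    """Map each letter to its (pitch, duration) cipher symbol; skip the rest."""
    return [CIPHER[c] for c in plaintext.upper() if c in CIPHER]
```

Because alphabetical order tracks the scale step for step, adjacent plaintext letters produce adjacent pitches, which is exactly why such ciphers are weak and their melodies unnatural.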
In a compound substitution cipher, each single plaintext letter is replaced by a block of multiple cipher symbols (e.g., 'a' = EN or 'b' = WJU). Similarly, there are compound music ciphers in which each letter is represented by a musical motive with two or more notes. In the case of the former, the compound symbols are meant to make frequency analysis more difficult; in the latter, the goal is to make the output more musical. For example, in 1804, Johann Bücking devised a compound cipher which generates musical compositions in the form of a minuet in the key of G major.[28] Each letter of the alphabet is replaced by a measure of music consisting of a stylistically typical motive with three to six notes. After the plaintext is enciphered, additional pre-composed measures are appended to the beginning and end to provide a suitable musical framing. A few years earlier, Wolfgang Amadeus Mozart appears to have employed a similar technique (with much more sophisticated musical motives), although more likely intended as a parlor game than an actual cipher.[29][30] Since the compound symbols are musically meaningful motives, these ciphers could also be considered similar to codes.

Friedrich von Öttingen-Wallerstein proposed a different type of compound music cipher modeled after a polybius square cipher.[31] Öttingen-Wallerstein used a 5x5 grid containing the letters of the alphabet (hidden within the names of angels). Instead of indexing the rows and columns with coordinate numbers, he used the solfege syllables Ut, Re, Mi, Fa, and Sol (i.e., the first five degrees of a diatonic scale). Each letter, therefore, becomes a two-note melodic motive. This same cipher appears in treatises by Gustavus Selenus (1624)[32] and Johann Balthasar Friderici (1665)[33] (but without credit to the earlier version of Öttingen-Wallerstein). Because Öttingen-Wallerstein's cipher uses relative scale degrees, rather than fixed note names, it is effectively a polyalphabetic cipher.
The same enciphered message could be transposed to a different musical key––with different note names––and still retain the same meaning. The musical key literally becomes a cipher key (or cryptovariable), because the recipient needs that additional information to correctly decipher the melody. Öttingen-Wallerstein inserted rests as cipher-key markers to indicate when a new musical key was needed to decrypt the message. Francesco Lana de Terzi used a more conventional text-string cryptovariable to add security to a very straightforward 'Porta-style' music cipher (1670).[34] Similar to a Vigenère cipher, a single-letter cipher key shifts the position of the plaintext alphabet in relation to the sequence of musical cipher symbols; a multi-letter key word shifts the musical scale for each letter of the text in a repeating cycle.

A more elaborate cipher-key algorithm was found in an anonymous manuscript in Port-Lesney, France, most likely from the mid-18th century.[35] The so-called 'Port-Lesney' music cipher uses a mechanical device known as an Alberti cipher disk.[36] There are two rotating disks: the outer disk contains two concentric rings (one with time signatures and the other with letters of the alphabet); the inner disk has a ring of compound musical symbols, and a small inner circle with three different clef signs. The disks are rotated to align the letters of the alphabet with compound musical symbols to encrypt the message. When the melody is written out on a music staff, the corresponding clef and time signature are added to the beginning to indicate the cipher key (which the recipient aligns on their disk to decipher the message). This particular music cipher was apparently very popular, with a dozen variations (in French, German, and English) appearing throughout the 18th and 19th centuries.[37][38][39][40]

The more recent Solfa Cipher[41] combines some of the above cryptovariable techniques.
As the name suggests, Solfa Cipher uses relativesolfegedegrees (like Öttingen-Wallerstein) rather than fixed pitches, which allows the same encrypted message to be transposable to different musical keys. Since there are only seven scale degrees, these are combined with a rhythmic component to create enough unique cipher symbols. However, instead of absolute note lengths (e.g., quarter note, half note, etc.) that are employed in most music ciphers, Solfa Cipher uses relativemetricplacement. This type oftonal-metric[42]cipher makes the encrypted melody both harder to break and more musically natural (i.e. similar to common-practice tonal melodies).[43]To decrypt a cipher melody, the recipient needs to know in which musical key and with what rhythmic unit the original message was encrypted, as well as the clef sign and metric location of the first note. The cipher key could also be transmitted as a date by usingSolfalogy, a method of associating each unique date with a tone and modal scale.[44]To further confound interceptors, the transcribed sheet music could be written with a decoy clef, key signature, and time signature. The musical output, however, is a relatively normal, simple, singable tune in comparison to the disjunct, atonal melodies produced by fixed-pitch substitution ciphers.
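Öttingen-Wallerstein's solfege-indexed polybius square, which the relative-degree approach of Solfa Cipher echoes, can be sketched as follows. Merging I and J into one cell (to fit 26 letters into 25 cells), the row-major layout, and the plain alphabetical ordering are all our assumptions; the original hid the letters within the names of angels.

```python
# Sketch of an Öttingen-Wallerstein-style polybius square: rows and columns
# are indexed by solfege syllables, so each letter becomes a two-syllable
# (two-note) melodic motive.
SYLLABLES = ["Ut", "Re", "Mi", "Fa", "Sol"]
ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"   # 25 letters: I and J share a cell

SQUARE = {letter: (SYLLABLES[i // 5], SYLLABLES[i % 5])
          for i, letter in enumerate(ALPHABET)}

def encipher(word: str):
    """Turn each letter into its (row, column) solfege motive."""
    return [SQUARE[c] for c in word.upper().replace("J", "I") if c in SQUARE]
```

Because the syllables name relative scale degrees rather than fixed pitches, the resulting motives can be performed in any musical key, which is what makes the key itself usable as a cryptovariable.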
https://en.wikipedia.org/wiki/Music_cipher
In cryptography, coincidence counting is the technique (invented by William F. Friedman[1]) of putting two texts side-by-side and counting the number of times that identical letters appear in the same position in both texts. This count, either as a ratio of the total or normalized by dividing by the expected count for a random source model, is known as the index of coincidence, or IC or IOC[2] or IoC[3] for short.

Because letters in a natural language are not distributed evenly, the IC is higher for such texts than it would be for uniformly random text strings. What makes the IC especially useful is the fact that its value does not change if both texts are scrambled by the same single-alphabet substitution cipher, allowing a cryptanalyst to quickly detect that form of encryption.

The index of coincidence provides a measure of how likely it is to draw two matching letters by randomly selecting two letters from a given text. The chance of drawing a given letter in the text is (number of times that letter appears / length of the text). The chance of drawing that same letter again (without replacement) is ((appearances − 1) / (text length − 1)). The product of these two values gives the chance of drawing that letter twice in a row. One can find this product for each letter that appears in the text, then sum these products to get a chance of drawing two of a kind. This probability can then be normalized by multiplying it by some coefficient, typically 26 in English:

    IC = c × Σa (na / N) × ((na − 1) / (N − 1))

where c is the normalizing coefficient (26 for English), na is the number of times the letter "a" appears in the text, and N is the length of the text.

We can express the index of coincidence IC for a given letter-frequency distribution as a summation:

    IC = [ Σi=1..c ni(ni − 1) ] / [ N(N − 1) / c ]

where N is the length of the text and n1 through nc are the frequencies (as integers) of the c letters of the alphabet (c = 26 for monocase English). The sum of the ni is necessarily N.

The products n(n − 1) count the number of combinations of n elements taken two at a time.
(Actually this counts each pair twice; the extra factors of 2 occur in both numerator and denominator of the formula and thus cancel out.) Each of the ni occurrences of the i-th letter matches each of the remaining ni − 1 occurrences of the same letter. There are a total of N(N − 1) letter pairs in the entire text, and 1/c is the probability of a match for each pair, assuming a uniform random distribution of the characters (the "null model"; see below). Thus, this formula gives the ratio of the total number of coincidences observed to the total number of coincidences that one would expect from the null model.[4]

The expected average value for the IC can be computed from the relative letter frequencies fi of the source language:

    IC expected = c × Σi=1..c fi²

If all c letters of an alphabet were equally probable, the expected index would be 1.0. The actual monographic IC for telegraphic English text is around 1.73, reflecting the unevenness of natural-language letter distributions.

Sometimes values are reported without the normalizing denominator, for example 0.067 = 1.73/26 for English; such values may be called κp ("kappa-plaintext") rather than IC, with κr ("kappa-random") used to denote the denominator 1/c (which is the expected coincidence rate for a uniform distribution of the same alphabet, 0.0385 = 1/26 for English). English plaintext will generally fall somewhere in the range of 1.5 to 2.0 (normalized calculation).[5]

The index of coincidence is useful both in the analysis of natural-language plaintext and in the analysis of ciphertext (cryptanalysis). Even when only ciphertext is available for testing and plaintext letter identities are disguised, coincidences in ciphertext can be caused by coincidences in the underlying plaintext. This technique is used to cryptanalyze the Vigenère cipher, for example.
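The delta I.C. computation above is straightforward to implement. A minimal sketch, using the normalized form with c = 26:

```python
from collections import Counter

def index_of_coincidence(text: str, c: int = 26) -> float:
    """Normalized delta I.C.: observed coincidences divided by the count
    expected from a uniform random source over a c-letter alphabet."""
    letters = [ch for ch in text.upper() if ch.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    observed = sum(k * (k - 1) for k in counts.values())  # matching letter pairs
    expected = n * (n - 1) / c                            # all pairs x prob 1/c
    return observed / expected
```

Uniformly random text scores near 1.0 by construction, while English plaintext typically lands near 1.73; a single-alphabet substitution of the text leaves the score unchanged, since it only renames the letter counts.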
For a repeating-key polyalphabetic cipher arranged into a matrix, the coincidence rate within each column will usually be highest when the width of the matrix is a multiple of the key length, and this fact can be used to determine the key length, which is the first step in cracking the system.

Coincidence counting can help determine when two texts are written in the same language using the same alphabet. (This technique has been used to examine the purported Bible code.) The causal coincidence count for such texts will be distinctly higher than the accidental coincidence count for texts in different languages, or texts using different alphabets, or gibberish texts.

To see why, imagine an "alphabet" of only the two letters A and B. Suppose that in our "language", the letter A is used 75% of the time, and the letter B is used 25% of the time. If two texts in this language are laid side by side, then the following pairs can be expected: AA with probability 56.25%, AB 18.75%, BA 18.75%, and BB 6.25%. Overall, the probability of a "coincidence" is 62.5% (56.25% for AA + 6.25% for BB).

Now consider the case when both messages are encrypted using the simple monoalphabetic substitution cipher which replaces A with B and vice versa: the expected pairs become AA 6.25%, AB 18.75%, BA 18.75%, and BB 56.25%. The overall probability of a coincidence in this situation is 62.5% (6.25% for AA + 56.25% for BB), exactly the same as for the unencrypted "plaintext" case. In effect, the new alphabet produced by the substitution is just a uniform renaming of the original character identities, which does not affect whether they match.

Now suppose that only one message (say, the second) is encrypted using the same substitution cipher (A,B)→(B,A). The expected pairs are now AA 18.75%, AB 56.25%, BA 6.25%, and BB 18.75%, so the probability of a coincidence is only 37.5% (18.75% for AA + 18.75% for BB). This is noticeably lower than the probability when same-language, same-alphabet texts were used. Evidently, coincidences are more likely when the most frequent letters in each text are the same.
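The two-letter arithmetic above can be checked directly. This snippet simply recomputes the pair probabilities; nothing here is specific to any real cipher:

```python
# Two-letter "language": P(A) = 0.75, P(B) = 0.25; the cipher swaps A and B.
p = {"A": 0.75, "B": 0.25}
swap = {"A": "B", "B": "A"}

# Coincidence probability = sum over letters of P(letter in text1) * P(same in text2).
both_plain   = sum(p[x] * p[x] for x in p)              # both texts unencrypted
both_swapped = sum(p[swap[x]] * p[swap[x]] for x in p)  # both texts swapped
one_swapped  = sum(p[x] * p[swap[x]] for x in p)        # only one text swapped

print(both_plain, both_swapped, one_swapped)  # 0.625 0.625 0.375
```

Encrypting both texts identically just relabels the letters, so the coincidence rate is unchanged; encrypting only one mixes frequent letters with rare ones, and the rate drops.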
The same principle applies to real languages like English, because certain letters, like E, occur much more frequently than other letters—a fact which is used in frequency analysis of substitution ciphers. Coincidences involving the letter E, for example, are relatively likely. So when any two English texts are compared, the coincidence count will be higher than when an English text and a foreign-language text are used. This effect can be subtle. For example, similar languages will have a higher coincidence count than dissimilar languages. Also, it is not hard to generate random text with a frequency distribution similar to real text, artificially raising the coincidence count. Nevertheless, this technique can be used effectively to identify when two texts are likely to contain meaningful information in the same language using the same alphabet, to discover periods for repeating keys, and to uncover many other kinds of nonrandom phenomena within or among ciphertexts. Expected values for various languages have been tabulated.[6] The above description is only an introduction to use of the index of coincidence, which is related to the general concept of correlation. Various forms of the index of coincidence have been devised; the "delta" I.C. (given by the formula above) in effect measures the autocorrelation of a single distribution, whereas a "kappa" I.C. is used when matching two text strings.[7] Although in some applications constant factors such as c and N can be ignored, in more general situations there is considerable value in truly indexing each I.C. against the value to be expected for the null hypothesis (usually: no match and a uniform random symbol distribution), so that in every situation the expected value for no correlation is 1.0. Thus, any form of I.C. can be expressed as the ratio of the number of coincidences actually observed to the number of coincidences expected (according to the null model), using the particular test setup.
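A kappa-style I.C. for two aligned strings can be sketched as follows; this is a minimal version assuming equal-length inputs over a c-letter alphabet, with an illustrative function name:

```python
def kappa_ic(a, b, c=26):
    # Count positions where the two texts carry the same letter, then
    # normalize by the N/c matches expected under the uniform null model.
    if len(a) != len(b) or not a:
        raise ValueError("texts must be aligned and non-empty")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / (len(a) / c)
```

Identical texts score c (here 26), while unrelated uniform-random texts average about 1.0, so the no-correlation baseline is 1.0 as described above.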
From the foregoing, it is easy to see that the formula for kappa I.C. is IC_kappa = (c/N) Σ_{j=1}^{N} [a_j = b_j], where N is the common aligned length of the two texts A and B, and the bracketed term is defined as 1 if the j-th letter of text A matches the j-th letter of text B, otherwise 0. A related concept, the "bulge" of a distribution, measures the discrepancy between the observed I.C. and the null value of 1.0. The number of cipher alphabets used in a polyalphabetic cipher may be estimated by dividing the expected bulge of the delta I.C. for a single alphabet by the observed bulge for the message, although in many cases (such as when a repeating key was used) better techniques are available. As a practical illustration of the use of I.C., suppose that we have intercepted a ciphertext message. (The grouping into five characters is just a telegraphic convention and has nothing to do with actual word lengths.) Suspecting this to be an English plaintext encrypted using a Vigenère cipher with normal A–Z components and a short repeating keyword, we can consider the ciphertext "stacked" into some number of columns, for example seven. If the key size happens to have been the same as the assumed number of columns, then all the letters within a single column will have been enciphered using the same key letter, in effect a simple Caesar cipher applied to a random selection of English plaintext characters. The corresponding set of ciphertext letters should have a roughness of frequency distribution similar to that of English, although the letter identities have been permuted (shifted by a constant amount corresponding to the key letter). Therefore, if we compute the aggregate delta I.C. for all columns ("delta bar"), it should be around 1.73. On the other hand, if we have incorrectly guessed the key size (number of columns), the aggregate delta I.C. should be around 1.00. So we compute the delta I.C.
for assumed key sizes from one to ten. The results indicate that the key size is most likely five. If the actual size is five, we would expect a width of ten to also report a high I.C., since each of its columns also corresponds to a simple Caesar encipherment, and we confirm this. So we should stack the ciphertext into five columns. We can now try to determine the most likely key letter for each column considered separately, by performing trial Caesar decryption of the entire column for each of the 26 possibilities A–Z for the key letter, and choosing the key letter that produces the highest correlation between the decrypted column letter frequencies and the relative letter frequencies for normal English text. That correlation, which we don't need to worry about normalizing, can be readily computed as Σ_{i=1}^{c} n_i f_i, where n_i are the observed column letter frequencies and f_i are the relative letter frequencies for English. When we try this, the best-fit key letters are reported to be "EVERY," which we recognize as an actual word; using that for Vigenère decryption produces the plaintext, and the original message is recovered once word divisions have been restored at the obvious positions. "XX" are evidently "null" characters used to pad out the final group for transmission. This entire procedure could easily be packaged into an automated algorithm for breaking such ciphers. Due to normal statistical fluctuation, such an algorithm will occasionally make wrong choices, especially when analyzing short ciphertext messages.
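The whole procedure (stack the ciphertext, score candidate widths by the average column delta I.C., then pick each column's key letter by the correlation sum of n_i times f_i) can be sketched as follows. The letter-frequency table is approximate and all helper names are illustrative assumptions, not the article's:

```python
from collections import Counter

# Approximate relative letter frequencies for English, A..Z (illustrative
# values; any standard table serves the same purpose).
ENGLISH_FREQ = [
    0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
    0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
    0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001,
]

def delta_ic(column, c=26):
    # Normalized delta I.C. of a single string of A-Z letters.
    n = len(column)
    if n < 2:
        return 0.0
    counts = Counter(column)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1) / c)

def delta_bar(cipher, width):
    # Average delta I.C. over the columns of the text stacked `width` wide;
    # it peaks when `width` is a multiple of the key length.
    columns = [cipher[i::width] for i in range(width)]
    return sum(delta_ic(col) for col in columns) / width

def best_caesar_shift(column):
    # Choose the trial Caesar shift maximizing the correlation sum of
    # observed counts n_i times English frequencies f_i.
    def score(shift):
        counts = Counter((ord(ch) - 65 - shift) % 26 for ch in column)
        return sum(counts[i] * ENGLISH_FREQ[i] for i in range(26))
    return max(range(26), key=score)
```

Stacking at the true key length leaves each column a single Caesar alphabet, so `delta_bar` jumps toward English-like values there, while wrong widths stay near 1.0; `best_caesar_shift` then recovers each column's key letter.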
https://en.wikipedia.org/wiki/Index_of_coincidence
Zipf's law (/zɪf/; German pronunciation: [tsɪpf]) is an empirical law stating that when a list of measured values is sorted in decreasing order, the value of the n-th entry is often approximately inversely proportional to n. The best known instance of Zipf's law applies to the frequency table of words in a text or corpus of natural language: word frequency ∝ 1 / word rank. It is usually found that the most common word occurs approximately twice as often as the next most common one, three times as often as the third most common, and so on. For example, in the Brown Corpus of American English text, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852).[2] The law is often used in the following form, called the Zipf–Mandelbrot law: frequency ∝ 1 / (rank + b)^a, where a and b are fitted parameters, with a ≈ 1 and b ≈ 2.7.[1] This law is named after the American linguist George Kingsley Zipf,[3][4][5] and is still an important concept in quantitative linguistics. It has been found to apply to many other types of data studied in the physical and social sciences. In mathematical statistics, the concept has been formalized as the Zipfian distribution: a family of related discrete probability distributions whose rank-frequency distribution is an inverse power-law relation. They are related to Benford's law and the Pareto distribution. Some sets of time-dependent empirical data deviate somewhat from Zipf's law. Such empirical distributions are said to be quasi-Zipfian.
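Rank-frequency tables like the Brown Corpus one can be produced mechanically. A minimal sketch, with an illustrative function name and naive whitespace tokenization:

```python
from collections import Counter

def rank_frequency(text):
    # Rank words by decreasing count; under Zipf's law, count ~ C / rank.
    counts = Counter(text.lower().split())
    ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [(rank, word, count) for rank, (word, count) in enumerate(ranked, start=1)]
```

On a Zipfian corpus the product rank × count is roughly constant down the table, which is exactly the 1/rank relation stated above.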
In 1913, the German physicist Felix Auerbach observed an inverse proportionality between the population sizes of cities and their ranks when sorted by decreasing order of that variable.[6] Zipf's law had been discovered before Zipf,[a] first by the French stenographer Jean-Baptiste Estoup in 1916,[8][7] and also by G. Dewey in 1923,[9] and by E. Condon in 1928.[10] The same relation for frequencies of words in natural language texts was observed by George Zipf in 1932,[4] but he never claimed to have originated it. In fact, Zipf did not like mathematics. In his 1932 publication,[11] the author speaks with disdain about mathematical involvement in linguistics (a.o. ibidem, p. 21). The only mathematical expression Zipf used looks like ab² = constant, which he "borrowed" from Alfred J. Lotka's 1926 publication.[12] The same relationship was found to occur in many other contexts, and for other variables besides frequency.[1] For example, when corporations are ranked by decreasing size, their sizes are found to be inversely proportional to the rank.[13] The same relation is found for personal incomes (where it is called the Pareto principle[14]), the number of people watching the same TV channel,[15] notes in music,[16] cells' transcriptomes,[17][18] and more. In 1992 bioinformatician Wentian Li published a short paper[19] showing that Zipf's law emerges even in randomly generated texts. It included proof that the power-law form of Zipf's law was a byproduct of ordering words by rank.
Formally, the Zipf distribution on N elements assigns to the element of rank k (counting from 1) the probability f(k; N) = (1/H_N)(1/k) if 1 ≤ k ≤ N, and 0 if k < 1 or N < k, where H_N is a normalization constant, the N-th harmonic number: H_N = Σ_{k=1}^{N} 1/k. The distribution is sometimes generalized to an inverse power law with exponent s instead of 1.[20] Namely, f(k; N, s) = (1/H_{N,s})(1/k^s), where H_{N,s} is a generalized harmonic number: H_{N,s} = Σ_{k=1}^{N} 1/k^s. The generalized Zipf distribution can be extended to infinitely many items (N = ∞) only if the exponent s exceeds 1. In that case, the normalization constant H_{N,s} becomes Riemann's zeta function: ζ(s) = Σ_{k=1}^{∞} 1/k^s < ∞. The infinite-item case is characterized by the zeta distribution and is called Lotka's law. If the exponent s is 1 or less, the normalization constant H_{N,s} diverges as N tends to infinity. Empirically, a data set can be tested to see whether Zipf's law applies by checking the goodness of fit of an empirical distribution to the hypothesized power-law distribution with a Kolmogorov–Smirnov test, and then comparing the (log) likelihood ratio of the power-law distribution to alternative distributions like an exponential distribution or lognormal distribution.[21] Zipf's law can be visualized by plotting the item frequency data on a log-log graph, with the axes being the logarithm of rank order and the logarithm of frequency. The data conform to Zipf's law with exponent s to the extent that the plot approximates a linear (more precisely, affine) function with slope −s.
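The pmf above translates directly into code. This sketch recomputes the (generalized) harmonic normalizer on every call, which is acceptable for illustration:

```python
def zipf_pmf(k, N, s=1.0):
    # Generalized Zipf pmf: f(k; N, s) = k^(-s) / H_{N,s} for 1 <= k <= N,
    # where H_{N,s} is the generalized harmonic number, and 0 otherwise.
    if k < 1 or k > N:
        return 0.0
    h = sum(1.0 / j**s for j in range(1, N + 1))
    return k**(-s) / h
```

The probabilities sum to 1 over k = 1..N, and the ratio f(1)/f(k) equals k^s, matching the rank relations stated above.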
For exponent s = 1, one can also plot the reciprocal of the frequency (mean interword interval) against rank, or the reciprocal of rank against frequency, and compare the result with the line through the origin with slope 1.[3] Although Zipf's law holds for most natural languages, and even certain artificial ones such as Esperanto[22] and Toki Pona,[23] the reason is still not well understood.[24] Recent reviews of generative processes for Zipf's law include Mitzenmacher, "A Brief History of Generative Models for Power Law and Lognormal Distributions",[25] and Simkin, "Re-inventing Willis".[26] However, it may be partly explained by statistical analysis of randomly generated texts. Wentian Li has shown that in a document in which each character has been chosen randomly from a uniform distribution of all letters (plus a space character), the "words" with different lengths follow the macro-trend of Zipf's law (the more probable words are the shortest and have equal probability).[19] In 1959, Vitold Belevitch observed that if any of a large class of well-behaved statistical distributions (not only the normal distribution) is expressed in terms of rank and expanded into a Taylor series, the first-order truncation of the series results in Zipf's law. Further, a second-order truncation of the Taylor series results in Mandelbrot's law.[27][28] The principle of least effort is another possible explanation: Zipf himself proposed that neither speakers nor hearers using a given language want to work any harder than necessary to reach understanding, and the process that results in approximately equal distribution of effort leads to the observed Zipf distribution.[5][29] A minimal explanation assumes that words are generated by monkeys typing randomly.
If language is generated by a single monkey typing randomly, with fixed and nonzero probability of hitting each letter key or white space, then the words (letter strings separated by white spaces) produced by the monkey follow Zipf's law.[30] Another possible cause for the Zipf distribution is a preferential attachment process, in which the value x of an item tends to grow at a rate proportional to x (intuitively, "the rich get richer" or "success breeds success"). Such a growth process results in the Yule–Simon distribution, which has been shown to fit word frequency versus rank in language[31] and population versus city rank[32] better than Zipf's law. It was originally derived to explain population versus rank in species by Yule, and applied to cities by Simon. A similar explanation is based on atlas models, systems of exchangeable positive-valued diffusion processes with drift and variance parameters that depend only on the rank of the process. It has been shown mathematically that Zipf's law holds for atlas models that satisfy certain natural regularity conditions.[33][34] A generalization of Zipf's law is the Zipf–Mandelbrot law, proposed by Benoit Mandelbrot, whose frequencies are f(k; N, q, s) = (1/C)(1/(k + q)^s).[clarification needed] The constant C is the Hurwitz zeta function evaluated at s. Zipfian distributions can be obtained from Pareto distributions by an exchange of variables.[20] The Zipf distribution is sometimes called the discrete Pareto distribution[35] because it is analogous to the continuous Pareto distribution in the same way that the discrete uniform distribution is analogous to the continuous uniform distribution. The tail frequencies of the Yule–Simon distribution are approximately f(k; ρ) ≈ [constant] / k^(ρ+1) for any choice of ρ > 0.
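Li's observation about random typing can be checked without simulation by enumerating word probabilities exactly under the model. Everything here (the names, the small alphabet, the space probability) is an illustrative assumption:

```python
from itertools import product

def monkey_word_probs(m=4, p_space=0.2, max_len=3):
    # Exact relative word probabilities under the random-typing model:
    # each keystroke is a space with probability p_space, otherwise one of
    # m equiprobable letters, so a word of length L occurs with probability
    # ((1 - p_space)/m)**L * p_space.
    p_letter = (1 - p_space) / m
    alphabet = "abcdefghijklmnopqrstuvwxyz"[:m]
    words = [
        ("".join(w), p_letter**L * p_space)
        for L in range(1, max_len + 1)
        for w in product(alphabet, repeat=L)
    ]
    words.sort(key=lambda wp: -wp[1])
    return words
```

The ranked list shows the step structure Li described: all m^L words of a given length L share one probability, and shorter words always outrank longer ones, which produces the macro-trend of Zipf's law.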
In the parabolic fractal distribution, the logarithm of the frequency is a quadratic polynomial of the logarithm of the rank. This can markedly improve the fit over a simple power-law relationship.[36] Like fractal dimension, it is possible to calculate Zipf dimension, which is a useful parameter in the analysis of texts.[37] It has been argued that Benford's law is a special bounded case of Zipf's law,[36] with the connection between these two laws being explained by their both originating from scale-invariant functional relations from statistical physics and critical phenomena.[38] The ratios of probabilities in Benford's law are not constant. The leading digits of data satisfying Zipf's law with s = 1 satisfy Benford's law. Following Auerbach's 1913 observation, there has been substantial examination of Zipf's law for city sizes.[39] However, more recent empirical[40][41] and theoretical[42] studies have challenged the relevance of Zipf's law for cities. In many texts in human languages, word frequencies approximately follow a Zipf distribution with exponent s close to 1; that is, the most common word occurs about n times as often as the n-th most common one. The actual rank-frequency plot of a natural-language text deviates to some extent from the ideal Zipf distribution, especially at the two ends of the range. The deviations may depend on the language, on the topic of the text, on the author, on whether the text was translated from another language, and on the spelling rules used.[citation needed] Some deviation is inevitable because of sampling error. At the low-frequency end, where the rank approaches N, the plot takes a staircase shape, because each word can occur only an integer number of times.
In some Romance languages, the frequencies of the dozen or so most frequent words deviate significantly from the ideal Zipf distribution, because those words include articles inflected for grammatical gender and number.[citation needed] In many East Asian languages, such as Chinese, Tibetan, and Vietnamese, each morpheme (word or word piece) consists of a single syllable; an English word is often translated to a compound of two such syllables. The rank-frequency table for those morphemes deviates significantly from the ideal Zipf law, at both ends of the range.[citation needed] Even in English, the deviations from the ideal Zipf's law become more apparent as one examines large collections of texts. Analysis of a corpus of 30,000 English texts showed that only about 15% of the texts in it have a good fit to Zipf's law. Slight changes in the definition of Zipf's law can increase this percentage up to close to 50%.[45] In these cases, the observed frequency-rank relation can be modeled more accurately by separate Zipf–Mandelbrot distributions for different subsets or subtypes of words. This is the case for the frequency-rank plot of the first 10 million words of the English Wikipedia. In particular, the frequencies of the closed class of function words in English are better described with s lower than 1, while open-ended vocabulary growth with document size and corpus size requires s greater than 1 for convergence of the generalized harmonic series.[3] When a text is encrypted in such a way that every occurrence of each distinct plaintext word is always mapped to the same encrypted word (as in the case of simple substitution ciphers, like the Caesar ciphers, or simple codebook ciphers), the frequency-rank distribution is not affected.
On the other hand, if separate occurrences of the same word may be mapped to two or more different words (as happens with the Vigenère cipher), the Zipf distribution will typically have a flat part at the high-frequency end.[citation needed] Zipf's law has been used for extraction of parallel fragments of texts out of comparable corpora.[46] Laurance Doyle and others have suggested the application of Zipf's law for detection of alien language in the search for extraterrestrial intelligence.[47][48] The frequency-rank word distribution is often characteristic of the author and changes little over time. This feature has been used in the analysis of texts for authorship attribution.[49][50] The word-like sign groups of the 15th-century codex Voynich Manuscript have been found to satisfy Zipf's law, suggesting that the text is most likely not a hoax but rather written in an obscure language or cipher.[51][52] Recent analysis of whale vocalization samples shows they contain recurring phonemes whose distribution appears to closely obey Zipf's law.[53] While this is not proof that whale communication is a natural language, it is an intriguing discovery.
https://en.wikipedia.org/wiki/Zipf%27s_law
A Void, translated from the original French La Disparition (lit. "The Disappearance"), is a 300-page French lipogrammatic novel, written in 1969 by Georges Perec, entirely without using the letter e, following Oulipo constraints. Perec would go on to write with the inverse constraint in Les Revenentes, with only the vowel "e" present in the work. Ian Monk would later translate Les Revenentes into English under the title The Exeter Text. It was translated into English by Gilbert Adair, with the title A Void, for which he won the Scott Moncrieff Prize in 1995.[1] The Adair translation of the book also won the 1996 Firecracker Alternative Book Award for Fiction.[2] Three other English translations are titled A Vanishing by Ian Monk,[3] Vanish'd! by John Lee,[4] and Omissions by Julian West.[5] All translators have imposed upon themselves a lipogrammatic constraint similar to the original, avoiding the most commonly used letter of the alphabet. This precludes the use of words normally considered essential such as je ("I"), et ("and"), and le (masculine "the") in French, as well as "me", "be", and "the" in English. The Spanish version contains no a, which is the second most commonly used letter in the Spanish language (the first being e), while the Russian version contains no о. The Japanese version does not use syllables containing the sound "i" (い, き, し, etc.) at all. A Void's plot follows a group of individuals looking for a missing companion, Anton Vowl. It is in part a parody of noir and horror fiction, with many stylistic tricks, gags, plot twists, and a grim conclusion.
On many occasions it implicitly talks about its own lipogrammatic limitation, highlighting its unusual syntax. A Void's protagonists finally work out which symbol is missing, but find it a hazardous topic to discuss, as any who try to bypass this story's constraint risk fatal injury. Philip Howard, writing a lipogrammatic appraisal of A Void in his column Lost Words, said: "This is a story chock-full of plots and sub-plots, of loops within loops, of trails in pursuit of trails, all of which allow its author an opportunity to display his customary virtuosity as an avant-gardist magician, acrobat and clown." Both of Georges Perec's parents perished in World War II: his father as a soldier and his mother in the Holocaust. He was brought up by his aunt and uncle after surviving the war. Warren Motte interprets the absence of the letter e in the book as a metaphor for Perec's own sense of loss and incompleteness:[6] The absence of a sign is always the sign of an absence, and the absence of the E in A Void announces a broader, cannily coded discourse on loss, catastrophe, and mourning. Perec cannot say the words père ["father"], mère ["mother"], parents ["parents"], famille ["family"] in his novel, nor can he write the name Georges Perec. In short, each "void" in the novel is abundantly furnished with meaning, and each points toward the existential void that Perec grappled with throughout his youth and early adulthood. A strange and compelling parable of survival becomes apparent in the novel, too, if one is willing to reflect on the struggles of a Holocaust orphan trying to make sense out of absence, and those of a young writer who has chosen to do without the letter that is the beginning and end of écriture ["writing"].
https://en.wikipedia.org/wiki/A_Void
Georges Perec (French: [ʒɔʁʒ peʁɛk];[1] 7 March 1936 – 3 March 1982) was a French novelist, filmmaker, documentalist, and essayist. He was a member of the Oulipo group. His father died as a soldier early in the Second World War and his mother was killed in the Holocaust. Many of his works deal with absence, loss, and identity, often through word play.[2] Born in a working-class district of Paris, Perec was the only son of Icek Judko and Cyrla (Schulewicz) Peretz, Polish Jews who had emigrated to France in the 1920s. He was a distant relative of the Yiddish writer Isaac Leib Peretz. Perec's father, who enlisted in the French Army during World War II, died in 1940 from untreated gunfire or shrapnel wounds, and his mother was killed in the Holocaust, probably in Auschwitz sometime after 1943. Perec was taken into the care of his paternal aunt and uncle in 1942, and in 1945 he was formally adopted by them. Perec started writing reviews and essays for La Nouvelle Revue française and Les Lettres nouvelles, prominent literary publications, while studying history and sociology at the Sorbonne. In 1958/59 Perec served in the French army as a paratrooper (XVIIIe Régiment de Chasseurs Parachutistes); he married Paulette Petras after being discharged. They spent one year (1960/1961) in Sfax, Tunisia, where Paulette worked as a teacher; these experiences are reflected in Things: A Story of the Sixties, which is about a young Parisian couple who also spend a year in Sfax. In 1961 Perec began working as an archivist in the research library of the Neurophysiological Research Laboratory, a unit funded by the CNRS and attached to the Hôpital Saint-Antoine in Paris, a low-paid position which he retained until 1978. A few reviewers have noted that the daily handling of records and varied data may have influenced his literary style.
In any case, Perec's work on the reassessment of the academic journals under subscription was influenced by a talk about the handling of scientific information given by Eugene Garfield in Paris, and he was introduced to Marshall McLuhan by Jean Duvignaud. Perec's other major influence was the Oulipo, which he joined in 1967, meeting Raymond Queneau, among others. Perec dedicated his masterpiece, La Vie mode d'emploi (Life: A User's Manual), to Queneau, who died before it was published. Perec began working on a series of radio plays with his translator Eugen Helmle and the musician Philippe Drogoz in the late 1960s; less than a decade later, he was making films. His first cinematic work, based on his novel Un Homme qui dort, was co-directed by Bernard Queysanne and won the feature-film Prix Jean Vigo in 1974. Perec also created crossword puzzles for Le Point from 1976 on. La Vie mode d'emploi (1978) brought Perec some financial and critical success—it won the Prix Médicis—and allowed him to turn to writing full-time. He was a writer-in-residence at the University of Queensland in Australia in 1981, during which time he worked on 53 Jours (53 Days), which remained unfinished. Shortly after his return from Australia, his health deteriorated. A heavy smoker, he was diagnosed with lung cancer. He died the following year in Ivry-sur-Seine at age 45, four days shy of his 46th birthday; his ashes are held at the columbarium of the Père Lachaise Cemetery. Many of Perec's novels and essays abound with experimental word play, lists, and attempts at classification, and they are usually tinged with melancholy. Perec's first novel Les Choses (published in English as Things: A Story of the Sixties) (1965) was awarded the Prix Renaudot. Perec's most famous novel La Vie mode d'emploi (Life: A User's Manual) was published in 1978.
Its title page describes it as "novels", in the plural, the reasons for which become apparent on reading. La Vie mode d'emploi is a tapestry of interwoven stories and ideas as well as literary and historical allusions, based on the lives of the inhabitants of a fictitious Parisian apartment block. It was written according to a complex plan of writing constraints and is primarily constructed from several elements, each adding a layer of complexity. The 99 chapters of his 600-page novel move like a knight's tour of a chessboard around the room plan of the building, describing the rooms and stairwell and telling the stories of the inhabitants. At the end, it is revealed that the whole book actually takes place in a single moment, with a final twist that is an example of "cosmic irony". It was translated into English by David Bellos in 1987. Perec is noted for his constrained writing. His 300-page novel La disparition (1969) is a lipogram, written with natural sentence structure and correct grammar, but using only words that do not contain the letter "e". It has been translated into English by Gilbert Adair under the title A Void (1994). His novella Les revenentes (1972) is a complementary univocalic piece in which the letter "e" is the only vowel used. This constraint affects even the title, which would conventionally be spelt Revenantes. An English translation by Ian Monk was published in 1996 as The Exeter Text: Jewels, Secrets, Sex in the collection Three. It has been remarked by Jacques Roubaud that these two novels draw words from two disjoint sets of the French language, and that a third novel would be possible, made from the words not used so far (those containing both "e" and a vowel other than "e"). W ou le souvenir d'enfance (W, or the Memory of Childhood, 1975) is a semi-autobiographical work that is hard to classify.
Two alternating narratives make up the volume: the first is a fictional outline of a remote island country called "W", which at first appears to be a utopian society modelled on the Olympic ideal but is gradually exposed as a horrifying, totalitarian prison much like a concentration camp. The second is a description of Perec's childhood during and after World War II. Both narratives converge towards the end, highlighting the common theme of the Holocaust. "Cantatrix sopranica L. Scientific Papers" is a spoof scientific paper detailing experiments on the "yelling reaction" provoked in sopranos by pelting them with rotten tomatoes. All references in the paper are multi-lingual puns and jokes; e.g., "(Karybb & Szyla, 1973)".[5] David Bellos, who has translated several of Perec's works, wrote an extensive biography of Perec entitled Georges Perec: A Life in Words, which won the Académie Goncourt's bourse for biography in 1994. The Association Georges Perec has extensive archives on the author in Paris.[6] In 1992 Perec's initially rejected novel Gaspard pas mort (Gaspard not dead), believed to be lost, was found by David Bellos amongst papers in the house of Perec's friend Alain Guérin. The novel was reworked several times and retitled Le Condottière,[7] and published in 2012; its English translation by Bellos followed in 2014 as Portrait of a Man, after the 1475 painting of that name by Antonello da Messina.[8] The initial title borrows the name Gaspard from the Paul Verlaine poem "Gaspar Hauser Chante"[2] (inspired by Kaspar Hauser, from the 1881 collection Sagesse), and characters named "Gaspard" appear in both W, or the Memory of Childhood and Life: A User's Manual, while in MICRO-TRADUCTIONS, 15 variations discrètes sur un poème connu he creatively re-writes the Verlaine poem fifteen times. Asteroid no. 2817, discovered in 1982, was named after Perec. In 1994, a street in the 20th arrondissement of Paris was named after him, rue Georges-Perec.
The French postal service issued a stamp in 2002 in his honour; it was designed by Marc Taraskoff and engraved by Pierre Albuisson. For his work, Perec won the Prix Renaudot in 1965, the Prix Jean Vigo in 1974, and the Prix Médicis in 1978. He was featured as a Google Doodle on his 80th birthday.[9] The most complete bibliography of Perec's works is Bernard Magné's Tentative d'inventaire pas trop approximatif des écrits de Georges Perec (Toulouse, Presses Universitaires du Mirail, 1993).
https://en.wikipedia.org/wiki/Georges_Perec
Gadsby is a 1939 novel by Ernest Vincent Wright, written without words that contain the letter E, the most common letter in English. A work that deliberately avoids certain letters is known as a lipogram. The plot revolves around the dying fictional city of Branton Hills, which is revitalized as a result of the efforts of protagonist John Gadsby and a youth organizer. Though vanity published and little noticed in its time, the book has since become a favorite of fans of constrained writing and is a sought-after rarity among some book collectors. The first edition carries on title page and cover the subtitle A Story of Over 50,000 Words Without Using the Letter "E" (with the variant 50,000 Word Novel Without the Letter "E" on the dust jacket), sometimes dropped from later reprints. In the introduction to the book (which, not being part of the story, does contain the letter "e"), Wright says his primary difficulty was avoiding the "-ed" suffix for past-tense verbs. He made extensive use of verbs that do not take the -ed suffix and constructions with "do" and "did" (for instance "did walk" instead of "walked"). Scarcity of word options also drastically limited discussion involving quantity – Wright could not write about any number between six and thirty – pronouns, and many common words.[1] An article in the linguistic periodical Word Ways said that 250 of the 500 most commonly used words in English were still available to Wright despite the omission of words with e.[2] Wright uses abbreviations on occasion, but only if the full form is similarly lipogrammatic; e.g., "Dr." (Doctor) and "P.S." (postscript) would be allowed but not "Mr." (Mister). Wright also turns famous sayings into lipograms.
Instead of William Congreve's original line, "Musick has charms to soothe a savage breast", Wright writes that music "hath charms to calm a wild bosom." John Keats' "a thing of beauty is a joy forever" becomes "a charming thing is a joy always".[3] In other respects, Wright does not avoid topics that would otherwise require the letter "e"; for example, a detailed description of a horse-drawn fire engine is made without using the words "horse", "fire", or "engine".

John Gadsby, 50, is alarmed by the decline of his hometown, Branton Hills, and rallies the city's youth to form an "Organization of Youth" to build civic spirit and improve living standards. Despite some opposition, Gadsby and his youthful army transform Branton Hills from a stagnant town into a bustling, thriving city. Towards the book's conclusion, members of Gadsby's organization receive diplomas honoring their work. Gadsby becomes mayor and helps grow Branton Hills' population from 2,000 to 60,000. The story starts around 1906 and continues through World War I, Prohibition, and President Warren G. Harding's administration. Gadsby is divided into two parts: the first, about a quarter of the book's total length, is strictly a history of the city of Branton Hills and John Gadsby's place in it, while the second part fleshes out its main characters. The novel is written from the point of view of an anonymous narrator, who continually complains about his poor writing skills and often uses circumlocution. "Now, naturally, in writing such a story as this, with its conditions as laid down in its Introduction, it is not surprising that an occasional 'rough spot' in composition is found", the narrator says.
"So I trust that a critical public will hold constantly in mind that I am voluntarily avoiding words containing that symbol which is, by far, of most common inclusion in writing our Anglo-Saxon as it is, today".[4]

The book's opening two paragraphs are:[5]

If Youth, throughout all history, had had a champion to stand up for it; to show a doubting world that a child can think; and, possibly, do it practically; you wouldn't constantly run across folks today who claim that "a child don't know anything." A child's brain starts functioning at birth; and has, amongst its many infant convolutions, thousands of dormant atoms, into which God has put a mystic possibility for noticing an adult's act, and figuring out its purport.

Up to about its primary school days a child thinks, naturally, only of play. But many a form of play contains disciplinary factors. "You can't do this," or "that puts you out," shows a child that it must think, practically, or fail. Now, if, throughout childhood, a brain has no opposition, it is plain that it will attain a position of "status quo," as with our ordinary animals. Man knows not why a cow, dog or lion was not born with a brain on a par with ours; why such animals cannot add, subtract, or obtain from books and schooling, that paramount position which Man holds today.

Wright appears to have worked on the manuscript for several years. Though its official publication date is 1939, newspaper humor columns referred to his manuscript of a book without an "e" years earlier. Prior to publication, he occasionally referred to his manuscript as Champion of Youth. In October 1930, while Wright was living near Tampa, Florida, he wrote a letter to The Evening Independent newspaper, boasted that he had written a fine lipogrammatic work, and suggested the paper hold a lipogram competition, with $250 for the winner.
The paper turned him down.[6] Wright struggled to find a publisher for the book, and eventually used Wetzel Publishing Co., a self-publishing press. A 2007 post on the Bookride blog about rare books says a warehouse holding copies of Gadsby burned shortly after the book was printed, destroying "most copies of the ill fated novel". The blog post says the book was never reviewed "and only kept alive by the efforts of a few avant garde French intellos and assorted connoisseurs of the odd, weird and zany". The book's scarcity and oddness have seen original copies priced at $4,000[7] to $7,500[8] by book dealers. Wright died in the year of publication, 1939.

In 1937, Wright said writing the book was a challenge, and the author of an article on his efforts in The Oshkosh Daily recommended composing lipograms for insomnia sufferers.[9] Wright said in his introduction to Gadsby that "this story was written, not through any attempt to attain literary merit, but due to a somewhat balky nature, caused by hearing it so constantly claimed that 'it can't be done'". He said he tied down the "e" key on his typewriter while completing the final manuscript. "This was done so that none of that vowel might slip in, accidentally; and many did try to do so!"[10] And in fact, the 1939 printing by the Wetzel Publishing Co.
contains four such slips: the word "the" on pages 51, 103 and 124, and the word "officers" on page 213.[11][12][13][14][non-primary source needed]

In her 1943 novel The Fountainhead, Ayn Rand satirically imagines a "Council of American Writers", which includes "...a youth who had written a thousand-page novel without a single letter o..."[15] La Disparition (A Void) is a 1969 lipogrammatic French novel partly inspired by Gadsby[16] that likewise omits the letter "e" and is 50,000 words long.[7][better source needed] Its author, Georges Perec, was introduced to Wright's book by a friend of his in Oulipo, a multinational constrained-writing group.[17] Perec was aware from Wright's lack of success that publication of such a work "was taking a risk" of finishing up "with nothing [but] a Gadsby".[18] As a nod to Wright, La Disparition contains a character named "Lord Gadsby V. Wright",[19] a tutor to protagonist Anton Voyl; in addition, a composition attributed to Voyl in La Disparition is actually a quotation from Gadsby.[3] Douglas Hofstadter's 1997 book Le Ton beau de Marot quotes parts of Gadsby for illustration.[20]

An article in the Oshkosh Daily in 1937 wrote (lipogrammatically) that the manuscript was "amazingly smooth. No halting parts. A continuity of plot and almost classic clarity obtains".[9] The Village Voice ran a humor column about Gadsby, in which author Ed Park jokingly aped Wright's style: "Lipogram aficionados—folks who lash words and (alas!) brains so as to omit particular symbols—did in fact gasp, saying, 'Hold that ringing communication tool for a bit! What about J. Gadsby?'".[3] David Crystal, host of BBC Radio 4's linguistics program English Now, called it "probably the most ambitious work ever attempted in this genre".[21] Trevor Kitson, writing in New Zealand's Manawatu Standard in 2006, said he was prompted to write a short lipogram after seeing Wright's book. The attempt gave him an appreciation for how difficult Wright's task was, but he was less impressed with the result.
"It seems extraordinarily twee (not that it uses that word, of course) and mostly about all-American kids going to church and getting married," he wrote.[22]
https://en.wikipedia.org/wiki/Gadsby_(novel)
Ernest Vincent Wright (1872 – October 7, 1939)[1] was an American writer known for his book Gadsby, a 50,000-word novel which (except for four unintentional instances) does not use the letter E.

The biographical details of his life are unclear. A 2002 article in the Village Voice by Ed Park said he might have been English by birth but was more probably American. The article said he might have served in the navy and that he has been incorrectly called a graduate of MIT; it says he attended a vocational high school attached to MIT in 1888, but there is no record that he graduated. Park said rumors that Wright died within hours of Gadsby being published are untrue.[2]

In October 1930, Wright approached the Evening Independent newspaper and proposed it sponsor a lipogram writing competition, with $250 for the winner. In the letter, he boasted of the quality of Gadsby. The newspaper declined his offer.[3] A 2007 post on the Bookride blog about rare books says Wright spent five and a half months writing Gadsby on a typewriter with the "e" key tied down. According to the unsigned entry at Bookride, a warehouse holding copies of Gadsby burned down shortly after the book was printed, destroying "most copies of the ill-fated novel". The blog post says the book was never reviewed "and only kept alive by the efforts of a few avant-garde French intellos and assorted connoisseurs of the odd, weird and zany". The book's scarcity and oddness have seen copies priced at $4,000 by book dealers.[4] Wright completed a draft of Gadsby in 1936, during a nearly six-month stint at the National Military Home in California. He failed to find a publisher and used a self-publishing press to bring out the book.[4]

Wright previously authored three other books: The Wonderful Fairies of the Sun (1896), The Fairies That Run the World and How They Do It (1903), and Thoughts and Reveries of an American Bluejacket (1918). His humorous poem, "When Father Carves the Duck", can be found in some anthologies.[5]
https://en.wikipedia.org/wiki/Ernest_Vincent_Wright
A lipogram (from Ancient Greek: λειπογράμματος, leipográmmatos, "leaving out a letter"[1]) is a kind of constrained writing or word game consisting of writing paragraphs or longer works in which a particular letter or group of letters is avoided.[2][3] Extended Ancient Greek texts avoiding the letter sigma are the earliest examples of lipograms.[4]

Writing a lipogram may be a trivial task when avoiding uncommon letters like Z, J, Q, or X, but it is much more challenging to avoid common letters like E, T, or A in English, as the author must omit many ordinary words. Grammatically meaningful and smooth-flowing lipograms can be difficult to compose. Identifying lipograms can also be problematic, as there is always the possibility that a given piece of writing in any language may be unintentionally lipogrammatic. For example, Poe's poem The Raven contains no Z, but there is no evidence that this was intentional. A pangrammatic lipogram is a text that uses every letter of the alphabet except one. For example, "The quick brown fox jumped over the lazy dog" omits the letter S, which the usual pangram includes by using the word jumps.

Lasus of Hermione, who lived during the second half of the sixth century BCE, is the most ancient author of a lipogram.
This makes the lipogram, according to Quintus Curtius Rufus, "the most ancient systematic artifice of Western literature".[5] Lasus did not like the sigma, and excluded it from one of his poems, entitled Ode to the Centaurs, of which nothing remains, as well as from a Hymn to Demeter, of which the first verse survives:[5]

Δάματρα μέλπω Κόραν τε Κλυμένοι᾽ ἄλοχον
μελιβόαν ὕμνον ἀναγνέων
Αἰολίδ᾽ ἂμ βαρύβρομον ἁρμονίαν

Dámatra mélpô Kóran te Klyménoi᾽ álochon
melibóan hýmnon anagnéôn
Aiolíd᾽ ám barýbromon harmonían

I chant of Demeter and Kore, Wife of the famed [Hades]
Lifting forth a gentle-voiced hymn
In the deep-toned Aeolian mode.[6]

The Greek poets of late antiquity Nestor of Laranda and Tryphiodorus wrote lipogrammatic adaptations of the Homeric poems: Nestor composed an Iliad, which was followed by Tryphiodorus' Odyssey.[7] Both were composed of 24 books (like the original Iliad and Odyssey), each book omitting a successive letter of the Greek alphabet: the first book omitted alpha, the second beta, and so forth.[4] Twelve centuries after Tryphiodorus wrote his lipogrammatic Odyssey, in 1711, the influential London essayist and journalist Joseph Addison commented on this work (although it had been lost), arguing that "it must have been amusing to see the most elegant word of the language rejected like "a diamond with a flaw in it" if it was tainted by the proscribed letter".[8]

Petrus Riga, a canon of Sainte-Marie de Reims during the 11th century, translated the Bible and, on account of its scriptural obscurities, called it Aurora. Each canto of the translation was followed by a summary in lipogrammatic verse: the first canto has no A, the second has no B, and so on. Two hundred and fifty manuscripts of Petrus Riga's Bible are still preserved.[9] There is a tradition of German and Italian lipograms excluding the letter R dating from the seventeenth century until modern times.
While some authors excluded other letters, it was the exclusion of the R which ensured that the practice of the lipogram continued into modern times. In German especially, the R, while not the most prevalent letter, has a very important grammatical role, as masculine pronouns and the like in the nominative case include an R (e.g. er, der, dieser, jener, welcher).[10] For the Italian authors, it seems to have been a profound dislike of the letter R that prompted them to write lipograms excluding this letter (and often only this letter).[11]

There is also a long tradition of vocalic lipograms, in which a vowel (or vowels) is omitted. This tends to be the most difficult form of the lipogram. The practice was developed mainly in Spain by the Portuguese author Alonso de Alcalá y Herrera, who published an octavo entitled Varios efectos de amor, en cinco novelas exemplares, y nuevo artificio para escribir prosa y versos sin una de las letras vocales. From Spain, the method moved into France[12] and England.[11]

One of the most remarkable examples of a lipogram is Ernest Vincent Wright's novel Gadsby (1939), which has over 50,000 words but not a single letter E.[13] Wright's self-imposed rule prohibited such common English words as the and he, plurals ending in -es, past tenses ending in -ed, and even abbreviations like Mr. (since it is short for Mister) or Rob (for Robert).
Yet the narration flows fairly smoothly, and the book was praised by critics for its literary merits.[14][15] Wright was motivated to write Gadsby by an earlier four-stanza lipogrammatic poem by another author.[16] Even earlier, Spanish playwright Enrique Jardiel Poncela published five short stories between 1926 and 1927, each one omitting a vowel; the best known are "El Chofer Nuevo" ("The New Driver"), without the letter A, and "Un marido sin vocación" ("A Vocationless Husband"), without the E.[17][18]

Interest in lipograms was rekindled by Georges Perec's novel La Disparition (1969) (openly inspired by Wright's Gadsby) and its English translation A Void by Gilbert Adair.[13] Both works omit the letter E, which is the most common letter in French as well as in English. A Spanish translation instead omits the letter A, the second most common letter in that language. Perec subsequently wrote Les revenentes (1972), a novel that uses no vowels except E. Perec was a member of Oulipo, a group of French authors who adopted a variety of constraints in their work. La Disparition is, to date, the longest lipogram in existence.[19]

Lipograms are sometimes dismissed by academia: "Literary history seems deliberately to ignore writing as practice, as work, as play".[20] In his book Rethinking Writing, Roy Harris notes that without the ability to analyse language, the lipogram could not exist. He argues that "the lipogram would be inconceivable unless there were writing systems based on fixed inventories of graphic units, and unless it were possible to classify written texts on the base of the presence or absence of one of those units irrespective of any phonetic value it might have or any function in the script". He goes on to argue that the Greeks were able to invent this game because they had a concept of literary notation.
Harris then argues that the proof of this knowledge is found in the Greek invention of "a literate game which consists, essentially, in superimposing the structure of a notation on the structure of texts".[21]

A pangrammatic lipogram or lipogrammatic pangram uses every letter of the alphabet except one. An example omitting the letter E is:[22]

A jovial swain should not complain
Of any buxom fair
Who mocks his pain and thinks it gain
To quiz his awkward air.

A longer example is "Fate of Nassan", an anonymous poem dating from pre-1870, in which each stanza is a lipogrammatic pangram using every letter of the alphabet except E.[23]

Bold Nassan quits his caravan,
A hazy mountain grot to scan;
Climbs jaggy rocks to find his way,
Doth tax his sight, but far doth stray.
Not work of man, nor sport of child
Finds Nassan on this mazy wild;
Lax grow his joints, limbs toil in vain—
Poor wight! why didst thou quit that plain?
Vainly for succour Nassan calls;
Know, Zillah, that thy Nassan falls;
But prowling wolf and fox may joy
To quarry on thy Arab boy.

Two other pangrammatic lipograms omitting only the letter E are:

Now focus your mind vigorously on this paragraph and on all its words. What’s so unusual about it? Don’t just zip through it quickly. Go through it slowly. Tax your brain as much as you can.

This is an unusual paragraph. It looks so ordinary and common. You would think that nothing is wrong with it, and, in fact, nothing is. But it is unusual. Can you find it? Just a quick think should do it. It is not taxing. You should find out without any hints. All that you must know to form your solution is right in front of you. I know if you work at it a bit, it will dawn on you. It’s so amazing and so obvious though you can still miss it.
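Claims like the ones above can be checked mechanically. The following is a minimal sketch in Python; the function names are ours, for illustration only.

```python
import string

def missing_letters(text):
    """Return the set of alphabet letters absent from text (case-insensitive)."""
    present = {c for c in text.lower() if c in string.ascii_lowercase}
    return set(string.ascii_lowercase) - present

def is_pangrammatic_lipogram(text, omitted):
    """True if text uses every letter of the alphabet except exactly `omitted`."""
    return missing_letters(text) == {omitted.lower()}

# Substituting "jumped" for "jumps" drops the S, turning the usual pangram
# into a pangrammatic lipogram:
print(is_pangrammatic_lipogram(
    "The quick brown fox jumped over the lazy dog", "s"))  # True
```

Run over a full text, `missing_letters` also answers the identification problem mentioned earlier: an empty result means the text is a pangram, while any non-empty result lists candidate (possibly unintentional) lipogram letters.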
The KJV Bible unintentionally contains two lipogrammatic pangrams: Ezra 7:21 lacks only J, and 1 Chronicles 12:40 lacks only Q.[24]

Another type of lipogram, which omits every instance of a letter from words that would otherwise contain it (as opposed to finding other words that do not contain the letter), was recorded by Willard R. Espy in 181 Missing O's,[25] based on C. C. Bombaugh's univocalic 'Incontrovertible Facts'.[26]

N mnk t gd t rb r cg r plt.
N fl s grss t blt Sctch cllps ht.
Frm Dnjn's tps n rnc rlls.
Lgwd, nt Lts, flds prt's bwls.
Bx tps, nt bttms, schl-bys flg fr sprt.
Trps f ld tsspts, ft, t st, cnsrt.
N cl mnsns blw sft n xfrd dns,
rthdx, dg-trt, bk-wrm Slmns.
Bld strgths f ghsts n hrrr shw.
n Lndn shp-frnts n hp-blssms grw.
T crcks f gld n dd lks fr fd.
n sft clth ftstls n ld fx dth brd.
Lng strm-tst slps frlrn, wrk n t prt.
Rks d nt rst n spns, nr wd-ccks snrt,
N dg n snw-drp r n cltsft rlls,
Nr cmmn frg cncct lng prtcls.

The above is also a conventional lipogram in that it omits the letters A, E, I, and U.

American author James Thurber wrote The Wonderful O (1957), a fairy tale in which villains ban the letter "O" from use by the inhabitants of the island of Ooroo. The book Ella Minnow Pea by Mark Dunn (2001) is described as a "progressively lipogrammatic epistolary fable": the plot of the story deals with a small country that begins to outlaw the use of various letters as the tiles bearing each letter fall off a statue. As each letter is outlawed within the story, it is (for the most part) no longer used in the text of the novel. It is not purely lipogrammatic, however, because the outlawed letters do appear in the text proper from time to time (the characters being penalized with banishment for their use), and when the plot requires a search for pangram sentences, all twenty-six letters are obviously in use.
Also, late in the text, the author begins using letters serving as homophones for the omitted letters (e.g., "ph" in place of an F, "g" in place of a C), which may be considered cheating. At the beginning of each chapter, the alphabet appears along with the sentence "The quick brown fox jumps over the lazy dog"; as letters are removed from the story, the alphabet and the sentence change.

In Rebeccah Giltrow's Twenty-Six Degrees, each of the twenty-six chapters, narrated by a different character, deliberately excludes one of the twenty-six letters while using the other twenty-five at least once, and each of the twenty-six letters is excluded from exactly one chapter (the first chapter excludes A, the second B, the third C, and so on, up to the twenty-sixth chapter, which excludes Z).[citation needed]

Cipher and Poverty (The Book of Nothing), a book by Mike Schertzer (1998), is presented as the writings of "a prisoner whose world had been impoverished to a single utterance ... who can find me here in this silence". The poems that follow use only the vowels A, E, I, and O, and the consonants C, D, F, H, L, M, N, R, S, T, and W, taken from that utterance. Eunoia, a book written by Canadian author Christian Bök (2001), is lipogrammatic. The title uses every vowel once. Each of the book's five chapters is a lipogram: the first uses only words containing the vowel A and no other vowel, the second only words with no vowel but E, and so on.[28]

In December 2009, a collective of crime writers, Criminal Brief, published eight days of articles as a Christmas-themed lipogrammatic exercise.[29] In June 2013, finance author Alan Corey published The Subversive Job Search,[30] a non-fiction lipogram that omits the letter Z.[31] In the ninth episode of the ninth season of How I Met Your Mother, "Platonish", Lily and Robin challenge Barney to obtain a girl's phone number without using the letter E.
A website called the Found Poetry Review asked each of its readers (as part of a larger series of challenges) to compose a poem avoiding all letters in the title of a newspaper the reader had already selected. For example, a reader using the New York Times could not use the letters E, I, K, M, N, O, R, S, T, W, and Y.[32] Grant Maierhofer's Ebb, a novel published in 2023 by Kernpunkt Press, was written entirely without the letter A.

In Turkey, the still-practiced tradition of "Lebdeğmez atışma" or "Dudak değmez aşık atışması" (literally: two troubadours throwing verses at each other where lips do not touch)[33] is a form of instantaneously improvised poetry sung by opposing Ashiks, who take turns artfully criticising each other one verse at a time, usually with each placing a pin between his upper and lower lips. The improvised song, accompanied by a Saz played by the ashik himself, thus consists only of labial lipograms, i.e. words in which the lips need not touch, effectively excluding the letters B, F, M, P and V.

The seventh- or eighth-century Dashakumaracharita by Daṇḍin includes a prominent lipogrammatic section at the beginning of the seventh chapter. Mantragupta is called upon to relate his adventures. However, during the previous night of vigorous lovemaking, his lips have been nibbled several times by his beloved; as a result, they are now swollen, making it painful for him to close them. Thus, throughout his narrative, he is compelled to refrain from using any labial consonants (प, फ, ब, भ, म).

In France, J. R. Ronden premièred la Pièce sans A (The Play without A) in 1816.[34] Jacques Arago wrote in 1853 a version of his Voyage autour du monde (Voyage around the World) without the letter a.[35] Georges Perec published in 1969 La Disparition, a novel without the letter e, the most commonly used letter of the alphabet in French.
Its published translation into English, A Void, by Gilbert Adair, won the Scott Moncrieff Prize in 1995.[36]

In Sweden, a form of lipogram was developed out of necessity at Linköping University. Because files were shared and moved between computer platforms on which the internal representations of the characters Å, Ä, Ö, å, ä, and ö (all moderately common vowels) differed, a tradition emerged of writing comments in source code without using those characters.[citation needed]

Zanzō ni Kuchibeni o (1989) by Yasutaka Tsutsui is a lipogrammatic novel in Japanese. The first chapter is written without the syllable あ, and the usable syllables decrease as the story advances; in the last chapter, the final syllable, ん, vanishes and the story closes. Zero Degree (1991) by Charu Nivedita is a lipogrammatic novel in Tamil. The entire novel is written without the common word ஒரு (oru, "one", also used as the indefinite article), and there are no punctuation marks in the novel except dots. The novel was later translated into English.[clarification needed]

Russian 18th-century poet Gavriil Derzhavin avoided the harsh R sound (and the letter Р that represents it) in his poem "The Nightingale" to render the bird's singing. The seventh-century Arab theologian Wasil ibn Ata gave a sermon without the letter rāʾ (R).[37] However, it was the 19th-century Mufti of Damascus, Mahmud Hamza "al-Hamzawi" (d. 1887), who produced perhaps the most remarkable work of this genre: a complete commentary of the Quran (published in two volumes) without dotted letters in either the introduction or the interlinear commentary.[38] This is all the more remarkable because dotted letters make up about half of the Arabic alphabet.

In the Hungarian language, "eszperente" is a game in which people speak using only words whose sole vowel is "e"; as this makes otherwise straightforward communication complicated, a lot of creative thinking is required to describe common terms in roundabout ways.
While a lipogram is usually limited to literary works, there are also chromatic lipograms, works of music that avoid the use of certain notes. Examples avoiding either the second, sixth, and tenth notes, or the third, seventh, and eleventh notes of a chromatic scale have been cited.[39]

A reverse lipogram, also known as an antilipo[40] or transgram,[41] is a type of constrained writing in which each word of the text must contain a particular letter. In Spanish, Mexican author Óscar de la Borbolla published in 1991 Las vocales malditas (The Cursed Vowels), a compilation of five short stories, each composed using a single, different vowel: "Cantata a Satanás" (Cantata to Satan), "El hereje rebelde" (The Rebel Heretic), "Mimí sin bikini" (Mimi without a Bikini), "Los locos somos otro cosmos" (We the Fools Are Another Cosmos), and "Un gurú vudú" (A Voodoo Guru).
https://en.wikipedia.org/wiki/Lipogram
An apostolic nuncio (Latin: nuntius apostolicus; also known as a papal nuncio or simply as a nuncio) is an ecclesiastical diplomat, serving as an envoy or a permanent diplomatic representative of the Holy See to a state or to an international organization. A nuncio is appointed by and represents the Holy See, and is the head of the diplomatic mission, called an apostolic nunciature, which is the equivalent of an embassy. The Holy See is legally distinct from the Vatican City and from the Catholic Church. In modern times, a nuncio is usually an archbishop.

An apostolic nuncio is generally equivalent in rank to an ambassador extraordinary and plenipotentiary, although in Catholic countries the nuncio often ranks above ambassadors in diplomatic protocol. A nuncio performs the same functions as an ambassador and has the same diplomatic privileges. Under the 1961 Vienna Convention on Diplomatic Relations, to which the Holy See is a party, a nuncio is an ambassador like those from any other country. The Vienna Convention allows the host state to grant the nuncio seniority of precedence over others of ambassadorial rank accredited to the same country, and to grant the deanship of that country's diplomatic corps to the nuncio regardless of seniority.[1] The representative of the Holy See in some situations is called a delegate or, in the case of the United Nations, a permanent observer. In the Holy See's hierarchy, these usually rank equally with a nuncio, but they do not have formal diplomatic status, though in some countries they have some diplomatic privileges.

In addition, the nuncio serves as the liaison between the Holy See and the Church in that particular nation, supervising the diocesan episcopate (usually a national or multinational conference of bishops, which has its own chairman, elected by its members). The nuncio has an important role in the selection of bishops. The name "nuncio" derives from the ancient Latin word nuntius, meaning "envoy" or "messenger".
Since such envoys are accredited to the Holy See as such, and not to the State of Vatican City, the term "nuncio" (rather than "ambassador") emphasizes the unique nature of the diplomatic mission.[2] The 1983 Code of Canon Law claims the "innate right" to send and receive delegates independent of interference by non-ecclesiastical civil power; canon law recognizes only international-law limitations on this right.[2]

In accordance with Article 16 of the Vienna Convention on Diplomatic Relations, many states (even ones that are not predominantly Catholic, such as Germany and Switzerland, including the great majority in central and western Europe and in the Americas) give precedence to the nuncio over other diplomatic representatives, according him the position of Dean of the Diplomatic Corps, reserved in other countries for the longest-serving resident ambassador.

Holy See representatives called permanent observers are accredited to several international organisations, including offices or agencies of the United Nations and other organizations, whether specialized in their mission, regional, or both. A permanent observer of the Holy See is always a cleric, often a titular archbishop with the rank of nuncio, but there has been considerable variation between offices and over time.[clarification needed]
https://en.wikipedia.org/wiki/Nuncio
In corpus linguistics, a hapax legomenon (/ˈhæpəks lɪˈɡɒmɪnɒn/, also /ˈhæpæks/ or /ˈheɪpæks/;[1][2] pl. hapax legomena; sometimes abbreviated to hapax, plural hapaxes) is a word or an expression that occurs only once within a context: either in the written record of an entire language, in the works of an author, or in a single text. The term is sometimes incorrectly used to describe a word that occurs in just one of an author's works but more than once in that particular work. Hapax legomenon is a transliteration of Greek ἅπαξ λεγόμενον, meaning "said once".[3] The related terms dis legomenon, tris legomenon, and tetrakis legomenon (/ˈdɪs/, /ˈtrɪs/, /ˈtɛtrəkɪs/) refer respectively to double, triple, or quadruple occurrences, but are far less commonly used.

Hapax legomena are quite common, as predicted by Zipf's law,[4] which states that the frequency of any word in a corpus is inversely proportional to its rank in the frequency table. For large corpora, about 40% to 60% of the words are hapax legomena, and another 10% to 15% are dis legomena.[5] Thus, in the Brown Corpus of American English, about half of the 50,000 distinct words are hapax legomena within that corpus.[6]

Hapax legomenon refers to the appearance of a word or an expression in a body of text, not to its origin or its prevalence in speech. It thus differs from a nonce word, which may never be recorded, may find currency and become widely recorded, or may appear several times in the work which coins it, and so on. Hapax legomena in ancient texts are usually difficult to decipher, since it is easier to infer meaning from multiple contexts than from just one. For example, many of the remaining undeciphered Mayan glyphs are hapax legomena, and Biblical (particularly Hebrew; see § Hebrew) hapax legomena sometimes pose problems in translation. Hapax legomena also pose challenges in natural language processing.[7]

Some scholars consider hapax legomena useful in determining the authorship of written works. P. N.
Harrison, in The Problem of the Pastoral Epistles (1921),[8] made hapax legomena popular among Bible scholars when he argued that there are considerably more of them in the three Pastoral Epistles than in the other Pauline Epistles. He argued that the number of hapax legomena in a putative author's corpus indicates his or her vocabulary and is characteristic of the author as an individual. Harrison's theory has faded in significance due to a number of problems raised by other scholars. For example, in 1896, W. P. Workman had counted the hapax legomena in each Pauline Epistle. At first glance, the last three totals (for the Pastoral Epistles) are not out of line with the others.[9] To take account of the varying length of the epistles, Workman also calculated the average number of hapax legomena per page of the Greek text, which ranged from 3.6 to 13, as summarized in the diagram on the right.[9] Although the Pastoral Epistles have more hapax legomena per page, Workman found the differences to be moderate in comparison with the variation among the other Epistles. This was reinforced when Workman looked at several plays by Shakespeare, which showed similar variation (from 3.4 to 10.4 per page of Irving's one-volume edition), as summarized in the second diagram on the right.[9]

Apart from author identity, several other factors can explain the number of hapax legomena in a work.[10] In the particular case of the Pastoral Epistles, all of these variables are quite different from those in the rest of the Pauline corpus, and hapax legomena are no longer widely accepted as strong indicators of authorship; those who reject Pauline authorship of the Pastorals rely on other arguments.[11] There are also subjective questions over whether two forms amount to "the same word": dog vs. dogs, clue vs. clueless, sign vs. signature; many other gray cases also arise.
The Jewish Encyclopedia points out that, although there are 1,500 hapaxes in the Hebrew Bible, only about 400 are not obviously related to other attested word forms.[12] A final difficulty with the use of hapax legomena for authorship determination is that there is considerable variation among works known to be by a single author, and disparate authors often show similar values. In other words, hapax legomena are not a reliable indicator. Authorship studies now usually use a wide range of measures to look for patterns rather than relying upon single measurements. In the fields of computational linguistics and natural language processing (NLP), especially corpus linguistics and machine-learned NLP, it is common to disregard hapax legomena (and sometimes other infrequent words), as they are likely to have little value for computational techniques. This disregard has the added benefit of significantly reducing the memory use of an application, since, by Zipf's law, many words are hapax legomena.[13] The following are some examples of hapax legomena in languages or corpora. In the Qurʾān: Classical Chinese and Japanese literature contains many Chinese characters that appear only once in the corpus, and their meaning and pronunciation have often been lost. Known in Japanese as kogo (孤語), literally "lonely characters", these can be considered a type of hapax legomenon.[15] For example, the Classic of Poetry (c. 1000 BC) uses the character 篪 exactly once, in the verse 「伯氏吹塤, 仲氏吹篪」, and it was only through the discovery of a description by Guo Pu (276–324 AD) that the character could be associated with a specific type of ancient flute. It is fairly common for authors to "coin" new words to convey a particular meaning or for the sake of entertainment, without any suggestion that they are "proper" words. For example, P. G. Wodehouse and Lewis Carroll frequently coined novel words. Indexy, below, appears to be an example of this.
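The NLP practice described above — discarding hapax legomena before building a model — can be sketched as follows. This is a minimal illustration; the `<unk>` placeholder and the `min_count` threshold are common conventions rather than features of any particular library.

```python
from collections import Counter

def prune_vocabulary(tokens, min_count=2):
    """Replace words rarer than min_count (hapax legomena by default)
    with an out-of-vocabulary marker, shrinking the vocabulary."""
    freq = Counter(tokens)
    vocab = {w for w, c in freq.items() if c >= min_count}
    pruned = ["<unk>" if w not in vocab else w for w in tokens]
    return pruned, vocab

tokens = "to be or not to be that is the question".split()
pruned, vocab = prune_vocabulary(tokens)
print(vocab)    # only 'to' and 'be' survive; everything else was a hapax
print(pruned)
```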
According to classical scholarClyde Pharr, "theIliadhas 1097hapax legomena, while theOdysseyhas 868".[19]Others have defined the term differently, however, and count as few as 303 in theIliadand 191 in theOdyssey.[20] The number of distincthapax legomenain theHebrew Bibleis 1,480 (out of a total of 8,679 distinct words used).[26]: 112However, due to Hebrewroots,suffixesandprefixes, only 400 are "true"hapax legomena.[12]A full list can be seen at theJewish Encyclopediaentry for "Hapax Legomena".[12] Some examples include:
https://en.wikipedia.org/wiki/Hapax_legomenon
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field, satisfying the axioms implied by "vector space" and "bilinear".[1] The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras, where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication, since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers, since the vector cross product is nonassociative, satisfying the Jacobi identity instead. An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra, since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space. Many authors use the term algebra to mean associative algebra, or unital associative algebra, or, in some subjects such as algebraic geometry, unital associative commutative algebra. Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
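The cross-product example above can be checked numerically. The following sketch (pure Python; the vectors are chosen arbitrarily for illustration) confirms that the product fails associativity while the Jacobi identity holds for the chosen vectors:

```python
def cross(u, v):
    """Vector cross product on R^3."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def vadd(*vs):
    """Componentwise sum of several vectors."""
    return tuple(sum(c) for c in zip(*vs))

x, y, z = (1, 0, 0), (0, 1, 0), (1, 2, 3)

# The product is not associative: (x x y) x z differs from x x (y x z).
assoc_left  = cross(cross(x, y), z)
assoc_right = cross(x, cross(y, z))
print(assoc_left, assoc_right)   # (-2, 1, 0) vs (0, 1, 0)

# ...but the Jacobi identity x x (y x z) + y x (z x x) + z x (x x y) = 0 holds.
jacobi = vadd(cross(x, cross(y, z)),
              cross(y, cross(z, x)),
              cross(z, cross(x, y)))
print(jacobi)                    # (0, 0, 0)
```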
LetKbe afield, and letAbe avector spaceoverKequipped with an additionalbinary operationfromA×AtoA, denoted here by·(that is, ifxandyare any two elements ofA, thenx·yis an element ofAthat is called theproductofxandy). ThenAis analgebraoverKif the following identities hold for all elementsx,y,zinA, and all elements (often calledscalars)aandbinK: These three axioms are another way of saying that the binary operation isbilinear. An algebra overKis sometimes also called aK-algebra, andKis called thebase fieldofA. The binary operation is often referred to asmultiplicationinA. The convention adopted in this article is that multiplication of elements of an algebra is not necessarilyassociative, although some authors use the termalgebrato refer to anassociative algebra. When a binary operation on a vector space iscommutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs. GivenK-algebrasAandB, ahomomorphismofK-algebras orK-algebra homomorphismis aK-linear mapf:A→Bsuch thatf(xy) =f(x)f(y)for allx,yinA. IfAandBare unital, then a homomorphism satisfyingf(1A) = 1Bis said to be a unital homomorphism. The space of allK-algebra homomorphisms betweenAandBis frequently written as AK-algebraisomorphismis abijectiveK-algebra homomorphism. Asubalgebraof an algebra over a fieldKis alinear subspacethat has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subsetLof aK-algebraAis asubalgebraif for everyx,yinLandcinK, we have thatx·y,x+y, andcxare all inL. 
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra. Aleft idealof aK-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subsetLof aK-algebraAis a left ideal if for everyxandyinL,zinAandcinK, we have the following three statements. If (3) were replaced withx·zis inL, then this would define aright ideal. Atwo-sided idealis a subset that is both a left and a right ideal. The termidealon its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent toLbeing a linear subspace ofA. It follows from condition (3) that every left or right ideal is a subalgebra. This definition is different from the definition of anideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2). If we have afield extensionF/K, which is to say a bigger fieldFthat containsK, then there is a natural way to construct an algebra overFfrom any algebra overK. It is the same construction one uses to make a vector space over a bigger field, namely the tensor productVF:=V⊗KF{\displaystyle V_{F}:=V\otimes _{K}F}. So ifAis an algebra overK, thenAF{\displaystyle A_{F}}is an algebra overF. Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such ascommutativityorassociativityof the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different. An algebra isunitalorunitaryif it has aunitor identity elementIwithIx=x=xIfor allxin the algebra. 
An algebra is called a zero algebra if uv = 0 for all u, v in the algebra,[2] not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative. A unital zero algebra is the direct sum K⊕V{\displaystyle K\oplus V} of a field K{\displaystyle K} and a K{\displaystyle K}-vector space V{\displaystyle V}, equipped with the unique multiplication that is zero on the vector space (or module) and that makes it a unital algebra. More precisely, every element of the algebra may be uniquely written as k+v{\displaystyle k+v} with k∈K{\displaystyle k\in K} and v∈V{\displaystyle v\in V}, and the product is the unique bilinear operation such that vw=0{\displaystyle vw=0} for every v{\displaystyle v} and w{\displaystyle w} in V{\displaystyle V}. So, if k1,k2∈K{\displaystyle k_{1},k_{2}\in K} and v1,v2∈V{\displaystyle v_{1},v_{2}\in V}, one has (k1+v1)(k2+v2)=k1k2+(k1v2+k2v1).{\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).} A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space. This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with "field" and "vector space" replaced by "commutative ring" and "module". Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module V{\displaystyle V} correspond exactly to the ideals of K⊕V{\displaystyle K\oplus V} that are contained in V{\displaystyle V}. For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module allows extending this theory as a Gröbner basis theory for submodules of a free module.
This extension allows one, when computing a Gröbner basis of a submodule, to use, without any modification, any algorithm and any software for computing Gröbner bases of ideals. Similarly, unital zero algebras allow one to deduce the Lasker–Noether theorem for modules (over a commutative ring) straightforwardly from the original Lasker–Noether theorem for ideals. Examples of associative algebras include A non-associative algebra[3] (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map A×A→A{\displaystyle A\times A\rightarrow A}. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative". Examples detailed in the main article include: The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism η: K → Z(A), where Z(A) is the center of A. Since η is a ring homomorphism, one must have either that A is the zero ring, or that η is injective. This definition is equivalent to that above, with scalar multiplication given by ka = η(k)a. Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as f(ka) = kf(a) for all k∈K{\displaystyle k\in K} and a∈A{\displaystyle a\in A}. In other words, the following diagram commutes: For algebras over a field, the bilinear multiplication from A×A to A is completely determined by the multiplication of basis elements of A. Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so that the resulting multiplication satisfies the algebra laws.
Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n) and specifying n3 structure coefficients ci,j,k, which are scalars. These structure coefficients determine the multiplication in A via the rule ei · ej = ∑k ci,j,k ek, where e1, ..., en form a basis of A. Note, however, that several different sets of structure coefficients can give rise to isomorphic algebras. In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as ei · ej = ci,jk ek. Applied to vectors written in index notation, this becomes (xy)k = ci,jk xi yj. If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism. Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.[4] There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element, 1 · 1 = 1 and 1 · a = a = a · 1. It remains to specify the product a · a. There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b.
Taking into account the definition of an identity element, it is sufficient to specify the products a · a, a · b, b · a and b · b. The fourth of these algebras is non-commutative, and the others are commutative. In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space). A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to H×H{\displaystyle \mathbb {H} \times \mathbb {H} }, the direct product of two quaternion algebras. The center of that ring is R×R{\displaystyle \mathbb {R} \times \mathbb {R} }, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional R{\displaystyle \mathbb {R} }-algebra. In commutative algebra, if A is a commutative ring, then any unital ring homomorphism R→A{\displaystyle R\to A} defines an R-module structure on A, and this is what is known as the R-algebra structure.[5] So a ring comes with a natural Z{\displaystyle \mathbb {Z} }-module structure, since one can take the unique homomorphism Z→A{\displaystyle \mathbb {Z} \to A}.[6] On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give to every ring a structure that behaves like an algebra over a field.
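The structure-constant description above can be made concrete. The sketch below (an illustration, not part of any library) builds the multiplication of a finite-dimensional algebra from its n3 structure coefficients, using the dual numbers — the unital zero algebra over a one-dimensional real vector space mentioned earlier — as the worked example, with basis e0 = 1 and e1 = ε, where ε2 = 0.

```python
def make_multiply(c):
    """Given structure coefficients c[i][j][k], meaning
    e_i * e_j = sum_k c[i][j][k] * e_k, return the induced
    bilinear multiplication on coordinate vectors."""
    n = len(c)
    def mul(x, y):
        return [sum(x[i] * y[j] * c[i][j][k]
                    for i in range(n) for j in range(n))
                for k in range(n)]
    return mul

# Dual numbers: basis e0 = 1, e1 = eps, with eps * eps = 0.
c = [[[1, 0], [0, 1]],    # e0*e0 = e0,  e0*e1 = e1
     [[0, 1], [0, 0]]]    # e1*e0 = e1,  e1*e1 = 0
mul = make_multiply(c)

# (2 + 3 eps)(5 + 7 eps) = 10 + (2*7 + 3*5) eps = 10 + 29 eps
print(mul([2, 3], [5, 7]))   # [10, 29]
```

Changing only the table `c` yields any other two-dimensional algebra — for instance `c[1][1] = [1, 0]` would instead impose a · a = 1.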
https://en.wikipedia.org/wiki/Algebra_over_a_commutative_ring
In mathematics, a categorical ring is, roughly, a category equipped with addition and multiplication. In other words, a categorical ring is obtained by replacing the underlying set of a ring by a category. For example, given a ring R, let C be a category whose objects are the elements of the set R and whose morphisms are only the identity morphisms. Then C is a categorical ring. But the point is that one can also consider the situation in which an element of R comes with a "nontrivial automorphism".[1] This line of generalization of a ring eventually leads to the notion of an En-ring.
https://en.wikipedia.org/wiki/Categorical_ring
In mathematics, the category of rings, denoted by Ring, is the category whose objects are rings (with identity) and whose morphisms are ring homomorphisms (that preserve the identity). Like many categories in mathematics, the category of rings is large, meaning that the class of all rings is proper. The category Ring is a concrete category, meaning that the objects are sets with additional structure (addition and multiplication) and the morphisms are functions that preserve this structure. There is a natural forgetful functor from the category of rings to the category of sets which sends each ring to its underlying set (thus "forgetting" the operations of addition and multiplication). This functor has a left adjoint which assigns to each set X the free ring generated by X. One can also view the category of rings as a concrete category over Ab (the category of abelian groups) or over Mon (the category of monoids). Specifically, there are forgetful functors which "forget" multiplication and addition, respectively. Both of these functors have left adjoints. The left adjoint of A is the functor which assigns to every abelian group X (thought of as a Z-module) the tensor ring T(X). The left adjoint of M is the functor which assigns to every monoid X the integral monoid ring Z[X]. The category Ring is both complete and cocomplete, meaning that all small limits and colimits exist in Ring. Like many other algebraic categories, the forgetful functor U: Ring → Set creates (and preserves) limits and filtered colimits, but does not preserve either coproducts or coequalizers. The forgetful functors to Ab and Mon also create and preserve limits. Examples of limits and colimits in Ring include: Unlike many categories studied in mathematics, there do not always exist morphisms between pairs of objects in Ring. This is a consequence of the fact that ring homomorphisms must preserve the identity.
For example, there are no morphisms from thezero ring0to any nonzero ring. A necessary condition for there to be morphisms fromRtoSis that thecharacteristicofSdivide that ofR. Note that even though some of the hom-sets are empty, the categoryRingis stillconnectedsince it has an initial object. Some special classes of morphisms inRinginclude: The category of rings has a number of importantsubcategories. These include thefull subcategoriesofcommutative rings,integral domains,principal ideal domains, andfields. Thecategory of commutative rings, denotedCRing, is the full subcategory ofRingwhose objects are allcommutative rings. This category is one of the central objects of study in the subject ofcommutative algebra. Any ring can be made commutative by taking thequotientby theidealgenerated by all elements of the form (xy−yx). This defines a functorRing→CRingwhich is left adjoint to the inclusion functor, so thatCRingis areflective subcategoryofRing. Thefree commutative ringon a set of generatorsEis thepolynomial ringZ[E] whose variables are taken fromE. This gives a left adjoint functor to the forgetful functor fromCRingtoSet. CRingis limit-closed inRing, which means that limits inCRingare the same as they are inRing. Colimits, however, are generally different. They can be formed by taking the commutative quotient of colimits inRing. The coproduct of two commutative rings is given by thetensor product of rings. Again, the coproduct of two nonzero commutative rings can be zero. Theopposite categoryofCRingisequivalentto thecategory of affine schemes. The equivalence is given by thecontravariant functorSpec which sends a commutative ring to itsspectrum, an affinescheme. Thecategory of fields, denotedField, is the full subcategory ofCRingwhose objects arefields. The category of fields is not nearly as well-behaved as other algebraic categories. In particular, free fields do not exist (i.e. there is no left adjoint to the forgetful functorField→Set). 
It follows thatFieldisnota reflective subcategory ofCRing. The category of fields is neitherfinitely completenor finitely cocomplete. In particular,Fieldhas neither products nor coproducts. Another curious aspect of the category of fields is that every morphism is amonomorphism. This follows from the fact that the only ideals in a fieldFare thezero idealandFitself. One can then view morphisms inFieldasfield extensions. The category of fields is notconnected. There are no morphisms between fields of differentcharacteristic. The connected components ofFieldare the full subcategories of characteristicp, wherep= 0 or is aprime number. Each such subcategory has aninitial object: theprime fieldof characteristicp(which isQifp= 0, otherwise thefinite fieldFp). There is a natural functor fromRingto thecategory of groups,Grp, which sends each ringRto itsgroup of unitsU(R) and each ring homomorphism to the restriction toU(R). This functor has aleft adjointwhich sends eachgroupGto theintegral group ringZ[G]. Another functor between these categories sends each ringRto the group of units of thematrix ringM2(R) which acts on theprojective line over a ringP(R). Given a commutative ringRone can define the categoryR-Algwhose objects are allR-algebrasand whose morphisms areR-algebra homomorphisms. The category of rings can be considered a special case. Every ring can be considered aZ-algebra in a unique way. Ring homomorphisms are precisely theZ-algebra homomorphisms. The category of rings is, therefore,isomorphicto the categoryZ-Alg.[1]Many statements about the category of rings can be generalized to statements about the category ofR-algebras. For each commutative ringRthere is a functorR-Alg→Ringwhich forgets theR-module structure. This functor has a left adjoint which sends each ringAto thetensor productR⊗ZA, thought of as anR-algebra by settingr·(s⊗a) =rs⊗a. 
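The group-of-units functor U described above can be illustrated on finite rings. In the sketch below (the function names are ad hoc), U(Z/n) is computed as the residues coprime to n, and a ring homomorphism Z/8 → Z/4 — reduction mod 4, which exists because 4 divides 8 — is restricted to the unit groups, which is exactly the action of U on that morphism.

```python
from math import gcd

def units(n):
    """Group of units U(Z/n): the residues coprime to n."""
    return [k for k in range(1, n) if gcd(k, n) == 1]

def units_map(m, n):
    """Restrict the reduction-mod-n ring homomorphism Z/m -> Z/n
    (well-defined when n divides m) to the unit groups."""
    assert m % n == 0, "no unital ring hom Z/m -> Z/n unless n | m"
    return {u: u % n for u in units(m)}

print(units(8))          # [1, 3, 5, 7]
print(units_map(8, 4))   # units of Z/8 land on units of Z/4
```

Every image really is a unit of Z/4, reflecting functoriality: a ring homomorphism sends units to units.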
Many authors do not require rings to have a multiplicative identity element and, accordingly, do not require ring homomorphisms to preserve the identity (should it exist). This leads to a rather different category. For distinction we call such algebraic structures rngs and their morphisms rng homomorphisms. The category of all rngs will be denoted by Rng. The category of rings, Ring, is a nonfull subcategory of Rng. It is nonfull because there are rng homomorphisms between rings which do not preserve the identity and are therefore not morphisms in Ring. The inclusion functor Ring → Rng has a left adjoint which formally adjoins an identity to any rng. The inclusion functor Ring → Rng respects limits but not colimits. The zero ring serves as both an initial and terminal object in Rng (that is, it is a zero object). It follows that Rng, like Grp but unlike Ring, has zero morphisms. These are just the rng homomorphisms that map everything to 0. Despite the existence of zero morphisms, Rng is still not a preadditive category. The pointwise sum of two rng homomorphisms is generally not a rng homomorphism. There is a fully faithful functor from the category of abelian groups to Rng sending an abelian group to the associated rng of square zero. Free constructions are less natural in Rng than they are in Ring. For example, the free rng generated by a set {x} is the ring of all integral polynomials over x with no constant term, while the free ring generated by {x} is just the polynomial ring Z[x].
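The left adjoint that "formally adjoins an identity" admits a concrete sketch. Assuming the classical construction on Z × R with multiplication (m, r)(n, s) = (mn, ms + nr + rs) (often called the Dorroh extension), the following illustration adjoins an identity to the rng 2Z of even integers, which has no identity of its own:

```python
class Unitalization:
    """Z x R with (m, r)(n, s) = (mn, m*s + n*r + rs): formally
    adjoins the identity (1, 0) to a rng R.  Here R is modeled
    as 2Z, the rng of even integers."""
    def __init__(self, m, r):
        assert r % 2 == 0          # r must lie in the rng 2Z
        self.m, self.r = m, r
    def __mul__(self, other):
        return Unitalization(
            self.m * other.m,
            self.m * other.r + other.m * self.r + self.r * other.r)
    def __eq__(self, other):
        return (self.m, self.r) == (other.m, other.r)

one = Unitalization(1, 0)          # the adjoined identity
x = Unitalization(0, 6)            # the rng element 6 in 2Z
assert one * x == x and x * one == x
```

Inside the copy of 2Z (pairs with first coordinate 0), the new multiplication restricts to the old one, as the test below checks.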
https://en.wikipedia.org/wiki/Category_of_rings
Ring theory is the branch of mathematics in which rings are studied: that is, structures supporting both an addition and a multiplication operation. This is a glossary of some terms of the subject. For items in commutative algebra (the theory of commutative rings), see Glossary of commutative algebra. For ring-theoretic concepts in the language of modules, see also Glossary of module theory. For specific types of algebras, see also: Glossary of field theory and Glossary of Lie groups and Lie algebras. Since there is currently no glossary of not-necessarily-associative algebraic structures in general, this glossary includes some concepts that do not need associativity; e.g., a derivation.
https://en.wikipedia.org/wiki/Glossary_of_ring_theory
In mathematics, there are two different notions of a ring of sets, both referring to certain families of sets. In order theory, a nonempty family of sets R{\displaystyle {\mathcal {R}}} is called a ring (of sets) if it is closed under union and intersection.[1] That is, the following two statements are true for all sets A{\displaystyle A} and B{\displaystyle B}, In measure theory, a nonempty family of sets R{\displaystyle {\mathcal {R}}} is called a ring (of sets) if it is closed under union and relative complement (set-theoretic difference).[2] That is, the following two statements are true for all sets A{\displaystyle A} and B{\displaystyle B}, This implies that a ring in the measure-theoretic sense always contains the empty set. Furthermore, for all sets A and B, A ∩ B = A ∖ (A ∖ B), which shows that a family of sets closed under relative complement is also closed under intersection, so that a ring in the measure-theoretic sense is also a ring in the order-theoretic sense. If X is any set, then the power set of X (the family of all subsets of X) forms a ring of sets in either sense. If (X, ≤) is a partially ordered set, then its upper sets (the subsets of X with the additional property that if x belongs to an upper set U and x ≤ y, then y must also belong to U) are closed under both intersections and unions. However, in general it will not be closed under differences of sets. The open sets and closed sets of any topological space are closed under both unions and intersections.[1] On the real line R, the family of sets consisting of the empty set and all finite unions of half-open intervals of the form (a, b], with a, b ∈ R, is a ring in the measure-theoretic sense.
IfTis any transformation defined on a space, then the sets that are mapped into themselves byTare closed under both unions and intersections.[1] If two rings of sets are both defined on the same elements, then the sets that belong to both rings themselves form a ring of sets.[1] A ring of sets in the order-theoretic sense forms adistributive latticein which the intersection and union operations correspond to the lattice'smeet and joinoperations, respectively. Conversely, every distributive lattice is isomorphic to a ring of sets; in the case offinitedistributive lattices, this isBirkhoff's representation theoremand the sets may be taken as the lower sets of a partially ordered set.[1] A family of sets closed under union and relative complement is also closed undersymmetric differenceand intersection. Conversely, every family of sets closed under both symmetric difference and intersection is also closed under union and relative complement. This is due to the identities Symmetric difference and intersection together give a ring in the measure-theoretic sense the structure of aboolean ring. In the measure-theoretic sense, aσ-ringis a ring closed undercountableunions, and aδ-ringis a ring closed under countable intersections. Explicitly, a σ-ring overX{\displaystyle X}is a setF{\displaystyle {\mathcal {F}}}such that for any sequence{Ak}k=1∞⊆F,{\displaystyle \{A_{k}\}_{k=1}^{\infty }\subseteq {\mathcal {F}},}we have⋃k=1∞Ak∈F.{\textstyle \bigcup _{k=1}^{\infty }A_{k}\in {\mathcal {F}}.} Given a setX,{\displaystyle X,}afield of sets− also called analgebra overX{\displaystyle X}− is a ring that containsX.{\displaystyle X.}This definition entails that an algebra is closed under absolute complementAc=X∖A.{\displaystyle A^{c}=X\setminus A.}Aσ-algebrais an algebra that is also closed under countable unions, or equivalently a σ-ring that containsX.{\displaystyle X.}In fact, byde Morgan's laws, a δ-ring that containsX{\displaystyle X}is necessarily a σ-algebra as well. 
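The closure properties above are easy to verify exhaustively on a small example. The following sketch checks that the power set of {1, 2, 3} is a ring of sets in the measure-theoretic sense and carries the Boolean-ring structure just described, with symmetric difference as addition and intersection as multiplication:

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})
# The power set of X: all 8 subsets, as frozensets.
family = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

# Measure-theoretic ring: closed under union and relative complement.
assert all(a | b in family and a - b in family
           for a in family for b in family)

# Boolean-ring structure: every element is its own additive inverse
# (A delta A is empty) and multiplicatively idempotent (A & A == A).
assert all((a ^ a) == frozenset() and (a & a) == a for a in family)

# Multiplication distributes over addition:
# (A delta B) & C == (A & C) delta (B & C).
assert all(((a ^ b) & c) == ((a & c) ^ (b & c))
           for a in family for b in family for c in family)
print("all ring-of-sets and boolean-ring checks passed")
```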
Fields of sets, and especially σ-algebras, are central to the modern theory ofprobabilityand the definition ofmeasures. Asemiring (of sets)is a family of setsS{\displaystyle {\mathcal {S}}}with the properties Every ring (in the measure theory sense) is a semi-ring. On the other hand,S:={∅,{x},{y}}{\displaystyle {\mathcal {S}}:=\{\emptyset ,\{x\},\{y\}\}}onX={x,y}{\displaystyle X=\{x,y\}}is a semi-ring but not a ring, since it is not closed under unions. Asemialgebra[3]orelementary family[4]is a collectionS{\displaystyle {\mathcal {S}}}of subsets ofX{\displaystyle X}satisfying the semiring properties except with (3) replaced with: This condition is stronger than (3), which can be seen as follows. IfS{\displaystyle {\mathcal {S}}}is a semialgebra andE,F∈S{\displaystyle E,F\in {\mathcal {S}}}, then we can writeFc=F1∪…∪Fn{\displaystyle F^{c}=F_{1}\cup \ldots \cup F_{n}}for disjointFi∈S{\displaystyle F_{i}\in S}. Then:E∖F=E∩Fc=E∩(F1∪…∪Fn)=(E∩F1)∪…∪(E∩Fn){\displaystyle E\setminus F=E\cap F^{c}=E\cap (F_{1}\cup \ldots \cup F_{n})=(E\cap F_{1})\cup \ldots \cup (E\cap F_{n})} and everyE∩Fi∈S{\displaystyle E\cap F_{i}\in S}since it is closed under intersection, and disjoint since they are contained in the disjointFi{\displaystyle F_{i}}'s. Moreover the condition isstrictlystronger: anyS{\displaystyle S}that is both a ring and a semialgebra is an algebra, hence any ring that is not an algebra is also not a semialgebra (e.g. the collection of finite sets on an infinite setX{\displaystyle X}). 
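The example S = {∅, {x}, {y}} above can be checked directly; since the base set has only two elements, the verification is exhaustive:

```python
e, x, y = frozenset(), frozenset({'x'}), frozenset({'y'})
S = {e, x, y}

# Closed under intersection, and every relative complement A \ B
# already lies in S (so it is trivially a finite disjoint union
# of members of S):
assert all(a & b in S for a in S for b in S)
assert all(a - b in S for a in S for b in S)

# But S is not closed under union, so it is a semiring of sets
# and not a ring of sets:
assert x | y not in S
print("S is a semiring but not a ring of sets")
```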
Additionally, a semiring is a π-system in which every relative complement B∖A{\displaystyle B\setminus A} is equal to a finite disjoint union of sets in the family, and a semialgebra is a semiring in which every complement Ω∖A{\displaystyle \Omega \setminus A} is equal to a finite disjoint union of sets in the family.
https://en.wikipedia.org/wiki/Ring_of_sets
In abstract algebra, a semiring is an algebraic structure. Semirings are a generalization of rings, dropping the requirement that each element must have an additive inverse. At the same time, semirings are a generalization of bounded distributive lattices. The smallest semiring that is not a ring is the two-element Boolean algebra, for instance with logical disjunction ∨{\displaystyle \lor } as addition. A motivating example that is neither a ring nor a lattice is the set of natural numbers N{\displaystyle \mathbb {N} } (including zero) under ordinary addition and multiplication. Semirings are abundant because a suitable multiplication operation arises as the function composition of endomorphisms over any commutative monoid. Some authors define semirings without the requirement for there to be a 0{\displaystyle 0} or 1{\displaystyle 1}. This makes the analogy between ring and semiring on the one hand and group and semigroup on the other hand work more smoothly. These authors often use rig for the concept defined here.[1][a] This originated as a joke, suggesting that rigs are rings without negative elements. (Akin to using rng to mean a ring without a multiplicative identity.) The term dioid (for "double monoid") has been used to mean semirings or other structures. It was used by Kuntzmann in 1972 to denote a semiring.[2] (It is alternatively sometimes used for naturally ordered semirings[3] but the term was also used for idempotent subgroups by Baccelli et al.
in 1992.[4]) Asemiringis asetR{\displaystyle R}equipped with twobinary operations+{\displaystyle +}and⋅,{\displaystyle \cdot ,}called addition and multiplication, such that:[5][6][7] Further, the following axioms tie to both operations: The symbol⋅{\displaystyle \cdot }is usually omitted from the notation; that is,a⋅b{\displaystyle a\cdot b}is just writtenab.{\displaystyle ab.} Similarly, anorder of operationsis conventional, in which⋅{\displaystyle \cdot }is applied before+{\displaystyle +}. That is,a+b⋅c{\displaystyle a+b\cdot c}denotesa+(b⋅c){\displaystyle a+(b\cdot c)}. For the purpose of disambiguation, one may write0R{\displaystyle 0_{R}}or1R{\displaystyle 1_{R}}to emphasize which structure the units at hand belong to. Ifx∈R{\displaystyle x\in R}is an element of a semiring andn∈N{\displaystyle n\in {\mathbb {N} }}, thenn{\displaystyle n}-times repeated multiplication ofx{\displaystyle x}with itself is denotedxn{\displaystyle x^{n}}, and one similarly writesxn:=x+x+⋯+x{\displaystyle x\,n:=x+x+\cdots +x}for then{\displaystyle n}-times repeated addition. Thezero ringwith underlying set{0}{\displaystyle \{0\}}is a semiring called the trivial semiring. This triviality can be characterized via0=1{\displaystyle 0=1}and so when speaking of nontrivial semirings,0≠1{\displaystyle 0\neq 1}is often silently assumed as if it were an additional axiom. Now given any semiring, there are several ways to define new ones. As noted, the natural numbersN{\displaystyle {\mathbb {N} }}with its arithmetic structure form a semiring. Taking the zero and the image of the successor operation in a semiringR{\displaystyle R}, i.e., the set{x∈R∣x=0R∨∃p.x=p+1R}{\displaystyle \{x\in R\mid x=0_{R}\lor \exists p.x=p+1_{R}\}}together with the inherited operations, is always a sub-semiring ofR{\displaystyle R}. 
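The two-element Boolean semiring mentioned earlier can be verified exhaustively. The sketch below checks the semiring axioms for ({False, True}, ∨, ∧) and confirms the failure of additive inverses that prevents it from being a ring:

```python
B = (False, True)
add = lambda a, b: a or b      # logical disjunction as addition
mul = lambda a, b: a and b     # logical conjunction as multiplication

# Commutativity, associativity, distributivity, neutral elements,
# and absorption by zero -- checked over all 2- and 3-tuples:
assert all(add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
           for a in B for b in B)
assert all(add(add(a, b), c) == add(a, add(b, c)) and
           mul(mul(a, b), c) == mul(a, mul(b, c)) and
           mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in B for b in B for c in B)
assert all(add(False, a) == a and mul(True, a) == a and
           mul(False, a) == False for a in B)

# Not a ring: True (the 1) has no additive inverse, since
# a or b == False forces a == b == False.
assert not any(add(True, a) == False for a in B)
print("semiring axioms hold; additive inverses fail")
```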
If(M,+){\displaystyle (M,+)}is a commutative monoid, function composition provides the multiplication to form a semiring: The setEnd⁡(M){\displaystyle \operatorname {End} (M)}of endomorphismsM→M{\displaystyle M\to M}forms a semiring where addition is defined from pointwise addition inM{\displaystyle M}. Thezero morphismand the identity are the respective neutral elements. IfM=Rn{\displaystyle M=R^{n}}withR{\displaystyle R}a semiring, we obtain a semiring that can be associated with the squaren×n{\displaystyle n\times n}matricesMn(R){\displaystyle {\mathcal {M}}_{n}(R)}with coefficients inR{\displaystyle R}, thematrix semiringusing ordinaryadditionandmultiplicationrules of matrices. Givenn∈N{\displaystyle n\in {\mathbb {N} }}andR{\displaystyle R}a semiring,Mn(R){\displaystyle {\mathcal {M}}_{n}(R)}is always a semiring also. It is generally non-commutative even ifR{\displaystyle R}was commutative. Dorroh extensions: IfR{\displaystyle R}is a semiring, thenR×N{\displaystyle R\times {\mathbb {N} }}with pointwise addition and multiplication given by⟨x,n⟩∙⟨y,m⟩:=⟨x⋅y+(xm+yn),n⋅m⟩{\displaystyle \langle x,n\rangle \bullet \langle y,m\rangle :=\langle x\cdot y+(x\,m+y\,n),n\cdot m\rangle }defines another semiring with multiplicative unit1R×N:=⟨0R,1N⟩{\displaystyle 1_{R\times {\mathbb {N} }}:=\langle 0_{R},1_{\mathbb {N} }\rangle }. Very similarly, ifN{\displaystyle N}is any sub-semiring ofR{\displaystyle R}, one may also define a semiring onR×N{\displaystyle R\times N}, just by replacing the repeated addition in the formula by multiplication. Indeed, these constructions even work under looser conditions, as the structureR{\displaystyle R}is not actually required to have a multiplicative unit. Zerosumfreesemirings are in a sense furthest away from being rings. Given a semiring, one may adjoin a new zero0′{\displaystyle 0'}to the underlying set and thus obtain such a zerosumfree semiring that also lackszero divisors. 
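Because the matrix construction above works over an arbitrary semiring, a short sketch (hypothetical helper names, not from the article) can take the ambient operations as parameters:

```python
def mat_mul(A, B, add, mul, zero):
    """Product in the matrix semiring M_n(R), with entries combined by the
    ambient semiring's addition and multiplication (an illustrative sketch)."""
    n = len(A)
    C = [[zero] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = zero
            for k in range(n):
                acc = add(acc, mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# Over the Boolean semiring (OR as +, AND as *), the matrix product composes
# relations: entry (i, j) records whether some path i -> k -> j exists.
R = [[True, False],
     [True, True]]
R2 = mat_mul(R, R, lambda a, b: a or b, lambda a, b: a and b, False)
```

Even over a commutative base semiring such as the integers, `mat_mul` is generally non-commutative, matching the remark in the text.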
In particular, now0⋅0′=0′{\displaystyle 0\cdot 0'=0'}and the old semiring is actually not a sub-semiring. One may then go on and adjoin new elements "on top" one at a time, while always respecting the zero. These two strategies also work under looser conditions. Sometimes the notations−∞{\displaystyle -\infty }resp.+∞{\displaystyle +\infty }are used when performing these constructions. Adjoining a new zero to the trivial semiring, in this way, results in another semiring which may be expressed in terms of thelogical connectivesof disjunction and conjunction:⟨{0,1},+,⋅,⟨0,1⟩⟩=⟨{⊥,⊤},∨,∧,⟨⊥,⊤⟩⟩{\displaystyle \langle \{0,1\},+,\cdot ,\langle 0,1\rangle \rangle =\langle \{\bot ,\top \},\lor ,\land ,\langle \bot ,\top \rangle \rangle }. Consequently, this is the smallest semiring that is not a ring. Explicitly, it violates the ring axioms as⊤∨P=⊤{\displaystyle \top \lor P=\top }for allP{\displaystyle P}, i.e.1{\displaystyle 1}has no additive inverse. In theself-dualdefinition, the fault is with⊥∧P=⊥{\displaystyle \bot \land P=\bot }. (This is not to be conflated with the ringZ2{\displaystyle \mathbb {Z} _{2}}, whose addition functions asxor⊻{\displaystyle \veebar }.) In thevon Neumann model of the naturals,0ω:={}{\displaystyle 0_{\omega }:=\{\}},1ω:={0ω}{\displaystyle 1_{\omega }:=\{0_{\omega }\}}and2ω:={0ω,1ω}=P1ω{\displaystyle 2_{\omega }:=\{0_{\omega },1_{\omega }\}={\mathcal {P}}1_{\omega }}. The two-element semiring may be presented in terms of the set theoretic union and intersection as⟨P1ω,∪,∩,⟨{},1ω⟩⟩{\displaystyle \langle {\mathcal {P}}1_{\omega },\cup ,\cap ,\langle \{\},1_{\omega }\rangle \rangle }. Now this structure in fact still constitutes a semiring when1ω{\displaystyle 1_{\omega }}is replaced by any inhabited set whatsoever. Theidealson a semiringR{\displaystyle R}, with their standard operations on subset, form a lattice-ordered, simple and zerosumfree semiring. 
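The claim above that 1 = ⊤ has no additive inverse in the two-element semiring, while the ring ℤ₂ with xor does have inverses, can be checked directly (a small illustration, not from the article):

```python
# In the two-element semiring, addition is disjunction: is there an x with
# T or x == F, i.e. an additive inverse of top?
elems = [False, True]
or_inverses = [x for x in elems if (True or x) == False]

# Contrast with the ring Z_2, where addition is xor and T xor T == F.
xor_inverses = [x for x in elems if (True != x) == False]
```

The first search comes back empty, which is exactly why this structure is a semiring but not a ring.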
The ideals of M_n(R) are in bijection with the ideals of R. The collection of left ideals of R (and likewise the right ideals) also has much of that algebraic structure, except that R then does not function as a two-sided multiplicative identity.

If R is a semiring and A is an inhabited set, A* denotes the free monoid, and the formal polynomials R[A*] over its words form another semiring. For small sets, the generating elements are conventionally used to denote the polynomial semiring. For example, in the case of a singleton A = {X}, so that A* = {ε, X, X², X³, …}, one writes R[X]. Zerosumfree sub-semirings of R can be used to determine sub-semirings of R[A*].

Given a set A, not necessarily just a singleton, and a semiring R, one may adjoin a default element to the set underlying R and define the semiring of partial functions from A to R.

Given a derivation d on a semiring R, a new operation "∙" fulfilling X∙y = y∙X + d(y) can be defined as part of a new multiplication on R[X], resulting in another semiring.

The above is by no means an exhaustive list of systematic constructions.

Derivations on a semiring R are the maps d: R → R with d(x + y) = d(x) + d(y) and d(x⋅y) = d(x)⋅y + x⋅d(y).
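These two derivation laws can be checked by brute force on the 2×2 matrix example treated next in the text, writing a·E + b·U as the pair (a, b) (an illustrative sketch; the names are made up):

```python
from itertools import product

# Since U*U = 0, we have (aE + bU)(cE + dU) = (ac)E + (ad + bc)U.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c, a * d + b * c)

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def d(x):
    """The candidate derivation aE + bU |-> bU."""
    return (0, x[1])

# Verify additivity and the Leibniz rule on a grid of integer pairs.
sample = [(a, b) for a in range(3) for b in range(3)]
leibniz = all(d(mul(x, y)) == add(mul(d(x), y), mul(x, d(y)))
              for x, y in product(sample, repeat=2))
additive = all(d(add(x, y)) == add(d(x), d(y))
               for x, y in product(sample, repeat=2))
```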
For example, ifE{\displaystyle E}is the2×2{\displaystyle 2\times 2}unit matrix andU=(0100){\displaystyle U={\bigl (}{\begin{smallmatrix}0&1\\0&0\end{smallmatrix}}{\bigr )}}, then the subset ofM2(R){\displaystyle {\mathcal {M}}_{2}(R)}given by the matricesaE+bU{\displaystyle a\,E+b\,U}witha,b∈R{\displaystyle a,b\in R}is a semiring with derivationaE+bU↦bU{\displaystyle a\,E+b\,U\mapsto b\,U}. A basic property of semirings is that1{\displaystyle 1}is not a left or rightzero divisor, and that1{\displaystyle 1}but also0{\displaystyle 0}squares to itself, i.e. these haveu2=u{\displaystyle u^{2}=u}. Some notable properties are inherited from the monoid structures: The monoid axioms demand unit existence, and so the set underlying a semiring cannot be empty. Also, the2-arypredicatex≤prey{\displaystyle x\leq _{\text{pre}}y}defined as∃d.x+d=y{\displaystyle \exists d.x+d=y}, here defined for the addition operation, always constitutes the rightcanonicalpreorderrelation.Reflexivityy≤prey{\displaystyle y\leq _{\text{pre}}y}is witnessed by the identity. Further,0≤prey{\displaystyle 0\leq _{\text{pre}}y}is always valid, and so zero is theleast elementwith respect to this preorder. Considering it for the commutative addition in particular, the distinction of "right" may be disregarded. In the non-negative integersN{\displaystyle \mathbb {N} }, for example, this relation isanti-symmetricandstrongly connected, and thus in fact a (non-strict)total order. Below, more conditional properties are discussed. Anyfieldis also asemifield, which in turn is a semiring in which also multiplicative inverses exist. Any field is also aring, which in turn is a semiring in which also additive inverses exist. Note that a semiring omits such a requirement, i.e., it requires only acommutative monoid, not acommutative group. The extra requirement for a ring itself already implies the existence of a multiplicative zero. 
This contrast is also why, for the theory of semirings, the multiplicative zero must be specified explicitly. In a ring, −1, the additive inverse of 1, squares to 1. As additive differences d = y − x always exist in a ring, x ≤pre y is a trivial binary relation in a ring.

A semiring is called a commutative semiring if the multiplication is also commutative.[8] Its axioms can be stated concisely: it consists of two commutative monoids ⟨+, 0⟩ and ⟨⋅, 1⟩ on one set such that a⋅0 = 0 and a⋅(b + c) = a⋅b + a⋅c. The center of a semiring is a sub-semiring, and being commutative is equivalent to being one's own center. The commutative semiring of natural numbers is the initial object among its kind, meaning there is a unique structure-preserving map of ℕ into any commutative semiring. The bounded distributive lattices are partially ordered, commutative semirings fulfilling certain algebraic equations relating to distributivity and idempotence. Thus so are their duals.

Notions of order can be defined using strict, non-strict or second-order formulations. Additional properties such as commutativity simplify the axioms. Given a strict total order (also sometimes called a linear order, or a pseudo-order in a constructive formulation), then, by definition, the positive and negative elements fulfill 0 < x resp. x < 0. By irreflexivity of a strict order, if s is a left zero divisor, then s⋅x < s⋅y is false. The non-negative elements are characterized by ¬(x < 0), which is then written 0 ≤ x. Generally, the strict total order can be negated to define an associated partial order.
Theasymmetryof the former manifests asx<y→x≤y{\displaystyle x<y\to x\leq y}. In fact inclassical mathematicsthe latter is a (non-strict) total order and such that0≤x{\displaystyle 0\leq x}impliesx=0∨0<x{\displaystyle x=0\lor 0<x}. Likewise, given any (non-strict) total order, its negation isirreflexiveandtransitive, and those two properties found together are sometimes called strict quasi-order. Classically this defines a strict total order – indeed strict total order and total order can there be defined in terms of one another. Recall that "≤pre{\displaystyle \leq _{\text{pre}}}" defined above is trivial in any ring. The existence of rings that admit a non-trivial non-strict order shows that these need not necessarily coincide with "≤pre{\displaystyle \leq _{\text{pre}}}". A semiring in which every element is an additiveidempotent, that is,x+x=x{\displaystyle x+x=x}for all elementsx{\displaystyle x}, is called an(additively) idempotent semiring.[9]Establishing1+1=1{\displaystyle 1+1=1}suffices. Be aware that sometimes this is just called idempotent semiring, regardless of rules for multiplication. In such a semiring,x≤prey{\displaystyle x\leq _{\text{pre}}y}is equivalent tox+y=y{\displaystyle x+y=y}and always constitutes a partial order, here now denotedx≤y{\displaystyle x\leq y}. In particular, herex≤0↔x=0{\displaystyle x\leq 0\leftrightarrow x=0}. So additively idempotent semirings are zerosumfree and, indeed, the only additively idempotent semiring that has all additive inverses is the trivial ring and so this property is specific to semiring theory. Addition and multiplication respect the ordering in the sense thatx≤y{\displaystyle x\leq y}impliesx+t≤y+t{\displaystyle x+t\leq y+t}, and furthermore impliess⋅x≤s⋅y{\displaystyle s\cdot x\leq s\cdot y}as well asx⋅s≤y⋅s{\displaystyle x\cdot s\leq y\cdot s}, for allx,y,t{\displaystyle x,y,t}ands{\displaystyle s}. 
If R is additively idempotent, then so are the polynomials in R[X*].

A semiring whose underlying set carries a lattice structure is lattice-ordered if the sum coincides with the join, x + y = x ∨ y, and the product lies beneath the meet, x⋅y ≤ x ∧ y. The lattice-ordered semiring of ideals on a semiring is not necessarily distributive with respect to the lattice structure.

More strictly than just additive idempotence, a semiring is called simple iff x + 1 = 1 for all x. Then also 1 + 1 = 1 and x ≤ 1 for all x, so 1 functions akin to an additively infinite element. If R is an additively idempotent semiring, then {x ∈ R | x + 1 = 1} with the inherited operations is its simple sub-semiring. An example of an additively idempotent semiring that is not simple is the tropical semiring on ℝ ∪ {−∞}, with the 2-ary maximum function (with respect to the standard order) as addition. Its simple sub-semiring is trivial.

A c-semiring is an idempotent semiring with addition defined over arbitrary sets.

An additively idempotent semiring with idempotent multiplication, x² = x, is called an additively and multiplicatively idempotent semiring, but sometimes also just an idempotent semiring. The commutative, simple semirings with that property are exactly the bounded distributive lattices with unique minimal and maximal elements (which then are the units). Heyting algebras are such semirings, and the Boolean algebras are a special case. Further, given two bounded distributive lattices, there are constructions resulting in commutative additively-idempotent semirings which are more complicated than just the direct sum of structures.
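The tropical (max, +) semiring just mentioned also illustrates the canonical order on an additively idempotent semiring, where x ≤ y is defined by x + y = y. A minimal sketch (illustrative names, not from the article):

```python
NEG_INF = float("-inf")   # the additive zero of the (max, +) semiring

def t_add(x, y):
    """Tropical addition: the maximum."""
    return max(x, y)

def t_mul(x, y):
    """Tropical multiplication: ordinary addition of reals."""
    return x + y

def leq(x, y):
    """The canonical partial order of an additively idempotent semiring:
    x <= y  iff  x + y == y."""
    return t_add(x, y) == y
```

Here addition is idempotent, the zero −∞ is the least element of the order, and −∞ annihilates under tropical multiplication, matching the general statements in the text.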
In a model of the ringR{\displaystyle {\mathbb {R} }}, one can define a non-trivial positivity predicate0<x{\displaystyle 0<x}and a predicatex<y{\displaystyle x<y}as0<(y−x){\displaystyle 0<(y-x)}that constitutes a strict total order, which fulfills properties such as¬(x<0∨0<x)→x=0{\displaystyle \neg (x<0\lor 0<x)\to x=0}, or classically thelaw of trichotomy. With its standard addition and multiplication, this structure forms the strictlyordered fieldthat isDedekind-complete.By definition, allfirst-order propertiesproven in the theory of the reals are also provable in thedecidable theoryof thereal closed field. For example, herex<y{\displaystyle x<y}is mutually exclusive with∃d.y+d2=x{\displaystyle \exists d.y+d^{2}=x}. But beyond just ordered fields, the four properties listed below are also still valid in many sub-semirings ofR{\displaystyle {\mathbb {R} }}, including the rationals, the integers, as well as the non-negative parts of each of these structures. In particular, the non-negative reals, the non-negative rationals and the non-negative integers are such a semirings. The first two properties are analogous to the property valid in the idempotent semirings: Translation and scaling respect theseordered rings, in the sense that addition and multiplication in this ring validate In particular,(0<y∧0<s)→0<s⋅y{\displaystyle (0<y\land 0<s)\to 0<s\cdot y}and so squaring of elements preserves positivity. Take note of two more properties that are always valid in a ring. Firstly, triviallyP→x≤prey{\displaystyle P\,\to \,x\leq _{\text{pre}}y}for anyP{\displaystyle P}. In particular, thepositiveadditive difference existence can be expressed as Secondly, in the presence of a trichotomous order, the non-zero elements of the additive group are partitioned into positive and negative elements, with the inversion operation moving between them. With(−1)2=1{\displaystyle (-1)^{2}=1}, all squares are proven non-negative. 
Consequently, non-trivial rings have a positive multiplicative unit, Having discussed a strict order, it follows that0≠1{\displaystyle 0\neq 1}and1≠1+1{\displaystyle 1\neq 1+1}, etc. There are a few conflicting notions of discreteness in order theory. Given some strict order on a semiring, one such notion is given by1{\displaystyle 1}being positive andcovering0{\displaystyle 0}, i.e. there being no elementx{\displaystyle x}between the units,¬(0<x∧x<1){\displaystyle \neg (0<x\land x<1)}. Now in the present context, an order shall be calleddiscreteif this is fulfilled and, furthermore, all elements of the semiring are non-negative, so that the semiring starts out with the units. Denote byPA−{\displaystyle {\mathsf {PA}}^{-}}the theory of a commutative, discretely ordered semiring also validating the above four properties relating a strict order with the algebraic structure. All of its models have the modelN{\displaystyle \mathbb {N} }as its initial segment andGödel incompletenessandTarski undefinabilityalready apply toPA−{\displaystyle {\mathsf {PA}}^{-}}. The non-negative elements of a commutative,discretely ordered ringalways validate the axioms ofPA−{\displaystyle {\mathsf {PA}}^{-}}. So a slightly more exotic model of the theory is given by the positive elements in thepolynomial ringZ[X]{\displaystyle {\mathbb {Z} }[X]}, with positivity predicate forp=∑k=0nakXk{\displaystyle p={\textstyle \sum }_{k=0}^{n}a_{k}X^{k}}defined in terms of the last non-zero coefficient,0<p:=(0<an){\displaystyle 0<p:=(0<a_{n})}, andp<q:=(0<q−p){\displaystyle p<q:=(0<q-p)}as above. WhilePA−{\displaystyle {\mathsf {PA}}^{-}}proves allΣ1{\displaystyle \Sigma _{1}}-sentencesthat are true aboutN{\displaystyle \mathbb {N} }, beyond this complexity one can find simple such statements that areindependentofPA−{\displaystyle {\mathsf {PA}}^{-}}. 
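The discretely ordered polynomial model of PA⁻ described above can be sketched concretely: represent p ∈ ℤ[X] as a coefficient list and decide positivity by the last non-zero coefficient (hypothetical helper names; an illustration only):

```python
def positive(p):
    """0 < p for p in Z[X], with p given as a coefficient list [a0, a1, ...]:
    true iff the last non-zero coefficient is positive."""
    nonzero = [a for a in p if a != 0]
    return bool(nonzero) and nonzero[-1] > 0

def sub(p, q):
    """Coefficient-wise difference p - q, padding with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def less(p, q):
    """p < q  iff  0 < q - p, as in the text."""
    return positive(sub(q, p))

# The polynomial X (= [0, 1]) lies above every standard number n (= [n]),
# which is why N embeds as an initial segment of this nonstandard model.
x_above_all = all(less([n], [0, 1]) for n in range(1000))
```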
For example, while Π₁-sentences true about ℕ are still true for the other model just defined, inspection of the polynomial X demonstrates PA⁻-independence of the Π₂-claim that all numbers are of the form 2q or 2q + 1 ("odd or even"). Showing that ℤ[X, Y]/(X² − 2Y²) can also be discretely ordered demonstrates that the Π₁-claim x² ≠ 2y² for non-zero x ("no rational squared equals 2") is independent. Likewise, analysis of ℤ[X, Y, Z]/(XZ − Y²) demonstrates independence of some statements about factorization true in ℕ. There are PA characterizations of primality that PA⁻ does not validate for the number 2.

In the other direction, from any model of PA⁻ one may construct an ordered ring, which then has elements that are negative with respect to the order, and which is still discrete in the sense that 1 covers 0. To this end, one defines an equivalence class of pairs from the original semiring. Roughly, the ring corresponds to the differences of elements in the old structure, generalizing the way in which the initial ring ℤ can be defined from ℕ. This, in effect, adds all the inverses, and then the preorder is again trivial in that ∀x. x ≤pre 0.

Beyond the size of the two-element algebra, no simple semiring starts out with the units.
Being discretely ordered also stands in contrast to, e.g., the standard ordering on the semiring of non-negative rationalsQ≥0{\displaystyle {\mathbb {Q} }_{\geq 0}}, which isdensebetween the units. For another example,Z[X]/(2X2−1){\displaystyle {\mathbb {Z} }[X]/(2X^{2}-1)}can be ordered, but not discretely so. PA−{\displaystyle {\mathsf {PA}}^{-}}plusmathematical inductiongivesa theory equivalent tofirst-orderPeano arithmeticPA{\displaystyle {\mathsf {PA}}}. The theory is also famously notcategorical, butN{\displaystyle \mathbb {N} }is of course the intended model.PA{\displaystyle {\mathsf {PA}}}proves that there are no zero divisors and it is zerosumfree and so nomodel of itis a ring. The standard axiomatization ofPA{\displaystyle {\mathsf {PA}}}is more concise and the theory of its order is commonly treated in terms of the non-strict "≤pre{\displaystyle \leq _{\text{pre}}}". However, just removing the potent induction principle from that axiomatization does not leave a workable algebraic theory. Indeed, evenRobinson arithmeticQ{\displaystyle {\mathsf {Q}}}, which removes induction but adds back the predecessor existence postulate, does not prove the monoid axiom∀y.(0+y=y){\displaystyle \forall y.(0+y=y)}. Acomplete semiringis a semiring for which the additive monoid is acomplete monoid, meaning that it has aninfinitarysum operationΣI{\displaystyle \Sigma _{I}}for anyindex setI{\displaystyle I}and that the following (infinitary) distributive laws must hold:[10][11][12] Examples of a complete semiring are the power set of a monoid under union and the matrix semiring over a complete semiring.[13]For commutative, additively idempotent and simple semirings, this property is related toresiduated lattices. Acontinuous semiringis similarly defined as one for which the addition monoid is acontinuous monoid. That is, partially ordered with theleast upper bound property, and for which addition and multiplication respect order and suprema. 
The semiring ℕ ∪ {∞}, with the usual addition, multiplication and order extended, is a continuous semiring.[14] Any continuous semiring is complete:[10] this may be taken as part of the definition.[13]

A star semiring (sometimes spelled starsemiring) or closed semiring is a semiring with an additional unary operator *,[9][11][15][16] satisfying a* = 1 + a a* = 1 + a* a. A Kleene algebra is a star semiring with idempotent addition and some additional axioms. They are important in the theory of formal languages and regular expressions.[11] In a complete star semiring, the star operator behaves more like the usual Kleene star: for a complete semiring, the infinitary sum operator gives the usual definition of the Kleene star, a* = Σ_{j ≥ 0} a^j, where a⁰ = 1 and a^{j+1} = a⋅a^j.[11] Note that star semirings are not related to *-algebras, where the star operation should instead be thought of as complex conjugation.

A Conway semiring is a star semiring satisfying the sum-star and product-star equations:[9][17] (a + b)* = (a* b)* a* and (a b)* = 1 + a (b a)* b. Every complete star semiring is also a Conway semiring,[18] but the converse does not hold. An example of a Conway semiring that is not complete is the set of extended non-negative rational numbers ℚ≥0 ∪ {∞} with the usual addition and multiplication (this is a modification of the example with extended non-negative reals given in this section, obtained by eliminating the irrational numbers).[11] An iteration semiring is a Conway semiring satisfying the Conway group axioms,[9] associated by John Conway to groups in star-semirings.[19]

Several structures mentioned above can be equipped with a star operation. The (max, +) and (min, +) tropical semirings on the reals are often used in performance evaluation on discrete event systems.
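In the complete semiring of extended non-negative reals, the Kleene star is just the geometric series, and the star-semiring identity a* = 1 + a·a* can be checked numerically (a sketch; the function name is made up):

```python
INF = float("inf")

def star(a):
    """Kleene star in the complete semiring ([0, inf], +, *):
    a* = sum over n >= 0 of a^n, i.e. the geometric series."""
    return 1.0 / (1.0 - a) if a < 1.0 else INF

# The defining identity a* = 1 + a * a*, with ordinary + and * here.
identity_holds = all(
    star(a) == 1.0 + a * star(a) if star(a) == INF
    else abs(star(a) - (1.0 + a * star(a))) < 1e-12
    for a in (0.0, 0.25, 0.5, 2.0))
```

For a ≥ 1, the series diverges and the star is the top element ∞, which still satisfies the identity since 1 + a·∞ = ∞.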
The real numbers then are the "costs" or "arrival times"; the "max" operation corresponds to having to wait for all prerequisites of an event (thus taking the maximal time), while the "min" operation corresponds to being able to choose the best, least costly option; and + corresponds to accumulation along the same path. The Floyd–Warshall algorithm for shortest paths can thus be reformulated as a computation over a (min, +) algebra. Similarly, the Viterbi algorithm for finding the most probable state sequence corresponding to an observation sequence in a hidden Markov model can be formulated as a computation over a (max, ×) algebra on probabilities. These dynamic programming algorithms rely on the distributive property of their associated semirings to compute quantities over a large (possibly exponential) number of terms more efficiently than enumerating each of them.[28][29]

A generalization of semirings does not require the existence of a multiplicative identity, so that multiplication is a semigroup rather than a monoid. Such structures are called hemirings[30] or pre-semirings.[31] A further generalization are left-pre-semirings,[32] which additionally do not require right-distributivity (or right-pre-semirings, which do not require left-distributivity). Yet a further generalization are near-semirings: in addition to not requiring a neutral element for the product, or right-distributivity (or left-distributivity), they do not require addition to be commutative. Just as the cardinal numbers form a (class) semiring, so do the ordinal numbers form a near-semiring when the standard ordinal addition and multiplication are taken into account. However, the class of ordinals can be turned into a semiring by considering the so-called natural (or Hessenberg) operations instead.

In category theory, a 2-rig is a category with functorial operations analogous to those of a rig.
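The (min, +) reformulation of Floyd–Warshall described earlier in this section amounts to iterating the tropical matrix product; a minimal sketch (the small graph is made up for illustration):

```python
INF = float("inf")

def min_plus_closure(W):
    """Floyd-Warshall as a (min, +) semiring computation:
    the semiring's "+" is min, its "*" is ordinary addition."""
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Relax: either keep D[i][j], or route through vertex k.
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# Edge-weight matrix of a 3-vertex digraph; INF marks a missing edge.
W = [[0, 3, INF],
     [INF, 0, 1],
     [7, INF, 0]]
D = min_plus_closure(W)
```

Distributivity of + over min is exactly what lets the triple loop summarize all exponentially many paths.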
That the cardinal numbers form a rig can be categorified to say that thecategory of sets(or more generally, anytopos) is a 2-rig.
https://en.wikipedia.org/wiki/Semiring
Incommutative algebra, theprime spectrum(or simply thespectrum) of acommutative ringR{\displaystyle R}is the set of allprime idealsofR{\displaystyle R}, and is usually denoted bySpec⁡R{\displaystyle \operatorname {Spec} {R}};[1]inalgebraic geometryit is simultaneously atopological spaceequipped with asheaf of rings.[2] For anyidealI{\displaystyle I}ofR{\displaystyle R}, defineVI{\displaystyle V_{I}}to be the set ofprime idealscontainingI{\displaystyle I}. We can put atopologyonSpec⁡(R){\displaystyle \operatorname {Spec} (R)}by defining thecollection of closed setsto be This topology is called theZariski topology. Abasisfor the Zariski topology can be constructed as follows: Forf∈R{\displaystyle f\in R}, defineDf{\displaystyle D_{f}}to be the set of prime ideals ofR{\displaystyle R}not containingf{\displaystyle f}. Then eachDf{\displaystyle D_{f}}is an open subset ofSpec⁡(R){\displaystyle \operatorname {Spec} (R)}, and{Df:f∈R}{\displaystyle {\big \{}D_{f}:f\in R{\big \}}}is a basis for the Zariski topology. Spec⁡(R){\displaystyle \operatorname {Spec} (R)}is acompact space, but almost neverHausdorff: In fact, themaximal idealsinR{\displaystyle R}are precisely the closed points in this topology. By the same reasoning,Spec⁡(R){\displaystyle \operatorname {Spec} (R)}is not, in general, aT1space.[3]However,Spec⁡(R){\displaystyle \operatorname {Spec} (R)}is always aKolmogorov space(satisfies the T0axiom); it is also aspectral space. Given the spaceX=Spec⁡(R){\displaystyle X=\operatorname {Spec} (R)}with the Zariski topology, thestructure sheafOX{\displaystyle {\mathcal {O}}_{X}}is defined on the distinguished open subsetsDf{\displaystyle D_{f}}by settingΓ(Df,OX)=Rf,{\displaystyle \Gamma (D_{f},{\mathcal {O}}_{X})=R_{f},}thelocalizationofR{\displaystyle R}by the powers off{\displaystyle f}. It can be shown that this defines aB-sheafand therefore that it defines asheaf. 
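As a toy illustration of the definitions (not part of the article), the prime ideals of a small finite quotient such as ℤ/12ℤ can be enumerated by brute force; all names below are hypothetical:

```python
def spec_zmod(n):
    """Brute-force the prime ideals of Z/nZ (illustrative, not efficient).
    Every ideal of Z/nZ is generated by a divisor d of n."""
    ring = range(n)
    primes = []
    for d in range(1, n + 1):
        if n % d:
            continue
        I = {(k * d) % n for k in ring}          # the ideal generated by d
        proper = len(I) < n                      # prime ideals are proper
        # Primality: a*b in I implies a in I or b in I.
        prime = proper and all(a in I or b in I
                               for a in ring for b in ring
                               if (a * b) % n in I)
        if prime and I not in primes:
            primes.append(I)
    return primes

# Spec(Z/12) consists of the primes (2) and (3), matching 12 = 2^2 * 3.
spec12 = spec_zmod(12)
```

For a field such as ℤ/7ℤ, the only prime ideal is the zero ideal, so its spectrum is a single point.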
In more detail, the distinguished open subsets are abasisof the Zariski topology, so for an arbitrary open setU{\displaystyle U}, written as the union ofU=⋃i∈IDfi{\textstyle U=\bigcup _{i\in I}D_{f_{i}}}, we setΓ(U,OX)=lim←i∈I⁡Rfi,{\textstyle \Gamma (U,{\mathcal {O}}_{X})=\varprojlim _{i\in I}R_{f_{i}},}wherelim←{\displaystyle \varprojlim }denotes theinverse limitwith respect to the natural ring homomorphismsRf→Rfg.{\displaystyle R_{f}\to R_{fg}.}One may check that thispresheafis a sheaf, soSpec⁡(R){\displaystyle \operatorname {Spec} (R)}is aringed space. Any ringed space isomorphic to one of this form is called anaffine scheme. Generalschemesare obtained by gluing affine schemes together. Similarly, for amoduleM{\displaystyle M}over the ringR{\displaystyle R}, we may define a sheafM~{\displaystyle {\widetilde {M}}}onSpec⁡(R){\displaystyle \operatorname {Spec} (R)}. On the distinguished open subsets setΓ(Df,M~)=Mf,{\displaystyle \Gamma (D_{f},{\widetilde {M}})=M_{f},}using thelocalization of a module. As above, this construction extends to a presheaf on all open subsets ofSpec⁡(R){\displaystyle \operatorname {Spec} (R)}and satisfies thegluing axiom. A sheaf of this form is called aquasicoherent sheaf. Ifp{\displaystyle {\mathfrak {p}}}is a point inSpec⁡(R){\displaystyle \operatorname {Spec} (R)}, that is, a prime ideal, then thestalkof the structure sheaf atp{\displaystyle {\mathfrak {p}}}equals thelocalizationofR{\displaystyle R}at the idealp{\displaystyle {\mathfrak {p}}}, which is generally denotedRp{\displaystyle R_{\mathfrak {p}}}, and this is alocal ring. Consequently,Spec⁡(R){\displaystyle \operatorname {Spec} (R)}is alocally ringed space. IfR{\displaystyle R}is anintegral domain, withfield of fractionsK{\displaystyle K}, then we can describe the ringΓ(U,OX){\displaystyle \Gamma (U,{\mathcal {O}}_{X})}more concretely as follows. 
We say that an elementf{\displaystyle f}inK{\displaystyle K}is regular at a pointp{\displaystyle {\mathfrak {p}}}inX=Spec⁡R{\displaystyle X=\operatorname {Spec} {R}}if it can be represented as a fractionf=a/b{\displaystyle f=a/b}withb∉p{\displaystyle b\notin {\mathfrak {p}}}. Note that this agrees with the notion of aregular functionin algebraic geometry. Using this definition, we can describeΓ(U,OX){\displaystyle \Gamma (U,{\mathcal {O}}_{X})}as precisely the set of elements ofK{\displaystyle K}that are regular at every pointp{\displaystyle {\mathfrak {p}}}inU{\displaystyle U}. It is useful to use the language ofcategory theoryand observe thatSpec{\displaystyle \operatorname {Spec} }is afunctor. Everyring homomorphismf:R→S{\displaystyle f:R\to S}induces acontinuousmapSpec⁡(f):Spec⁡(S)→Spec⁡(R){\displaystyle \operatorname {Spec} (f):\operatorname {Spec} (S)\to \operatorname {Spec} (R)}(since the preimage of any prime ideal inS{\displaystyle S}is a prime ideal inR{\displaystyle R}). In this way,Spec{\displaystyle \operatorname {Spec} }can be seen as a contravariant functor from thecategory of commutative ringsto thecategory of topological spaces. Moreover, for every primep{\displaystyle {\mathfrak {p}}}the homomorphismf{\displaystyle f}descends to homomorphisms of local rings. ThusSpec{\displaystyle \operatorname {Spec} }even defines a contravariant functor from the category of commutative rings to the category oflocally ringed spaces. In fact it is the universal such functor, and hence can be used to define the functorSpec{\displaystyle \operatorname {Spec} }up tonatural isomorphism.[citation needed] The functorSpec{\displaystyle \operatorname {Spec} }yields a contravariantequivalencebetween thecategory of commutative ringsand thecategory of affine schemes; each of these categories is often thought of as theopposite categoryof the other. Following on from the example, inalgebraic geometryone studiesalgebraic sets, i.e. 
subsets ofKn{\displaystyle K^{n}}(whereK{\displaystyle K}is analgebraically closed field) that are defined as the common zeros of a set ofpolynomialsinn{\displaystyle n}variables. IfA{\displaystyle A}is such an algebraic set, one considers the commutative ringR{\displaystyle R}of allpolynomial functionsA→K{\displaystyle A\to K}. Themaximal idealsofR{\displaystyle R}correspond to the points ofA{\displaystyle A}(becauseK{\displaystyle K}is algebraically closed), and theprime idealsofR{\displaystyle R}correspond to theirreducible subvarietiesofA{\displaystyle A}(an algebraic set is calledirreducibleif it cannot be written as the union of two proper algebraic subsets). The spectrum ofR{\displaystyle R}therefore consists of the points ofA{\displaystyle A}together with elements for all irreducible subvarieties ofA{\displaystyle A}. The points ofA{\displaystyle A}are closed in the spectrum, while the elements corresponding to subvarieties have a closure consisting of all their points and subvarieties. If one only considers the points ofA{\displaystyle A}, i.e. the maximal ideals inR{\displaystyle R}, then the Zariski topology defined above coincides with the Zariski topology defined on algebraic sets (which has precisely the algebraic subsets as closed sets). Specifically, the maximal ideals inR{\displaystyle R}, i.e.MaxSpec⁡(R){\displaystyle \operatorname {MaxSpec} (R)}, together with the Zariski topology, ishomeomorphictoA{\displaystyle A}also with the Zariski topology. One can thus view the topological spaceSpec⁡(R){\displaystyle \operatorname {Spec} (R)}as an "enrichment" of the topological spaceA{\displaystyle A}(with Zariski topology): for every irreducible subvariety ofA{\displaystyle A}, one additional non-closed point has been introduced, and this point "keeps track" of the corresponding irreducible subvariety. One thinks of this point as thegeneric pointfor the irreducible subvariety. 
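The functoriality of Spec discussed above — a ring homomorphism pulls prime ideals back to prime ideals — can be illustrated with finite rings (a sketch with made-up names), here using the reduction homomorphism ℤ/12ℤ → ℤ/6ℤ:

```python
def preimage(f, I, domain):
    """Spec(f): a prime ideal I of the codomain pulls back to f^{-1}(I)."""
    return {x for x in domain if f(x) in I}

f = lambda x: x % 6                      # reduction Z/12 -> Z/6 (6 divides 12)

# Sanity check that f respects addition and multiplication.
is_hom = all(f((x + y) % 12) == (f(x) + f(y)) % 6 and
             f((x * y) % 12) == (f(x) * f(y)) % 6
             for x in range(12) for y in range(12))

P2, P3 = {0, 2, 4}, {0, 3}               # the prime ideals (2), (3) of Z/6
Q2 = preimage(f, P2, range(12))          # pulls back to (2) in Z/12
Q3 = preimage(f, P3, range(12))          # pulls back to (3) in Z/12
```

The preimages are again the prime ideals (2) and (3), now inside ℤ/12ℤ, so Spec(f) maps Spec(ℤ/6) into Spec(ℤ/12) as the text describes.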
Furthermore, the structure sheaf on Spec(R) and the sheaf of polynomial functions on A are essentially identical. By studying spectra of polynomial rings instead of algebraic sets with the Zariski topology, one can generalize the concepts of algebraic geometry to non-algebraically closed fields and beyond, eventually arriving at the language of schemes. Here are some examples of schemes that are not affine schemes; they are constructed by gluing affine schemes together. Some authors (notably M. Hochster) consider topologies on prime spectra other than the Zariski topology. First, there is the notion of constructible topology: given a ring A, the subsets of Spec(A) of the form φ*(Spec B), for φ: A → B, satisfy the axioms for closed sets in a topological space. This topology on Spec(A) is called the constructible topology.[7][8] In Hochster (1969), Hochster considers what he calls the patch topology on a prime spectrum.[9][10][11] By definition, the patch topology is the smallest topology in which the sets of the form V(I) and Spec(A) − V(f) are closed. There is a relative version of the functor Spec called global Spec, or relative Spec. If S is a scheme, then relative Spec is written Spec_S (underlined or in bold); if S is clear from the context, it may be abbreviated to Spec.
For a scheme S and a quasi-coherent sheaf of O_S-algebras A, there is a scheme Spec_S(A) and a morphism f: Spec_S(A) → S such that for every open affine U ⊆ S there is an isomorphism f^-1(U) ≅ Spec(A(U)), and such that for open affines V ⊆ U, the inclusion f^-1(V) → f^-1(U) is induced by the restriction map A(U) → A(V). That is, just as ring homomorphisms induce opposite maps of spectra, the restriction maps of a sheaf of algebras induce the inclusion maps of the spectra that make up the Spec of the sheaf. Global Spec has a universal property similar to the universal property for ordinary Spec. More precisely, just as Spec and the global section functor are contravariant right adjoints between the category of commutative rings and schemes, global Spec and the direct image functor for the structure map are contravariant right adjoints between the category of commutative O_S-algebras and schemes over S.[dubious – discuss] In formulas: Hom_{O_S-alg}(A, π_*O_X) ≅ Hom_{Sch/S}(X, Spec_S(A)), where π: X → S is a morphism of schemes.
The relative spec is the correct tool for parameterizing the family of lines through the origin ofAC2{\displaystyle \mathbb {A} _{\mathbb {C} }^{2}}overX=Pa,b1.{\displaystyle X=\mathbb {P} _{a,b}^{1}.}Consider the sheaf of algebrasA=OX[x,y],{\displaystyle {\mathcal {A}}={\mathcal {O}}_{X}[x,y],}and letI=(ay−bx){\displaystyle {\mathcal {I}}=(ay-bx)}be a sheaf of ideals ofA.{\displaystyle {\mathcal {A}}.}Then the relative specSpec_X(A/I)→Pa,b1{\displaystyle {\underline {\operatorname {Spec} }}_{X}({\mathcal {A}}/{\mathcal {I}})\to \mathbb {P} _{a,b}^{1}}parameterizes the desired family. In fact, the fiber over[α:β]{\displaystyle [\alpha :\beta ]}is the line through the origin ofA2{\displaystyle \mathbb {A} ^{2}}containing the point(α,β).{\displaystyle (\alpha ,\beta ).}Assumingα≠0,{\displaystyle \alpha \neq 0,}the fiber can be computed by looking at the composition of pullback diagrams where the composition of the bottom arrows gives the line containing the point(α,β){\displaystyle (\alpha ,\beta )}and the origin. This example can be generalized to parameterize the family of lines through the origin ofACn+1{\displaystyle \mathbb {A} _{\mathbb {C} }^{n+1}}overX=Pa0,...,ann{\displaystyle X=\mathbb {P} _{a_{0},...,a_{n}}^{n}}by lettingA=OX[x0,...,xn]{\displaystyle {\mathcal {A}}={\mathcal {O}}_{X}[x_{0},...,x_{n}]}andI=(2×2minors of(a0⋯anx0⋯xn)).{\displaystyle {\mathcal {I}}=\left(2\times 2{\text{ minors of }}{\begin{pmatrix}a_{0}&\cdots &a_{n}\\x_{0}&\cdots &x_{n}\end{pmatrix}}\right).} From the perspective ofrepresentation theory, a prime idealIcorresponds to a moduleR/I, and the spectrum of a ring corresponds toirreduciblecyclic representations ofR, while more general subvarieties correspond to possibly reducible representations that need not be cyclic. Recall that abstractly, the representation theory of agroupis the study of modules over itsgroup algebra. 
The connection to representation theory is clearer if one considers thepolynomial ringR=K[x1,…,xn]{\displaystyle R=K[x_{1},\dots ,x_{n}]}or, without a basis,R=K[V].{\displaystyle R=K[V].}As the latter formulation makes clear, a polynomial ring is the group algebra over avector space, and writing in terms ofxi{\displaystyle x_{i}}corresponds to choosing a basis for the vector space. Then an idealI,or equivalently a moduleR/I,{\displaystyle R/I,}is a cyclic representation ofR(cyclicmeaning generated by 1 element as anR-module; this generalizes 1-dimensional representations). In the case that the field is algebraically closed (say, the complex numbers), every maximal ideal corresponds to a point inn-space, by theNullstellensatz(the maximal ideal generated by(x1−a1),(x2−a2),…,(xn−an){\displaystyle (x_{1}-a_{1}),(x_{2}-a_{2}),\ldots ,(x_{n}-a_{n})}corresponds to the point(a1,…,an){\displaystyle (a_{1},\ldots ,a_{n})}). These representations ofK[V]{\displaystyle K[V]}are then parametrized by thedual spaceV∗,{\displaystyle V^{*},}the covector being given by sending eachxi{\displaystyle x_{i}}to the correspondingai{\displaystyle a_{i}}. Thus a representation ofKn{\displaystyle K^{n}}(K-linear mapsKn→K{\displaystyle K^{n}\to K}) is given by a set ofnnumbers, or equivalently a covectorKn→K.{\displaystyle K^{n}\to K.} Thus, points inn-space, thought of as the max spec ofR=K[x1,…,xn],{\displaystyle R=K[x_{1},\dots ,x_{n}],}correspond precisely to 1-dimensional representations ofR, while finite sets of points correspond to finite-dimensional representations (which are reducible, corresponding geometrically to being a union, and algebraically to not being a prime ideal). The non-maximal ideals then correspond toinfinite-dimensional representations. The term "spectrum" comes from the use inoperator theory. 
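The point–ideal–representation dictionary just described can be written out explicitly (with K algebraically closed, as above):

```latex
% A point a = (a_1, \dots, a_n) \in K^n gives the evaluation homomorphism
\operatorname{ev}_a : K[x_1,\dots,x_n] \to K,
\qquad f \mapsto f(a_1,\dots,a_n),
% whose kernel is the maximal ideal
\mathfrak{m}_a = (x_1 - a_1,\,\dots,\,x_n - a_n) = \ker(\operatorname{ev}_a),
% so the cyclic module
R/\mathfrak{m}_a \cong K
% is exactly the 1-dimensional representation "evaluate at a".
```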
Given a linear operator T on a finite-dimensional vector space V, one can consider the vector space with operator as a module over the polynomial ring in one variable R = K[T], as in the structure theorem for finitely generated modules over a principal ideal domain. Then the spectrum of K[T] (as a ring) equals the spectrum of T (as an operator). Further, the geometric structure of the spectrum of the ring (equivalently, the algebraic structure of the module) captures the behavior of the spectrum of the operator, such as algebraic multiplicity and geometric multiplicity. For instance, the 2×2 identity matrix has corresponding module K[T]/(T − 1) ⊕ K[T]/(T − 1); the 2×2 zero matrix has module K[T]/(T) ⊕ K[T]/(T), showing geometric multiplicity 2 for the zero eigenvalue; while a non-trivial 2×2 nilpotent matrix has module K[T]/(T²), showing algebraic multiplicity 2 but geometric multiplicity 1. The spectrum can be generalized from rings to C*-algebras in operator theory, yielding the notion of the spectrum of a C*-algebra. Notably, for a Hausdorff space, the algebra of scalars (the bounded continuous functions on the space, being analogous to regular functions) is a commutative C*-algebra, with the space being recovered as a topological space from MaxSpec of the algebra of scalars, indeed functorially so; this is the content of the Banach–Stone theorem. Indeed, any commutative C*-algebra can be realized as the algebra of scalars of a Hausdorff space in this way, yielding the same correspondence as between a ring and its spectrum. Generalizing to non-commutative C*-algebras yields noncommutative topology.
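The multiplicity behavior described above can be checked numerically; a small sketch using NumPy (the variable names are chosen here for illustration):

```python
import numpy as np

# A non-trivial 2x2 nilpotent matrix: the eigenvalue 0 has algebraic
# multiplicity 2 (it is a double root of the characteristic polynomial
# T^2) but geometric multiplicity 1 (the kernel is 1-dimensional).
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

eigenvalues = np.linalg.eigvals(N)                     # both are 0
geometric_multiplicity = 2 - np.linalg.matrix_rank(N)  # dim ker N = 1

# The 2x2 zero matrix, by contrast, has geometric multiplicity 2:
Z = np.zeros((2, 2))
assert 2 - np.linalg.matrix_rank(Z) == 2
```

This mirrors the module description: K[T]/(T²) is generated by a single element (geometric multiplicity 1) but has length 2 (algebraic multiplicity 2).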
https://en.wikipedia.org/wiki/Spectrum_of_a_ring
In algebra, a simplicial commutative ring is a commutative monoid in the category of simplicial abelian groups, or, equivalently, a simplicial object in the category of commutative rings. If A is a simplicial commutative ring, then it can be shown that π0A is a ring and the πiA are modules over that ring (in fact, π*A is a graded ring over π0A). A topology counterpart of this notion is a commutative ring spectrum. Let A be a simplicial commutative ring. Then the ring structure of A gives π*A = ⊕_{i≥0} πiA the structure of a graded-commutative graded ring as follows. By the Dold–Kan correspondence, π*A is the homology of the chain complex corresponding to A; in particular, it is a graded abelian group. Next, to multiply two elements, writing S^1 for the simplicial circle, let x: (S^1)^∧i → A and y: (S^1)^∧j → A be two maps. Then the composition (S^1)^∧i ∧ (S^1)^∧j → A ∧ A → A, the second map being the multiplication of A, induces a map (S^1)^∧(i+j) → A. This in turn gives an element in π_{i+j}A. We have thus defined the graded multiplication πiA × πjA → π_{i+j}A. It is associative because the smash product is. It is graded-commutative (i.e., xy = (−1)^{|x||y|}yx) since the involution S^1 ∧ S^1 → S^1 ∧ S^1 introduces a minus sign. If M is a simplicial module over A (that is, M is a simplicial abelian group with an action of A), then a similar argument shows that π*M has the structure of a graded module over π*A (cf. module spectrum).
By definition, the category of affine derived schemes is the opposite category of the category of simplicial commutative rings; an object corresponding to A will be denoted by Spec A.
https://en.wikipedia.org/wiki/Simplicial_commutative_ring
Incryptography, theElliptic Curve Digital Signature Algorithm(ECDSA) offers a variant of theDigital Signature Algorithm(DSA) which useselliptic-curve cryptography. As with elliptic-curve cryptography in general, the bitsizeof theprivate keybelieved to be needed for ECDSA is about twice the size of thesecurity level, in bits.[1]For example, at a security level of 80 bits—meaning an attacker requires a maximum of about280{\displaystyle 2^{80}}operations to find the private key—the size of an ECDSA private key would be 160 bits. On the other hand, the signature size is the same for both DSA and ECDSA: approximately4t{\displaystyle 4t}bits, wheret{\displaystyle t}is the exponent in the formula2t{\displaystyle 2^{t}}, that is, about 320 bits for a security level of 80 bits, which is equivalent to280{\displaystyle 2^{80}}operations. SupposeAlicewants to send a signed message toBob. Initially, they must agree on the curve parameters(CURVE,G,n){\displaystyle ({\textrm {CURVE}},G,n)}. In addition to thefieldand equation of the curve, we needG{\displaystyle G}, a base point of prime order on the curve;n{\displaystyle n}is the additive order of the pointG{\displaystyle G}. The ordern{\displaystyle n}of the base pointG{\displaystyle G}must be prime. Indeed, we assume that every nonzero element of theringZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }is invertible, so thatZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }must be afield. It implies thatn{\displaystyle n}must be prime (cf.Bézout's identity). Alice creates a key pair, consisting of a private key integerdA{\displaystyle d_{A}}, randomly selected in the interval[1,n−1]{\displaystyle [1,n-1]}; and a public key curve pointQA=dA×G{\displaystyle Q_{A}=d_{A}\times G}. We use×{\displaystyle \times }to denoteelliptic curve point multiplication by a scalar. 
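The key-pair construction above can be sketched on a toy curve: the textbook example y² = x³ + 2x + 2 over F₁₇, whose group has prime order n = 19 with base point G = (5, 1). This is for illustration only and is nowhere near cryptographic strength; the helper names are made up for this sketch:

```python
# Toy elliptic-curve key generation: Q_A = d_A x G by double-and-add.
# Curve: y^2 = x^3 + 2x + 2 over F_17, group order n = 19 (prime).
p, a, b = 17, 2, 2
G, n = (5, 1), 19   # base point and its (prime) order

def ec_add(P, Q):
    """Affine Weierstrass addition; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                        # P + (-P) = O
    if P == Q:                             # doubling
        lam = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:                                  # generic chord
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Scalar multiplication k x P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

d_A = 7                  # private key, drawn from [1, n-1]
Q_A = ec_mul(d_A, G)     # public key

assert ec_mul(n, G) is None   # G has order n, so n x G is the identity
```

A real implementation would use a standardized curve such as secp256k1 or P-256, constant-time field arithmetic, and a cryptographically secure source for d_A; the double-and-add loop above also leaks timing information and is shown only for clarity.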
For Alice to sign a messagem{\displaystyle m}, she follows these steps: As the standard notes, it is not only required fork{\displaystyle k}to be secret, but it is also crucial to select differentk{\displaystyle k}for different signatures. Otherwise, the equation in step 6 can be solved fordA{\displaystyle d_{A}}, the private key: given two signatures(r,s){\displaystyle (r,s)}and(r,s′){\displaystyle (r,s')}, employing the same unknownk{\displaystyle k}for different known messagesm{\displaystyle m}andm′{\displaystyle m'}, an attacker can calculatez{\displaystyle z}andz′{\displaystyle z'}, and sinces−s′=k−1(z−z′){\displaystyle s-s'=k^{-1}(z-z')}(all operations in this paragraph are done modulon{\displaystyle n}) the attacker can findk=z−z′s−s′{\displaystyle k={\frac {z-z'}{s-s'}}}. Sinces=k−1(z+rdA){\displaystyle s=k^{-1}(z+rd_{A})}, the attacker can now calculate the private keydA=sk−zr{\displaystyle d_{A}={\frac {sk-z}{r}}}. This implementation failure was used, for example, to extract the signing key used for thePlayStation 3gaming-console.[3] Another way ECDSA signature may leak private keys is whenk{\displaystyle k}is generated by a faultyrandom number generator. Such a failure in random number generation caused users of Android Bitcoin Wallet to lose their funds in August 2013.[4] To ensure thatk{\displaystyle k}is unique for each message, one may bypass random number generation completely and generate deterministic signatures by derivingk{\displaystyle k}from both the message and the private key.[5] For Bob to authenticate Alice's signaturer,s{\displaystyle r,s}on a messagem{\displaystyle m}, he must have a copy of her public-key curve pointQA{\displaystyle Q_{A}}. Bob can verifyQA{\displaystyle Q_{A}}is a valid curve point as follows: After that, Bob follows these steps: Note that an efficient implementation would compute inverses−1modn{\displaystyle s^{-1}\,{\bmod {\,}}n}only once. 
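The nonce-reuse key recovery described above can be demonstrated with plain modular arithmetic, since every step happens modulo the group order n. The values below (d_A, k, r, z, z') are made up for the demonstration; in a real signature r would be the x-coordinate of k×G:

```python
# Recovering the private key from two ECDSA signatures that reuse the
# same nonce k. n is the group order of secp256k1; the other values are
# illustrative.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d_A = 42          # victim's private key (to be "discovered" below)
k = 7             # the fatally reused nonce
r = 55            # in real ECDSA this comes from k x G; made up here
z, z2 = 100, 200  # hashes of the two different signed messages

# The two leaked signatures, each s = k^-1 (z + r*d_A) mod n:
s = pow(k, -1, n) * (z + r * d_A) % n
s2 = pow(k, -1, n) * (z2 + r * d_A) % n

# Attacker's computation. Since s - s2 = k^-1 (z - z2) mod n:
k_rec = (z - z2) * pow(s - s2, -1, n) % n
# And from s = k^-1 (z + r*d_A), solve for d_A:
d_rec = (s * k_rec - z) * pow(r, -1, n) % n

assert k_rec == k and d_rec == d_A   # full key compromise
```

This is exactly why step 6's k must be unique (and secret) per signature: two linear equations in the two unknowns k and d_A are enough to solve for both.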
Also, using Shamir's trick, a sum of two scalar multiplications u1×G + u2×QA can be calculated faster than two scalar multiplications done independently.[6] It is not immediately obvious why verification even functions correctly. To see why, denote as C the curve point computed in step 5 of verification: C = u1×G + u2×QA. From the definition of the public key as QA = dA×G, C = u1×G + u2·dA×G. Because elliptic curve scalar multiplication distributes over addition, C = (u1 + u2·dA)×G. Expanding the definition of u1 and u2 from verification step 4, C = (z·s^-1 + r·dA·s^-1)×G. Collecting the common term s^-1, C = (z + r·dA)·s^-1×G. Expanding the definition of s from signature step 6, C = (z + r·dA)·(z + r·dA)^-1·(k^-1)^-1×G. Since the inverse of an inverse is the original element, and the product of an element's inverse and the element is the identity, we are left with C = k×G. From the definition of r, this is verification step 6. This shows only that a correctly signed message will verify correctly; other properties such as incorrectly signed messages failing to verify correctly and resistance to cryptanalytic attacks are required for a secure signature algorithm. Given a message m and Alice's signature (r, s) on that message, Bob can (potentially) recover Alice's public key:[7] Note that an invalid signature, or a signature from a different message, will result in the recovery of an incorrect public key. The recovery algorithm can only be used to check validity of a signature if the signer's public key (or its hash) is known beforehand.
Start with the definition of QA from recovery step 6: QA = u1×G + u2×R. From the definition R = (x1, y1) = k×G from signing step 4, QA = u1×G + u2·k×G. Because elliptic curve scalar multiplication distributes over addition, QA = (u1 + u2·k)×G. Expanding the definition of u1 and u2 from recovery step 5, QA = (−z·r^-1 + s·r^-1·k)×G. Expanding the definition of s from signature step 6, QA = (−z·r^-1 + k^-1·(z + r·dA)·r^-1·k)×G. Since the product of an element's inverse and the element is the identity, we are left with QA = (−z·r^-1 + z·r^-1 + dA)×G. The first and second terms cancel each other out, leaving QA = dA×G. From the definition QA = dA×G, this is Alice's public key. This shows that a correctly signed message will recover the correct public key, provided additional information was shared to uniquely calculate curve point R = (x1, y1) from signature value r. In December 2010, a group calling itself fail0verflow announced the recovery of the ECDSA private key used by Sony to sign software for the PlayStation 3 game console. However, this attack only worked because Sony did not properly implement the algorithm, because k was static instead of random. As pointed out in the Signature generation algorithm section above, this makes dA solvable, rendering the entire algorithm useless.[8] On March 29, 2011, two researchers published an IACR paper[9] demonstrating that it is possible to retrieve a TLS private key of a server using OpenSSL that authenticates with Elliptic Curves DSA over a binary field via a timing attack.[10] The vulnerability was fixed in OpenSSL 1.0.0e.[11] In August 2013, it was revealed that bugs in some implementations of the Java class SecureRandom sometimes generated collisions in the k value.
This allowed hackers to recover private keys giving them the same control over bitcoin transactions as legitimate keys' owners had, using the same exploit that was used to reveal the PS3 signing key on someAndroidapp implementations, which use Java and rely on ECDSA to authenticate transactions.[12] This issue can be prevented by deterministic generation of k, as described by RFC 6979. Some concerns expressed about ECDSA: Below is a list of cryptographic libraries that provide support for ECDSA:
https://en.wikipedia.org/wiki/ECDSA
ACP-131[1]is the controlling publication for the listing ofQ codesandZ codes.It is published and revised from time to time by theCombined Communications Electronics Board(CCEB) countries: Australia, New Zealand, Canada, United Kingdom, and United States. When the meanings of the codes contained in ACP-131 are translated into various languages, the codes provide a means of communicating between ships of various nations, such as during aNATOexercise, where there is no commonlanguage. The original edition of ACP-131 was published by the U.S. military during the early years[when?]ofradiotelegraphyfor use byradio operatorsusingMorse Codeoncontinuous wave(CW) telegraphy. It became especially useful, and even essential, towirelessradio operators on both military and civilian ships at sea before the development of advancedsingle-sidebandtelephonyin the 1960s. Radio communications, prior to the advent oflandlinesandsatellitesas communication paths and relays, was always subject to unpredictable fade outs caused byweatherconditions, practical limits on available emission power at thetransmitter,radio frequencyof the transmission, type of emission, type of transmittingantenna,signalwaveform characteristics, modulation scheme in use, sensitivity of thereceiverand presence, or lack of presence, of atmospheric reflective layers above the earth, such as theE-layerandF-layers, the type of receiving antenna, the time of day, and numerous other factors. Because of these factors which often resulted in limiting periods of transmission time on certain frequencies to only several hours a day, or only several minutes, it was found necessary to keep each wireless transmission as short as possible and to still get the message through. This was particularly true of CW radio circuits shared by a number of operators, with some waiting their turn to transmit. 
As a result, an operator communicating by radio telegraphy to another operator, wanting to know how the other operator was receiving the signal, could send out a message on his key in Morse code stating, "How are you receiving me?" Using ACP-131 codes, the question could be phrased simply "INT QRK", resulting in much more efficient use of circuit time. If the receiver hears the sender in a "loud and clear" condition, the response would be "QRK 5X5", all of which requires less circuit time and less "pounding" on the key by the sending operators. Should the receiving operator not understand the sending operator, the receiving operator would send "?" or the marginally shorter "INT". The other operator would respond again with the same report, which is much easier than retransmitting "How are you receiving me?" If the receiving operator understood the sending operator, the receiving operator would say the word "ROGER" or "MESSAGE RECEIVED", or send the short form "R". ("R" and "?" are similarly structured, but very easy to distinguish.) According to ACP-125(F), paragraphs 103 and 104, in radio communication among Allied military units: Some assert that the use of Q codes and Z codes was not intended for use on voice circuits, where plain language was speedy and easily recognizable, especially when employing the character recognition system in use at the time, such as ALPHA, BRAVO, CHARLIE, etc. However, in military communication the latter are still in use.[2] A typical simplex military voice exchange: However, some voice operators, such as amateur radio operators, find it convenient or traditional to carry over some of the Q codes to voice ("phone") exchanges, such as "QSL", "QRK", "QTH", etc.
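The brevity gain can be quantified by counting Morse elements (dots and dashes). A small sketch, using a partial code table covering just the letters needed; the function name is made up for illustration:

```python
# Compare the keying effort of the plain-language question with the
# ACP-131 shorthand by counting dots and dashes.
MORSE = {
    'A': '.-',   'C': '-.-.', 'E': '.',    'G': '--.',  'H': '....',
    'I': '..',   'K': '-.-',  'M': '--',   'N': '-.',   'O': '---',
    'Q': '--.-', 'R': '.-.',  'T': '-',    'U': '..-',  'V': '...-',
    'W': '.--',  'Y': '-.--',
}

def elements(text):
    """Total number of dots and dashes needed to key the text."""
    return sum(len(MORSE[c]) for c in text if c.isalpha())

long_form = elements("HOW ARE YOU RECEIVING ME")   # 51 elements
short_form = elements("INT QRK")                   # 15 elements
assert short_form < long_form
```

Even before accounting for inter-character and inter-word spacing, the shorthand needs less than a third of the key-down elements, which is the whole point of the Q-code system on congested CW circuits.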
https://en.wikipedia.org/wiki/ACP_131
Alfred Lewis Vail(September 25, 1807 – January 18, 1859) was an Americanmachinistand inventor. Along withSamuel Morse, Vail was central in developing and commercializing Americanelectrical telegraphybetween 1837 and 1844.[1][a] Vail and Morse were the first two telegraph operators on Morse's first experimental line betweenWashington, D.C., andBaltimore, and Vail took charge of building and managing several early telegraph lines between 1845 and 1848. He was also responsible for several technical innovations of Morse's system, particularly the firstsending key, which Vail invented,[2]and improved recordingregistersandrelaymagnets. Vail left the telegraph industry in 1848 because he believed that the managers of Morse's lines did not fully value his contributions. His last assignment, superintendent of theWashington and New Orleans Telegraph Company, paid him only $900 a year, leading Vail to write to Morse,[3] I have made up my mind to leave the Telegraph to take care of itself, since it cannot take care of me. I shall, in a few months, leave Washington for New Jersey,... and bidadieuto the subject of the Telegraph for some more profitable business. Vail's parents were Bethiah Youngs (1778–1847) andStephen Vail(1780–1864). Vail was born inMorristown, New Jersey, where his father was an entrepreneur and industrialist who built theSpeedwell Ironworksinto one of the most innovative iron works of its time.[4]Their other sonGeorge Vail, Alfred's brother, was a noted politician. Alfred attended public schools before taking a job as amachinistat the iron works. He enrolled inNew York Universityto study theology in 1832, where he was an active and successful student and a member of theEucleian Society, graduating in 1836.[1] Visiting hisalma materon September 2, 1837, Vail happened to witness one ofSamuel Morse's early telegraph experiments. 
He became fascinated by the technology and negotiated an arrangement with Morse to develop the technology atSpeedwell Ironworks, at his own expense, in return for 25% of the proceeds. Alfred split his share with his brother George Vail. After having secured his father's financial backing, and being a skilled machinist, Vail refined Morse's crude prototype telegraph to make it suitable for public demonstration and commercial operation. The first successful completion of a transmission with this system was at the Speedwell Iron Works on January 6, 1838, across two miles (3 km) of wire. The message read "A patient waiter is no loser." Over the next few months Morse and Vail demonstrated the telegraph to Philadelphia'sFranklin Institute, members of Congress, and PresidentMartin Van Burenand his cabinet. Demonstrations such as these were crucial to Morse's obtaining a Congressional appropriation of $30,000 to build his first line in 1844 from Washington to Baltimore. When Morse took on an influential congressman as a partner,[b]Morse cut the Vail brothers' share to one-eighth, although the other partners' shares were not reduced. Morse retained patent rights to all the apparatus and the alphabetic code-system that Vail had developed. Vail retired from the telegraph operations in 1848 and moved back to Morristown, where he spent his last ten years researchinggenealogy. Since Alfred and his brother George shared a one-eighth interest in Morse's telegraph patents, Vail realized far less financial gain from his work on the telegraph than Morse and others. His papers and equipment were subsequently donated by his son Stephen to theSmithsonian InstitutionandNew Jersey Historical Society. Alfred Vail's cousin,Theodore N. Vail, became the first president ofAmerican Telephone & Telegraph. Alfred Vail and Samuel Morse collaborated in the invention ofMorse code. 
The "Morse code" that went into operational use after Vail had become involved was very different fromMorse's original plan.[c]A controversy exists over the role of each in the invention. The argument for Vail being the original inventor is laid out by several scholars.[6][7][8][d][9][e] The argument offered by supporters of Morse claims that Morse originally devised a cipher code similar to that used in existingsemaphore linetelegraphs, by which words were assigned three- or four-digit numbers and entered into a code book. The sending operator converted words to these number groups and the receiving operator converted them back to words through the same code book. Morse spent several months compiling this code dictionary. It is said by Morse supporters that Vail, in public and private writings, never claimed the code for himself. According to one researcher, in a February 1838 letter to his father, Judge Stephen Vail, Alfred wrote,[5] Professor Morse has invented a new plan of an alphabet, and has thrown aside the Dictionaries. In an 1845 book Vail wrote describing Morse's telegraph, he also attributed the code to Morse.[10]He died in 1859 at the age of 51.[11] A U.S. Army base was named in Vail's honor: Camp Vail inEatontown, New Jersey, later temporarily renamedFort Monmouth, was an Army housing complex. After World War II the families of servicemen and civilian Army employees negotiated with the Army to purchase the development, which they incorporated as the "Alfred Vail Mutual Association". Due to the diligent efforts of the town clerk, the rights of the charter of the originalShrewsbury Township(est. 1693) were transferred to the residents. The housing development exists under that name to this day. An elementary school inMorristown, New Jersey, near the site of the Speedwell Iron Works, is named Alfred Vail Elementary School in his honor.
https://en.wikipedia.org/wiki/Alfred_Vail
The CW Operators' Club, commonly known asCWops, is an international organization, in membership and management, foramateur radiooperators who enjoy communicating usingMorse Code. Its mission is to foster the use ofCW, whether forcontesting,DXing,traffic handling, or engaging in conversations.[1]A CWops nominee must be capable of sending and receiving International Morse Code at no less than 25 words per minute using the English language and submit dues.[2]CWops is an activity-based organization that sponsors many events. The CWops are dedicated to promoting goodwill and education to amateur radio operators throughout the world. Many members are notable contesters, DXers, andQRQ(high speed) Morse Code operators.[1] CWis an abbreviation for Continuous Wave, describing the mode in which Morse code is most often transmitted. A transmitter is simply keyed on and off, and the presence or absence of carrier is decoded in the receiver as the presence or absence of a tone.
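The on/off keying described above follows fixed timing ratios: a dot is one unit, a dash three, with one unit of silence between elements of a letter and three between letters. A sketch computing the keying duration of the general call "CQ" under these standard ratios (the unit length 1.2/WPM seconds comes from the conventional PARIS word timing):

```python
# Morse on/off keying durations in timing units:
# dot = 1, dash = 3, 1 unit between elements, 3 units between letters.
MORSE = {'C': '-.-.', 'Q': '--.-'}

def duration_units(word):
    total = 0
    for i, letter in enumerate(word):
        code = MORSE[letter]
        total += sum(3 if e == '-' else 1 for e in code)  # key-down time
        total += len(code) - 1                            # intra-letter gaps
        if i > 0:
            total += 3                                    # inter-letter gap
    return total

units = duration_units("CQ")          # "calling any station"
# At 25 WPM (the CWops membership threshold) one unit lasts
# 1.2 / 25 = 0.048 seconds.
seconds = units * 1.2 / 25
```

So "CQ" takes 27 units, or about 1.3 seconds at the club's minimum speed of 25 words per minute.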
https://en.wikipedia.org/wiki/The_CW_Operators%27_Club
Guglielmo Giovanni Maria Marconi, 1st Marquess of Marconi(/mɑːrˈkoʊni/;Italian:[ɡuʎˈʎɛlmodʒoˈvannimaˈriːamarˈkoːni]; 25 April 1874 – 20 July 1937) was an Italian[1][2][3][4]electrical engineer, inventor, and politician known for his creation of a practicalradio wave-basedwireless telegraphsystem.[5]This led to Marconi being credited as the inventor ofradio[6]and sharing the 1909Nobel Prize in PhysicswithKarl Ferdinand Braun"in recognition of their contributions to the development of wireless telegraphy".[7][8][9]His work laid the foundation for the development of radio,television, and all modernwirelesscommunication systems.[10] Marconi was also an entrepreneur and businessman who founded the Wireless Telegraph & Signal Company (which became theMarconi Company) in theUnited Kingdomin 1897. In 1929, Marconi was ennobled as amarquess(marchese) byVictor Emmanuel III. In 1931, he set upVatican RadioforPope Pius XI. Guglielmo Giovanni Maria Marconi[11][12]was born inPalazzo MarescalchiinBolognaon 25 April 1874, the second son of Giuseppe Marconi (anItalianaristocratic landowner fromPorretta Termewho lived in the countryside ofPontecchio) and hisIrishwife Annie Jameson (daughter of Andrew Jameson of Daphne Castle inCounty Wexford, a land agent, and wife Margaret, daughter of James Cochrane of Glen Lodge,Sligo, and sister ofScottishnaturalistJames Sligo Jameson, and granddaughter of John Jameson ofDublin, theScottishfounder ofwhiskeydistillersJameson & Sons).[13][14]His father, who was a widower with a son, Luigi, married Jameson on 16 April 1864 inBoulogne-sur-Mer, France. Alfonso, Marconi's older brother, was born in 1865. Between the ages of two and six, Marconi and Alfonso lived with their mother in the English town ofBedford. Having an Irish mother helped explain Marconi's many activities inGreat Britain and Ireland. When Guglielmo was three years old, on 4 May 1877, Giuseppe Marconi decided to obtain British citizenship. 
Marconi could have thus also opted for British citizenship anytime, as both his parents had British citizenship.[15] Marconi did not attend school as a child and did not go on to formal higher education.[16][17][18]Instead, he learned chemistry, mathematics, and physics at home from a series of private tutors hired by his parents. His family hired additional tutors for Marconi in the winter when they would leave Bologna for the warmer climate ofTuscanyorFlorence.[18]Marconi noted an important mentor was professor Vincenzo Rosa, a high school physics teacher inLivorno.[19][17]Rosa taught the 17-year-old Marconi the basics of physical phenomena as well as new theories on electricity. At the age of 18 and back in Bologna, Marconi became acquainted withUniversity of BolognaphysicistAugusto Righi, who had done research onHeinrich Hertz's work. Righi permitted Marconi to attend lectures at the university and also to use the university's laboratory and library.[20][21] From youth, Marconi was interested in science and electricity. In the early 1890s, he began working on the idea of "wireless telegraphy" – i.e., the transmission of telegraph messages without connecting wires as used by theelectric telegraph. This was not a new idea; numerous investigators and inventors had been exploring wireless telegraph technologies and even building systems using electricconduction,electromagnetic inductionand optical (light) signalling for over 50 years, but none had proved technically and commercially successful. A relatively new development came fromHeinrich Hertz, who, in 1888, demonstrated that one could produce and detectelectromagnetic radiation, based on the work ofJames Clerk Maxwell. At the time, this radiation was commonly called "Hertzian" waves, and is now generally referred to asradio waves.[22] There was a great deal of interest in radio waves in the physics community, but this interest was in the scientific phenomenon, not in its potential as a communication method. 
Physicists generally looked on radio waves as an invisible form of light that could only travel along a line-of-sight path, limiting their range to the visual horizon like existing forms of visual signalling.[23] Hertz's death in 1894 brought published reviews of his earlier discoveries, including a demonstration of the transmission and detection of radio waves by the British physicist Oliver Lodge and an article about Hertz's work by Augusto Righi. Righi's article renewed Marconi's interest in developing a wireless telegraphy system based on radio waves,[24] a line of inquiry that Marconi noted other inventors did not seem to be pursuing.[25] At the age of 20, Marconi began to conduct experiments in radio waves, building much of his own equipment in the attic of his home at the Villa Griffone in Pontecchio (now an administrative subdivision of Sasso Marconi), Italy, with the help of his butler, Mignani. Marconi built on Hertz's original experiments and, at the suggestion of Righi, began using a coherer, an early detector based on the 1890 findings of French physicist Édouard Branly and used in Lodge's experiments, that changed resistance when exposed to radio waves.[26] In the summer of 1894, he built a storm alarm made up of a battery, a coherer, and an electric bell, which went off when it picked up the radio waves generated by lightning. Late one night, in December 1894, Marconi demonstrated a radio transmitter and receiver to his mother, a set-up that made a bell ring on the other side of the room by pushing a telegraphic button on a bench.[27][26] Supported by his father, Marconi continued to read through the literature and picked up on the ideas of physicists who were experimenting with radio waves.
He developed devices, such as portable transmitters and receiver systems, that could work over long distances,[25]turning what was essentially a laboratory experiment into a useful communication system.[28]Marconi came up with a functional system with many components:[29] In the summer of 1895, Marconi moved his experiments outdoors on his father's estate in Bologna. He tried different arrangements and shapes of antenna but even with improvements he was able to transmit signals only up to one half-mile, a distance Oliver Lodge had predicted in 1894 as the maximum transmission distance for radio waves.[30] A breakthrough came in the summer of 1895, when Marconi found that a much greater range could be achieved after he raised the height of his antenna and, borrowing from a technique used in wired telegraphy,groundedhis transmitter and receiver. With these improvements, the system was capable of transmitting signals up to 2 miles (3.2 km) and over hills.[31][32]Themonopole antennareduced the frequency of the waves compared to thedipole antennasused by Hertz, and radiatedvertically polarizedradio waves which could travel longer distances. By this point, he concluded that a device could become capable of spanning greater distances, with additional funding and research, and would prove valuable both commercially and militarily. Marconi's experimental apparatus proved to be the first engineering-complete, commercially successfulradio transmissionsystem.[33][34][35] Marconi applied to the Italian Ministry of Post and Telegraphs, then under the direction of Maggiorino Ferraris,[36]explaining his wireless telegraph machine and asking for funding, but never received a response. 
An apocryphal tale claims that the minister (incorrectly named first as Emilio Sineo, later as Pietro Lacava[37]) wrote "to the Longara" on the document, referring to the insane asylum on Via della Lungara in Rome, but the letter was never found.[38] In 1896, Marconi spoke with his family friend Carlo Gardini, Honorary Consul at the United States Consulate in Bologna, about leaving Italy to go to Great Britain. Gardini wrote a letter of introduction to the Ambassador of Italy in London, Annibale Ferrero, explaining who Marconi was and describing his extraordinary discoveries. In his response, Ambassador Ferrero advised them not to reveal Marconi's results until after a patent was obtained. He also encouraged Marconi to come to Britain, where he believed it would be easier to find the necessary funds to convert his experiments into practical use. Finding little interest or appreciation for his work in Italy, Marconi travelled to London in early 1896 at the age of 21, accompanied by his mother, to seek support for his work. (He spoke fluent English in addition to Italian.) Marconi arrived at Dover, where a customs officer opened his case to find various apparatuses and immediately contacted the Admiralty in London. Amid worries in the UK about Italian anarchists and suspicion that Marconi was importing a bomb, his equipment was destroyed. While in the UK, Marconi gained the interest and support of William Preece, the Chief Electrical Engineer of the General Post Office (the GPO). Marconi applied for a patent on 2 June 1896.
British Patent number 12039, titled "Improvements in Transmitting Electrical impulses and Signals, and in Apparatus therefor", became the first patent for a communication system based on radio waves.[39] Marconi made the first demonstration of his system for the British government in July 1896.[40] A further series of demonstrations for the British followed, and, by March 1897, Marconi had transmitted Morse code signals over a distance of about 3 miles (5 km) across Salisbury Plain. On 13 May 1897, Marconi sent the first ever wireless communication over the open sea: a message was transmitted over the Bristol Channel from Flat Holm Island to Lavernock Point near Cardiff, a distance of 3 miles (4.8 km). The message read "Are you ready".[41] The transmitting equipment was almost immediately relocated to Brean Down Fort on the Somerset coast, stretching the range to 10 miles (16 km). Impressed by these and other demonstrations, Preece introduced Marconi's ongoing work to the general public at two important London lectures: "Telegraphy without Wires", at the Toynbee Hall on 11 December 1896; and "Signalling through Space without Wires", given to the Royal Institution on 4 June 1897.[42][43] Numerous additional demonstrations followed, and Marconi began to receive international attention. In July 1897, he carried out a series of tests at La Spezia, in his home country, for the Italian government. A test for Lloyd's between The Marine Hotel in Ballycastle and Rathlin Island, both in County Antrim in Ulster, Ireland, was conducted on 6 July 1898 by George Kemp and Edward Edwin Glanville.[44] A transmission across the English Channel was accomplished on 27 March 1899, from Wimereux, France to South Foreland Lighthouse, England. Marconi set up an experimental base at the Haven Hotel, Sandbanks, Poole Harbour, Dorset, where he erected a 100-foot-high mast.
He became friends with the van Raaltes, the owners ofBrownsea Islandin Poole Harbour, and his steam yacht, theElettra, was often moored on Brownsea or at The Haven Hotel. Marconi purchased the vessel after the Great War and converted it to a seaborne laboratory from where he conducted many of his experiments. Among theElettra's crew wasAdelmo Landini, his personal radio operator, who was also an inventor.[45] In December 1898, the British lightship service authorised the establishment of wireless communication between theSouth Forelandlighthouse atDoverand the East Goodwinlightship, twelve miles distant. On 17 March 1899, the East Goodwin lightship sent the firstwireless distress signal, a signal on behalf of the merchant vesselElbewhich had run aground onGoodwin Sands. The message was received by the radio operator of the South Foreland lighthouse, who summoned the aid of theRamsgatelifeboat.[46][47] In 1899, Marconi sailed to the United States at the invitation ofThe New York Heraldnewspaper to coverthat year's America's Cupinternational yacht races offSandy Hook, New Jersey. His first demonstration was a transmission from aboard the SSPonce, a passenger ship of thePorto Rico Line.[48]Marconi left forEnglandon 8 November 1899 on theAmerican Line'sSSSaint Paul, and he and his assistants installed wireless equipment aboard during the voyage. Marconi's wireless brought news of theSecond Boer War, which had begun a month before their departure, to passengers at the request of "some of the officials of the American line."[49]On 15 November theSS Saint Paulbecame the first ocean liner to report her imminent return to Great Britain by wireless when Marconi's Royal Needles Hotel radio station contacted her 66 nautical miles off the English coast. 
The first Transatlantic Times, a newspaper containing wireless transmission news from the Needles Station at the Isle of Wight, was published on board the SS Saint Paul before its arrival.[50] At the turn of the 20th century, Marconi began investigating a means to signal across the Atlantic to compete with the transatlantic telegraph cables. Marconi established a wireless transmitting station at Marconi House, Rosslare Strand, County Wexford, in 1901 to act as a link between Poldhu in Cornwall, England, and Clifden in Connemara, County Galway, Ireland. He soon announced that a signal transmitted by the company's new high-power station at Poldhu, Cornwall, had been received at Signal Hill in St. John's, Newfoundland (now part of Canada), on 12 December 1901, using a 500-foot (150 m) kite-supported antenna for reception. The distance between the two points was about 2,200 miles (3,500 km). It was heralded as a great scientific advance, yet there also was – and continues to be – considerable scepticism about this claim. The exact wavelength used is not known, but it is fairly reliably determined to have been in the neighbourhood of 350 metres (frequency ≈ 850 kHz). The tests took place at a time of day during which the entire transatlantic path was in daylight. It is now known (although Marconi did not know then) that this was the worst possible choice: at this medium wavelength, long-distance transmission in the daytime is not possible because of the heavy absorption of the skywave in the ionosphere. It was not a blind test; Marconi knew in advance to listen for a repetitive signal of three clicks, signifying the Morse code letter S. The clicks were reported to have been heard faintly and sporadically. There was no independent confirmation of the reported reception, and the transmissions were difficult to distinguish from atmospheric noise. A detailed technical review of Marconi's early transatlantic work appears in John S. Belrose's work of 1995.
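The quoted wavelength and frequency figures are consistent with the standard relation between wavelength and frequency, f = c/λ. A minimal check (the arithmetic below is a sketch using standard physics, not a calculation from the source):

```python
# Convert the estimated 1901 wavelength to frequency via f = c / wavelength.
C = 299_792_458           # speed of light in vacuum, m/s
wavelength_m = 350        # estimated wavelength of the Poldhu signal

frequency_khz = C / wavelength_m / 1_000
print(round(frequency_khz))  # 857, consistent with the "≈ 850 kHz" estimate
```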
The Poldhu transmitter was a two-stage circuit.[52][53] Feeling challenged by sceptics, Marconi prepared a better-organised and documented test. In February 1902, the SSPhiladelphiasailed west from Great Britain with Marconi aboard, carefully recording signals sent daily from the Poldhu station. The test results producedcoherer-tapereception up to 1,550 miles (2,490 km), and audio reception up to 2,100 miles (3,400 km). The maximum distances were achieved at night, and these tests were the first to show that radio signals formedium waveandlongwavetransmissions travel much farther at night than during the day. During the daytime, signals had been received up to only about 700 miles (1,100 km), less than half of the distance claimed earlier at Newfoundland, where the transmissions had also taken place during the day. Because of this, Marconi had not fully confirmed the Newfoundland claims, although he did prove that radio signals could be sent for hundreds of kilometres (miles), despite some scientists' belief that they were limited essentially to line-of-sight distances. On 17 December 1902, a transmission from the Marconi station inGlace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America. In 1901, Marconi built a station nearSouth Wellfleet, Massachusetts, that sent a message of greetings on 18 January 1903 from United States PresidentTheodore Rooseveltto KingEdward VIIof the United Kingdom. However, consistent transatlantic signalling was difficult to establish.[54] Marconi began to build high-powered stations on both sides of the Atlantic to communicate with ships at sea, in competition with other inventors. In 1904, he established a commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. 
A regular transatlantic radio-telegraph service was finally begun on 17 October 1907[55][56] between Clifden, Ireland, and Glace Bay, but even after this the company struggled for many years to provide reliable communication to others. The role played by Marconi Co. wireless in maritime rescues raised public awareness of the value of radio and brought fame to Marconi, particularly after the sinking of RMS Titanic on 15 April 1912 and RMS Lusitania on 7 May 1915.[57] RMS Titanic radio operators Jack Phillips and Harold Bride were not employed by the White Star Line but by the Marconi International Marine Communication Company. After the sinking of the ocean liner, survivors were rescued by the RMS Carpathia of the Cunard Line.[58] Carpathia, which was 58 miles from Titanic, took a total of 17 minutes to receive and decode the SOS signal sent by Titanic.[59] When Carpathia docked in New York, Marconi went aboard with a reporter from The New York Times to talk with Bride, the surviving operator.[58] After this incident, Marconi gained popularity and became more recognised for his contributions to the field of radio and wireless technology.[60] On 18 June 1912, Marconi gave evidence to the Court of Inquiry into the loss of Titanic regarding marine telegraphy's functions and the procedures for emergencies at sea.[61] Britain's Postmaster-General summed up, referring to the Titanic disaster: "Those who have been saved, have been saved through one man, Mr. Marconi ... and his marvellous invention."[62] Marconi had been offered free passage on Titanic before she sank, but had taken Lusitania three days earlier.
As his daughter Degna later explained, he had paperwork to do and preferred the public stenographer aboard that vessel.[63] Over the years, the Marconi companies gained a reputation for being technically conservative, in particular by continuing to use inefficient spark-transmitter technology, which could be used only for radio-telegraph operations, long after it was apparent that the future of radio communication lay with continuous-wave transmissions, which were more efficient and could be used for audio transmissions. Somewhat belatedly, the company did begin significant work with continuous-wave equipment in 1915, after the introduction of the oscillating vacuum tube (valve). The New Street Works factory in Chelmsford was the location for the first entertainment radio broadcasts in the United Kingdom in 1920, employing a vacuum tube transmitter and featuring Dame Nellie Melba. In 1922, regular entertainment broadcasts commenced from the Marconi Research Centre at Great Baddow, forming the prelude to the BBC. In that same year, at a private gathering with Florence Tyzack Parbury, Marconi spoke of the close association of aviation and wireless telephony, and even of interplanetary wireless communication. In 1924, the Marconi Company co-established the Unione Radiofonica Italiana (now RAI).[64]

Have I done the world good, or have I added a menace?[65]

In 1914, Marconi was made a Senator in the Senate of the Kingdom of Italy and appointed Honorary Knight Grand Cross of the Royal Victorian Order in the UK. During World War I, Italy joined the Allied side of the conflict, and Marconi was placed in charge of the Italian military's radio service. He attained the rank of lieutenant in the Italian Royal Army and of commander in the Regia Marina.
In 1929, he was made amarquessby KingVictor Emmanuel III.[66] While helping to develop microwave technology, theMarcheseMarconi suffered nineheart attacksin the span of three years preceding his death.[67]Marconi died in Rome on 20 July 1937 at age 63, following the ninth, fatal, heart attack, and Italy held astate funeralfor him. As a tribute, shops on the street where he lived were "Closed for national mourning".[68]In addition, at 6 pm the next day, the time designated for the funeral, transmitters around the world observed two minutes of silence in his honour.[69]The British Post Office also sent a message requesting that all broadcasting ships honour Marconi with two minutes of broadcasting silence.[68]His remains are housed in theMausoleum of Guglielmo Marconiin the grounds of Villa Griffone atSasso Marconi, Emilia-Romagna, which assumed that name in his honour in 1938.[70] In 1943, Marconi's elegant sailing yacht, theElettra, was commandeered and refitted as a warship by the German Navy. She was sunk by theRAFon 22 January 1944. After the war, the Italian Government tried to retrieve the wreckage, to rebuild the boat, and the wreckage was removed to Italy. Eventually, the idea was abandoned, and the wreckage was cut into pieces which were distributed amongst Italian museums. 
In 1943, theSupreme Court of the United Stateshanded down a decision on Marconi's radio patents restoring some of the prior patents ofOliver Lodge,John Stone Stone, andNikola Tesla.[71][72]The decision was not about Marconi's original radio patents[73]and the court declared that their decision had no bearing on Marconi's claim as the first to achieve radio transmission, just that since Marconi's claim to certain patents was questionable, he could not claim infringement on those same patents.[74]There are claims the high court was trying to nullify a World War I claim against the United States government by the Marconi Company via simply restoring the non-Marconi prior patent.[71] Marconi was a friend of Charles van Raalte and his wife Florence, the owners ofBrownsea Island; and of Margherita, their daughter, and in 1904 he met herIrishfriend,The Hon.Beatrice O'Brien (1882–1976), a daughter ofThe 14th Baron Inchiquin. On 16 March 1905, Beatrice O'Brien and Marconi were married, and spent their honeymoon on Brownsea Island.[75]They had three daughters, Lucia (born and died 1906), Degna (1908–1998), and Gioia (1916–1996), and a son, Giulio, 2nd Marquess Marconi (1910–1971). In 1913, the Marconi family returned to Italy and became part of Rome society. Beatrice served as a lady-in-waiting toQueen Elena. At Marconi's request, his marriage to Beatrice was annulled on 27 April 1927, so he could remarry.[76]Marconi and Beatrice had divorced on 12 February 1924 in the free city ofFiume(Rijeka).[12] Marconi went on to marryMaria Cristina Bezzi-Scali[it](2 April 1900 – 15 July 1994), the only daughter of Francesco,CountBezzi-Scali. To do this he had to be confirmed in theCatholicfaith and became a devout member of the Church.[77]He was baptised Catholic but had been brought up as a member of theAnglican Church. On 12 June 1927, Marconi married Maria Cristina in a civil service, with a religious ceremony performed on 15 June. Marconi was 53 years old and Maria Cristina was 26. 
They had one daughter, Maria Elettra Elena Anna (born 1930), who married Prince Carlo Giovannelli (1942–2016) in 1966; they later divorced. For unexplained reasons, Marconi left his entire fortune to his second wife and their only child, and nothing to the children of his first marriage.[78] In 1931, Marconi wanted to personally introduce the first radio broadcast of a Pope, Pius XI, and announced at the microphone: "With the help of God, who places so many mysterious forces of nature at man's disposal, I have been able to prepare this instrument which will give to the faithful of the entire world the joy of listening to the voice of the Holy Father".[79] Marconi joined the National Fascist Party in 1923.[80] In 1930, Italian dictator Benito Mussolini appointed him President of the Royal Academy of Italy, which made Marconi a member of the Fascist Grand Council. Marconi was an apologist for fascist ideology and actions such as the Italian invasion of Ethiopia in the Second Italo-Abyssinian War.[81] In a lecture he stated: "I reclaim the honour of being the first fascist in the field of radiotelegraphy, the first who acknowledged the utility of joining the electric rays in a bundle, as Mussolini was the first in the political field who acknowledged the necessity of merging all the healthy energies of the country into a bundle, for the greater greatness of Italy".[82] Not one Jew was allowed to join the Royal Academy during Marconi's tenure as president from 1930, three years before Adolf Hitler took power in Germany and eight years before Benito Mussolini's race laws brought his regime's antisemitism into the open.[83] Numerous places and organisations are named after Marconi. The asteroid 1332 Marconia is named in his honour, as is a large crater on the far side of the Moon. The Marconi Wireless Company of America, the world's first radio company, was incorporated in Roselle Park, New Jersey, on West Westfield Avenue, on 22 November 1899.
https://en.wikipedia.org/wiki/Guglielmo_Marconi
In amateur radio, high-speed telegraphy (HST) is a form of radiosport that challenges amateur radio operators to accurately receive and copy, and in some competitions to send, Morse code transmissions sent at very high speeds. This event is most popular in Eastern Europe. The International Amateur Radio Union (IARU) sponsors most of the international competitions. The first international high-speed telegraphy competition was the HST European Championship held in Moscow, Russia, in 1983. Two more HST European Championships were held: one in 1989 in Hannover, Germany, and another in 1991 in Neerpelt, Belgium. The first HST World Championship was held in Siófok, Hungary, in 1995. A world championship has been held in every odd-numbered year since then. Most international, national, and local HST competitions are held in the countries of the former Eastern Bloc. Every world championship has been held in Europe. While many competitors are licensed amateur radio operators, there is no requirement that competitors have an amateur radio license, and many pursue the sport without one. There are three main competitive events at HST meets. One standard event is the copying or sending of five-character groups of text. Two of the events are based on simulations of amateur radio activity and are referred to as the Radioamateur Practicing Tests (RPT). The RPT includes the copying of amateur radio call signs and a "pileup" competition, where competitors must distinguish between call signs sent during several simultaneous transmissions. Not all competitors are required to enter every competition, and some competitors specialize in just one competitive event.[1] In the five character groups event, random letters and numbers are sent in Morse code, five characters at a time, at a high speed. Separate competitions are held for the reception of just the twenty-six letters of the Latin alphabet, just the ten Arabic numerals, or a mixed content of letters, numbers, and some punctuation symbols.
Competitors may choose to record the text by hand on paper or by typing on a computer keyboard. The competition starts with one minute of transmission sent at an initial speed defined for the entry category (usually 50 letters per minute for juniors and 80 letters per minute for the other age categories). After each test, the competitors' copy is judged for errors. Subsequent tests are each conducted at an increased speed until no competitor remains who can copy the text without excessive error.[1] In addition to reception tests, some competitions feature transmission tests where competitors must try to send five character groups in Morse code as fast as possible. Competitors send a printed message of five character groups at a specific speed, which is judged for its accuracy by a panel of referees. Like the receiving tests, there are separate competitions for sending five character groups of just the twenty-six letters of the Latin alphabet, just the ten Arabic numerals, or a mixed content of letters, numbers, and some punctuation symbols. Most transmission tests restrict the type of equipment that may be used to send the Morse code message.[1] The Amateur Radio Call Sign Receiving Test uses a software program called RufzXP that generates a score for each competitor. Rufz is the abbreviation of the German word "Rufzeichen-Hören", which means "listening to call signs". In the RufzXP program, competitors listen to an amateur radio call sign sent in Morse code and must enter that call sign with the computer keyboard. If the competitor types in the call sign correctly, their score improves and the speed at which the program sends subsequent call signs increases. If the competitor types in the call sign incorrectly, the score is penalized and the speed decreases. Only one call sign is sent at a time, and the event continues for a fixed number of call signs (usually 50).
Competitors can choose the initial speed at which the program sends the Morse code, and the winner is the competitor with the highest generated score.[1] The Pileup Trainer Test simulates a "pileup" situation in on-air amateur radio operating, where numerous stations attempt to establish two-way contact with one particular station at the same time. This competition uses a software program called MorseRunner. In the MorseRunner software, more than one amateur radio call sign is sent at a time. Each call sign is sent in Morse code generated at different audio frequencies and speeds, timed to overlap each other. Competitors must record as many of the call signs as they can during a fixed period of time. They may choose to do this either by recording the call signs by hand on paper, or by typing them in with a computer keyboard. The winner is the competitor with the most correctly recorded call signs.[1] The rules of international and European championships are defined in the document IARU Region 1 Rules for High Speed Telegraphy Championships. HST competitions generally separate the competitors into different categories based on age and gender. The following are the entry categories specified in the IARU rules used for European and World Championships: Note that there is an additional male category, which is justified by the high number of participants in the corresponding age group. A maximum of 18 competitors from those 9 categories can take part as a national team. IARU World Championships have taken place in odd-numbered years since 1995. Since 2004, an IARU Region 1 Championship has taken place in each even-numbered year. The 13th IARU World HST Championship was held in Herceg Novi, Montenegro, from 21 to 25 September 2016; competing in 9 categories with 8 types of tests, there were more than 120 competitors from 21 countries around the world. The IARU Region 1 HST working group maintains a list of HST world records,[7] set at official IARU HST competitions.
Top speeds vary strongly between the different events of the competition and categories. While reception and transmission of letter groups are limited to approximately 300 characters per minute, mainly due to the physiological difficulty of writing or sending at high speeds respectively, the maximum speeds in the RufzXP competition are more than twice as fast. Note that the system used to measure telegraphy speed at IARU HST events has changed: before 2004, the PARIS standard was used, which has since been replaced by real characters. Old records have been recalculated accordingly. The sum of all team scores of the top ten nations from all HST events since 1999 is tabulated below. Note that some teams did not take part in all competitions. Last updated in September 2009 after the World Championships in Obzor, Bulgaria.
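The PARIS standard mentioned above measures speed by taking the word "PARIS", which occupies exactly 50 elementary dot units under standard Morse timing, as the reference word. A minimal sketch of that arithmetic (the timing rules below are the standard Morse conventions, not taken from the IARU rules themselves):

```python
# Verify that the reference word "PARIS" spans 50 elementary dot units under
# standard Morse timing: dot = 1 unit, dash = 3, gap between elements of a
# character = 1, gap between characters = 3, gap between words = 7.
MORSE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "..."}

def word_units(word: str) -> int:
    units = 0
    for i, ch in enumerate(word):
        code = MORSE[ch]
        units += sum(1 if e == "." else 3 for e in code)  # dots and dashes
        units += len(code) - 1                            # intra-character gaps
        if i < len(word) - 1:
            units += 3                                    # inter-character gap
    return units + 7                                      # trailing word gap

assert word_units("PARIS") == 50
# At w words per minute, one dot therefore lasts 60 / (50 * w) seconds:
print(60 / (50 * 20))  # 0.06 s per dot at 20 words per minute
```

Measuring by "real characters" instead counts the characters actually sent, which gives different figures for dense mixed-content texts; hence the recalculation of the old records.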
https://en.wikipedia.org/wiki/High-speed_telegraphy
Hog-Morse was telegraphers' jargon for the tendency of inexperienced telegraph operators to make errors when sending or receiving in Morse code. The term was current in the United States during the period when American Morse code was still in use. It is so called after one example (here given in International Morse but most likely originating in American Morse): becomes with just one subtle error in timing. The now-defunct American Morse ("railroad code") is different from the International Morse Code currently in use for radio telegraphy. With American Morse it was far more difficult to avoid timing errors, because there were more symbol timings than there are in International Morse, and some were difficult to distinguish because of their closeness; International code has only two symbols, dots (▄) and dashes (▄▄▄), but the American code had three lengths of dash and two lengths of spaces between dots. For example, the dashes used for "L" (▄▄▄▄) and "T" (▄▄) in American Morse are distinct. Also, in International Morse the space between symbols within a character is always the same, but American Morse has two different spaces. For example, the letters "S" (▄ ▄ ▄), "C" (▄ ▄  ▄), and "R" (▄  ▄ ▄) all consist of three dots, but with slightly different timing between the dots in each case.[1][2] A frequently quoted, but possibly apocryphal, story from the historical period concerns the similarity of L (▄▄▄▄) and T (▄▄) in the American code. A company in Richmond, Virginia received a request for quotation for a load of UNDRESSED STAVES (rough sawn wood intended for the manufacture of barrels), but the telegraph operator had sent an "L" in place of the intended "T", thus sending an order for UNDRESSED SLAVES. The company replied reminding the customer that slavery had been abolished.[3] Another American Morse example given in the literature is PLEASE FILL ME IN becoming 6NAZ FIMME Q.[4] One commentator has called this the 19th-century autocorrect.[5]
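The timing sensitivity is easy to illustrate even in International Morse, where spacing alone distinguishes some characters. The example below is constructed for illustration, not one of the historical cases:

```python
# Decode element groups (one group per received character) with a small
# International Morse table, showing how spacing alone changes the reading.
DECODE = {".": "E", "-": "T", "..": "I", "...": "S", ".-": "A", ".-..": "L"}

def decode(groups):
    return "".join(DECODE[g] for g in groups)

# Sent as one character, dot-dash is "A" ...
assert decode([".-"]) == "A"
# ... but a slightly stretched gap between the elements yields "ET".
assert decode([".", "-"]) == "ET"
# Likewise three dots are "S" when run together, "EEE" when spaced apart.
assert decode(["..."]) == "S"
assert decode([".", ".", "."]) == "EEE"
```

American Morse compounded this by also distinguishing characters by dash length and by two different intra-character spacings, multiplying the ways a small timing slip could corrupt a message.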
https://en.wikipedia.org/wiki/Hog-Morse
The Instructograph was a paper tape-based machine used for the study of Morse code. The paper tape mechanism consisted of two reels which passed a paper tape across a reading device that actuated a set of contacts, which changed state depending on the presence or absence of hole punches in the tape. The contacts could operate an audio oscillator for the study of International Morse Code (used by radio), or a sounder for the study of American Morse Code (used by railroads), or a light bulb (as an Aldis lamp, used by the Navy for ship-to-ship signalling, or as a heliograph). The Instructograph was in production from about 1920 through 1983. The first US patent, No. 1,725,145, was granted to Otto Bernard Kirkpatrick, of Chicago, IL, on August 20, 1929. Most units were wound by hand or plugged into a wall outlet. Most outlet-powered Instructographs had a set of knobs that controlled the speed and volume. The latest version of the Instructograph was the model 500, which included a built-in solid-state oscillator. This model was available for purchase as new through at least 1986.
https://en.wikipedia.org/wiki/Instructograph
Akeyeris an electronic device used for signaling by hand, by way of pressing one or more switches.[1]The technical termkeyerhas two very similar meanings, which are nonetheless distinct: One fortelegraphyand the other for accessorydevices built for computer-human communication: Intelegraphy, so-callediambickeysdeveloped out of an earlier generation of novel side-to-side, double-contact keys (called "bushwhackers") and later, mechanical semi-automatic keys (called "bugs"). Semi-automatic keys were an innovation that had an impulse driven, horizontal pendulum mechanism that (only) created a series of correctly timed"dits". The pendulum would repeatedly tap a switch contact for as long as its control lever was held to the right (or until the impulse from the thumb push was exhausted); telegraphers were obliged to time the"dahs"themselves, by pressing the lever to the left with their knuckle, one press per "dah". When the lever is released, springs push it back to center, break the switch contact, and halt and reset the pendulum. Because the "dits" are created automatically by the pendulum mechanism, but the "dahs" are keyed the old-fashioned way, the keys are called "semi-automatic". (Modern electronic keyers create both the "dits" and the "dahs" automatically, as long as one of the switches is in contact, and are called "fully-automatic".) More than just convenience, the keys were needed for medical reasons: Telegraphers would often develop a form ofrepetitive stress injury, which at that time was called "glass arm" by telegraphers, or "telegraphers’ paralysis" in medical literature. It was common and was caused by forcefully "pounding brass" up-and-down on conventional telegraph keys. Keys built for side-to-side motion would neither cause nor aggravate the injury, and allowed disabled telegraphers to continue in their profession. 
With the advent of solid-state electronics, the convenience of fully automatic keying became possible by simulating and extending the operation of the old mechanical keys, and special-purpose side-to-side keys were made to operate the electronics, called iambic telegraph keys after the rhythm of telegraphy. In iambic telegraphy the "dot" and the "dash" are separate switches, activated either by one lever or by two separate levers. For as long as the telegrapher holds the lever(s) to the left or right, one of the two telegraph key switches is in contact; the electronics in the keyer respond by sending a continuous stream of "dits" or "dahs". The operator sends codes by choosing the direction the lever is held, and how long it is held on each side. Even if the operator swings from side to side slightly erratically, within some limits the electronics will nonetheless produce perfectly timed codes.[citation needed] Modern keys with only one lever, which swings horizontally between two contacts and returns to center when released, are called "single paddle keys"; they are also called "bushwhackers" or "sideswipers", the same as the old-style double-contact single-lever telegraph keys. The double-lever keys are called "iambic keys", "double-paddle keys", or "squeeze keys". The name "squeeze key" arises because both levers can be pressed at the same time, in which case the electronics produce a string of "dit-dah-dit-dah-..." (iambic rhythm, hence "iambic key") or "dah-dit-dah-dit-..." (trochaic rhythm), depending on which side makes contact first.
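The squeeze-keying behavior described above can be sketched in a few lines of Python. This is a simplified, hypothetical model in the style of a mode-A keyer, not any real keyer's firmware; timing is idealized so that a dit lasts 1 unit and a dah 3 units, each followed by a 1-unit gap.

```python
# Minimal sketch of fully automatic "iambic" keying logic, assuming
# idealized unit timing. The paddle-sampling model is illustrative,
# not taken from any real keyer's firmware.

def iambic_elements(paddle_states, max_elements=20):
    """paddle_states: function of unit time t -> (dit_pressed, dah_pressed).
    Returns the sequence of elements produced until both paddles release."""
    elements = []
    t = 0
    last = None
    while len(elements) < max_elements:
        dit, dah = paddle_states(t)
        if dit and dah:                      # "squeeze": alternate dit and dah
            nxt = "dah" if last == "dit" else "dit"
        elif dit:
            nxt = "dit"
        elif dah:
            nxt = "dah"
        else:
            break                            # both paddles released: stop
        elements.append(nxt)
        last = nxt
        t += (1 if nxt == "dit" else 3) + 1  # element length plus gap
    return elements

# Holding only the dit paddle yields a continuous stream of dits:
print(iambic_elements(lambda t: (t < 8, False)))   # ['dit', 'dit', 'dit', 'dit']
# Squeezing both paddles, dit side first, yields the iambic di-dah rhythm:
print(iambic_elements(lambda t: (t < 12, t < 12))) # ['dit', 'dah', 'dit', 'dah']
```

Releasing both paddles stops the stream, mirroring how the pendulum of a mechanical "bug" halts when its lever returns to center.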
Both types of keys have two distinct contacts, are wired to the same type of (stereo) plug, and can operate the same electronic keyer (for any commercial keyer made in the last 40 years or so), which itself connects into the same (monophonic) jack on a radio that one of the old-fashioned telegraph keys plugs into.[citation needed] Fully automatic electronic keying became popular during the 1960s; at present most Morse code is sent via electronic keyers, although some enthusiasts prefer to use a traditional up-and-down "straight key". Historically appropriate old-fashioned keys are used by naval museums for public demonstrations and for re-enacting Civil War events, as well as in special radio contests arranged to promote the use of old-style telegraph keys. Because of the popularity of iambic keys, most transmitters introduced into the market within the current century have keyer electronics built in, so an iambic key plugs directly into a modern transmitter, which also functions as the electronic keyer. (Many modern electronic keyers and nearly all radio built-in keyers electrically detect whether the inserted plug belongs to an old-type, single-switch key [monophonic plug] or a new-type, double-switch key [stereo plug], and respond appropriately for the key that is wired to the inserted plug.)[citation needed] In a completely automated teleprinter or teletype (RTTY) system, the sender presses keys on a typewriter-style keyboard to send a character data stream to a receiver, and computation alleviates the need for timing to be done by the human operator. In this way, much higher typing speeds are possible. This is an early instance of a multi-key user-input device, as are computer keyboards (which, incidentally, are what one uses for modern RTTY). The keyers discussed below are a smaller, more portable form of user input device.
Modern computer interfacekeyerstypically have a large number of switches but not as many as a full-sizekeyboard; typically between four and fifty.[1]A keyer differs from a keyboard in the sense that it lacks a traditional "board"; the keys are arranged in a cluster[1]which is often held in the hand. An example of a very simple keyer is a singletelegraph key, used forsendingMorse code. In such a use, the term "to key" typically means to turn on and off acarrier wave. For example, it is said that one "keys the transmitter" by connecting some low-power stage of theamplifierin atransmitterto its follow-on stage, through the telegraph key. When this concept of aniambic telegraph keywas introduced to inventorSteve Mannin the 1970s, he mistakenly heardiambicasbiambic. He then generalized the nomenclature to include various "polyambic" or "multiambic" keyers, such as a "pentambic" keyer (five keys, one for each finger and the thumb), and "septambic" (four finger and three thumb buttons on a handgrip). These systems were developed primarily for use in early, experimental forms ofwearable computing, and have also been adapted tocyclingwith aheads-up displayin projects like BEHEMOTH bySteven K. Roberts. Mann (who primarily works incomputational photography) later utilized the concept in a portablebackpack-based computer and imaging system, WearCam, which he invented for photographic light vectoring.[2] Computer interface keyers are typically one-handed grips, often used in conjunction with wearable computers.[citation needed]Unlike keyboards, the wearable keyerhas no board upon which the switches are mounted. Additionally, by providing some other function – such as simultaneous grip of flash light and keying – the keyer is effectively hands-free, in the sense that one would still be holding the light source anyway. 
Chorded or chording keyboards have also been developed; they are intended to be used while seated, with multiple keys mounted to a board rather than to a portable grip. One type of these, the so-called half-QWERTY layout, uses only minimal chording, requiring the space bar to be pressed down if the alternate hand is used. It is otherwise a standard QWERTY keyboard of full size. It, and many other innovations[example needed] in keyboard controls, were designed to deal with hand disabilities in particular.[citation needed]
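The cluster-of-switches idea behind multi-key keyers such as the "pentambic" grip mentioned above can be illustrated with a chord-encoding sketch: five switches give 2**5 - 1 = 31 non-empty combinations, more than enough for an alphabet. The chord-to-letter assignment below is purely hypothetical; real devices define their own mappings.

```python
# Illustrative sketch of chorded input on a five-switch ("pentambic")
# keyer. Each set of simultaneously pressed keys forms a 5-bit chord.
# The chord-to-letter table is hypothetical, for demonstration only.

def chord_value(pressed):
    """pressed: iterable of key indices 0..4 held simultaneously."""
    value = 0
    for key in set(pressed):
        value |= 1 << key
    return value

# Map chord values 1..26 onto A..Z for demonstration purposes.
CHORD_TO_CHAR = {v: chr(ord("A") + v - 1) for v in range(1, 27)}

print(CHORD_TO_CHAR[chord_value([0])])     # keys {0}   -> value 1 -> "A"
print(CHORD_TO_CHAR[chord_value([0, 1])])  # keys {0,1} -> value 3 -> "C"
```

Because whole chords, rather than single key presses, select characters, a small hand-held cluster can cover a full alphabet without a board.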
https://en.wikipedia.org/wiki/Keyer
Morse codeabbreviationsare used to speed up Morse communications by foreshortening textual words and phrases. Morse abbreviations are short forms, representing normal textual words and phrases formed from some (fewer)characterstaken from the word or phrase being abbreviated. Many are typicalEnglishabbreviations, or shortacronymsfor often-used phrases. Morse code abbreviations are not the same asprosigns. Morse abbreviations are composed of (normal) textual alpha-numeric character symbols with normal Morse code inter-character spacing; the character symbols in abbreviations, unlike the delineated character groups representing Morse code prosigns, are not "run together" orconcatenatedin the way most prosigns are formed. Although a few abbreviations (such asSXfor "dollar") are carried over from formercommercial telegraph codes, almost all Morse abbreviations arenotcommercial codes. From 1845 until well into the second half of the 20th century,commercial telegraphic codebooks were used to shorten telegrams, e.g.PASCOELA= "Locals have plundered everything from the wreck."[1]However, these cyphers are typically "fake" words six characters long, or more, used for replacing commonly used whole phrases, and are distinct from single-word abbreviations. The following Table of Morse code abbreviations and further references toBrevity codessuch as92 Code,Q code,Z code, andR-S-T systemserve to facilitate fast and efficient Morse code communications. To make Morse code communications faster and more efficient, there are many internationally agreed patterns or conventions of communication which include: extensive use of abbreviations, use ofbrevity codessuch as92 Code,RST code,Q code,Z codeas well as the use ofMorse prosigns. The skills required to have efficient fast conversations with Morse comprise more than simply knowing the Morse code symbols for the alphabet and numerals. 
Skilled telegraphists must also know many traditional International Morse code communications conventions. In the following example of a typical casual Morse code conversation between two stations there is extensive use of such: Morse code abbreviations, brevity codes,Morse procedural signs, and other such conventions. An example casual Morse code (CW) conversation between Station S1ABC and Station S2YZ is illustrated in the following paragraphs. Here the actual Morse code information stream sent by each station (S1ABC and S2YZ) is shown in bold face small capitals type, and is followed below each bold face transmission by an indentedinterpretationof the message sent, together with short explanations of the codes. These translations[5]and explanations are shown below each station's indicated transmission data stream. S1ABC transmits an open call in Morse:CQ CQ CQ DE S1ABCRNK S2YZ responds to the call by transmitting the short Morse reply:S1ABC DE S2YZKN S1ABC transmits Morse message:S2YZ DE S1ABC = GA DR OM UR RST 5NN HR = QTH ALMERIA = OP IS JOHN = HW? S2YZ DE S1ABCKN S2YZ transmits Morse message:S1ABC DE S2YZ = TNX FB RPRT DR OM JOHN UR 559 = QTH BARCELONA = NM IS ANDY S1ABC DE S2YZKN S1ABC transmits Morse message:S2 DE S1ABC = OK TNX QSO DR ANDY = 73 ES HPE CUAGN S2YZ DE S1ABCKN S2YZ sends Morse message:S1ABC DE S2YZ = R TU CUAGN 73 S1ABC DE S2YZRNSK In International Morse code there is nodistinctdot-dash sequence defined only for the mathematical equal sign [=]; rather the same code (▄▄▄ ▄ ▄ ▄ ▄▄▄ordah di di di dah) is shared bydouble hyphen[=] and theprocedural signforsection separatornotated asBT. It is fairly common in theRecommended International Morse Codefor punctuation codes to be shared with prosigns. 
For example, the code for plus or cross ([+] =▄ ▄▄▄ ▄ ▄▄▄ ▄) is the same as the prosign forend of telegram, and the widely used but non-ITU"Over to you only" prosignKNis the official code for open parentheses [(] orleft bracket.[4] The listener is required to distinguish the meaning by context. In the example casual conversation between two station operators, above, the Morse transmissions show the equal sign [=] in the same way that a simple electronic automatic Morse code reader with a one- or two-line display does: It can't distinguish context so it always displays the math symbol. It would also display an open parenthesis [(] for theover to you onlyprosign (KN=▄▄▄ ▄ ▄▄▄ ▄▄▄ ▄). The use of theend of sectionprosignBTin casual exchanges essentially indicates a new paragraph in the text or a new sentence, and is a little quicker to send than afull stop([.] =▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄) required in telegrams. Normally an operator copying Morse code by hand or typewriter would decide whether the equal sign [=] or the "new section" prosignBTwas meant and startnew paragraphin the recorded text upon reception of the code. This new paragraph copying convention is illustrated in the example conversation in the prior section. When decoding in one's head, instead of writing text on paper or into a computer file, the receiving operator copying mentally will interpret theBTprosign for either a mental pause, or to jot down for later reference a short word or phrase from the information being sent. Rag cheweris a name applied to amateur radio Morse code operators who engage in informal Morse code conversations (known aschewing the rag) while discussing subjects such as: The weather, their location, signal quality, and their equipment (especially the antennas being used). 
Meaningful rag chewing between fluent Morse code operators having different native languages is possible because of a common language provided by theprosigns for Morse code, the InternationalQ code,Z code,RST code, the telegraph eraPhillips Codeand92 codes, and many well known Morse code abbreviations including those discussed in this article. Together all of these traditional conventions serve as a somewhat cryptic but commonly understood language (Lingua Franca) within the worldwide community of amateur radio Morse code operators. These codes and protocols efficiently encode many well known statements and questions from many languages into short simple character groups which may be tapped out very quickly. The internationalQ codefor instance encodes literally hundreds of full normal language sentences and questions in short three character codes each beginning with the characterQ. For example, the code wordQTH...meansMy transmitting location is... , which radio operators typically take instead to meanMy home is... . If this code word is followed by a question mark asQTH?it meansWhat is your transmitting location? Typically very few full words will be spelled out in Morse code conversations. Similar to phonetexting, vowels are often left out to shorten transmissions and turn overs. Other examples, of internationally recognized uses of Morse code abbreviations and well known code numbers, such as those of thePhillips Codefrom past eras of telegraph technology, are abbreviations such asWXfor weather andSXfor dollar, and fromwire signalcodes, the numbers73forbest regardsand88forlove and kisses. These techniques are similar to, and often faster than, texting on modern cellphones. 
Using this extensive lingua franca, which is widely understood across many languages and cultures, surprisingly meaningful Morse code conversations can be conducted efficiently with short transmissions, independently of native languages, even between operators who cannot actually communicate by voice because of language barriers. With heavy use of the Q code and Morse code abbreviations, such conversations can readily occur. Note that in the preceding example conversation very few full English words were used; in fact, in the above example S1 and S2 might not speak the same native language. Lengthy or detailed conversations, of course, cannot be accomplished by radio operators with no common language. Contesters often use a very specialized and even shorter format for their contacts. Their purpose is to process as many contacts as possible in a limited time (e.g. 100–150 contacts per hour).
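The role of abbreviation tables like those discussed above can be shown with a small lookup sketch. The entries are abbreviations mentioned in this article (WX, SX, 73, 88, QTH); the `expand()` helper and its word-by-word substitution are illustrative only, since operators actually interpret codes in context.

```python
# A toy lookup table showing how Morse abbreviations and brevity codes
# stand in for full phrases. Entries are drawn from this article; the
# word-by-word expansion is a deliberate simplification.

ABBREVIATIONS = {
    "WX": "weather",
    "SX": "dollar",
    "73": "best regards",
    "88": "love and kisses",
    "QTH": "my location is",
}

def expand(transmission):
    """Replace each known abbreviation in a received text with its phrase."""
    return " ".join(ABBREVIATIONS.get(word, word)
                    for word in transmission.split())

print(expand("QTH ALMERIA = WX FB = 73"))
# "my location is ALMERIA = weather FB = best regards"
```

A real operator's "table" is memorized, of course, and far larger, spanning the Q code, Z code, and the Phillips Code number conventions.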
https://en.wikipedia.org/wiki/Morse_code_abbreviations
Morse code mnemonics are systems to represent the sound of Morse characters in a way intended to be easy to remember. Since every one of these mnemonics requires a two-step mental translation between sound and character, none of these systems is useful for sending manual Morse at practical speeds. Amateur radio clubs can provide resources to learn Morse code. Visual mnemonic charts have been devised over the ages. Baden-Powell included one in the Girl Guides handbook[1] in 1918. Here is a more up-to-date version, ca. 1988: This visual map of dits and dahs is arranged from top to bottom by character length (the first layer is just one sound, a dit or a dah for E or T; the second layer contains letters formed by two sounds; the third layer, letters formed by three sounds; and so on), with dits to the left and dahs to the right. Other visual mnemonic systems have been created for Morse code, mapping the elements of the Morse code characters onto pictures for easy memorization. For instance, "R" (▄ ▄▄▄ ▄) might be represented as a "racecar" seen in a profile view, with the two wheels of the racecar being the dits and the body being the dah. Syllabic mnemonics are based on the principle of associating a word or phrase with each Morse code letter, with stressed syllables standing for a dah and unstressed ones for a dit. There is no well-known complete set of syllabic mnemonics for English, but various mnemonics do exist for individual letters. Another technique associates a word with each character. For a letter in the alphabet, the associated word will usually begin with the same letter. In that word, tall letters (those ascending above the mean line or descending below the baseline: b, d, f, g, h, j, k, l, p, q, t, or y) and capital letters represent dashes, while short letters (a, c, e, i, m, n, o, r, s, u, v, w, x, z) represent dots. To recall the Morse code for a character, try to visualize the word. The sentence mnemonic below uses the same mapping from tall and short letters to dashes and dots.
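The tall-letter rule is mechanical enough to check in code: capitals and letters with ascenders or descenders map to dashes, all other letters to dots. The example words "at", "No" and "aha" below are illustrative choices, not from any published mnemonic set.

```python
# Apply the tall-letters-are-dashes mnemonic rule: uppercase letters and
# the listed ascender/descender letters become dahs ("-"), the remaining
# short letters become dits ("."). Example words are illustrative only.

TALL = set("bdfghjklpqty")

def word_to_morse(word):
    """Convert a mnemonic word to the dot-dash pattern it encodes."""
    return "".join("-" if (c.isupper() or c in TALL) else "."
                   for c in word)

print(word_to_morse("at"))   # ".-"  (the code for A)
print(word_to_morse("No"))   # "-."  (the code for N)
print(word_to_morse("aha"))  # ".-." (the code for R)
```

Note that the word need not begin with the letter it encodes for the rule to work; beginning with that letter is just the usual memory aid.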
Rather than each word starting with the letter it represents, each word is positioned in the 26-word-long sentence according to the position of the letter it represents in the alphabet:[5] "my love life has a vibe, the same as edgy pop star DJ «Dr BBQ» adds — glad she won't cut away all good gold lyre!" In Czech, the mnemonic device to remember letters in Morse code lies in remembering words or short phrases that begin with each appropriate letter and have a long vowel (i.e. á é í ó ú ý) for every dash and a short vowel (a e i o u y) for every dot. Additionally, some other sets of words with a particular theme have been thought up in Czech folklore, such as the following alcohol-themed set: In Polish, which does not distinguish long and short vowels, Morse mnemonics are also words or short phrases that begin with each appropriate letter, but a dash is coded as a syllable containing an "o" (or "ó"), while a syllable containing another vowel codes for a dot. For some letters, multiple mnemonics are in use; the table shows one example. The Hebrew system was invented in 1922 by Zalman Cohen, a communication soldier in the Haganah organization. The hiriq (/i/ vowel) represents a dot and the patah or qamatz (/a/ vowel) represents a dash. In Indonesia, one mnemonic commonly taught in Scouting is remembering words that begin with each appropriate letter, substituting the o vowel for every dash and other vowels (a, i, u, and e) for every dot.
https://en.wikipedia.org/wiki/Morse_code_mnemonics
TheInternational Radiotelephony Spelling Alphabetor simply theRadiotelephony Spelling Alphabet, commonly known as theNATO phonetic alphabet, is the most widely used set of clear-code words for communicating the letters of theLatin/Roman alphabet. Technically aradiotelephonicspelling alphabet, it goes by various names, includingNATO spelling alphabet,ICAO phonetic alphabet, andICAO spelling alphabet. TheITU phonetic alphabet and figure codeis a rarely used variant that differs in the code words for digits. Although spelling alphabets are commonly called "phonetic alphabets", they are not phonetic in the sense ofphonetic transcriptionsystems such as theInternational Phonetic Alphabet. To create the code, a series of international agencies assigned 26 clear-code words (also known as "phonetic words")acrophonicallyto the letters of theLatin alphabet, with the goal that the letters and numbers would be easily distinguishable from one another over radio and telephone. The words were chosen to be accessible to speakers of English, French and Spanish. Some of the code words were changed over time, as they were found to be ineffective in real-life conditions. In 1956,NATOmodified the then-current set used by theInternational Civil Aviation Organization(ICAO): the NATO version was accepted by ICAO that year, and by theInternational Telecommunication Union(ITU) a few years later, thus becoming the international standard.[1] The 26 code words are as follows (ICAO spellings):Alfa,Bravo,Charlie,Delta,Echo,Foxtrot,Golf,Hotel,India,Juliett,Kilo,Lima,Mike,November,Oscar,Papa,Quebec,Romeo,Sierra,Tango,Uniform,Victor,Whiskey,X-ray,Yankee, andZulu.[Note 1]⟨Alfa⟩and⟨Juliett⟩are spelled that way to avoid mispronunciation by people unfamiliar with Englishorthography; NATO changed⟨X-ray⟩to⟨Xray⟩for the same reason.[2]The code words for digits are their English names, though with their pronunciations modified in the cases ofthree,four,five,nineandthousand. 
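The 26 code words listed above form a simple lookup table. A minimal sketch of spelling out a string might look like the following; the `spell()` helper is illustrative, and digit code words are omitted because the agencies use different digit sets.

```python
# The 26 ICAO code words as a lookup table, plus a hypothetical spell()
# helper for reading out a string letter by letter. Characters without a
# code word (including digits) are passed through unchanged here.

CODE_WORDS = dict(zip(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
    ("Alfa Bravo Charlie Delta Echo Foxtrot Golf Hotel India Juliett "
     "Kilo Lima Mike November Oscar Papa Quebec Romeo Sierra Tango "
     "Uniform Victor Whiskey X-ray Yankee Zulu").split(),
))

def spell(text):
    """Spell out text using the ICAO code words, skipping spaces."""
    return " ".join(CODE_WORDS.get(c.upper(), c)
                    for c in text if not c.isspace())

print(spell("DH"))  # "Delta Hotel"
```

The ICAO spelling "X-ray" is used here; as noted in this article, NATO writes it "Xray" so that it reads as a single word.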
The code words have been stable since 1956. A 1955 NATO memo stated that: It is known that [the spelling alphabet] has been prepared only after the most exhaustive tests on a scientific basis by several nations. One of the firmest conclusions reached was that it was not practical to make an isolated change to clear confusion between one pair of letters. To change one word involves reconsideration of the whole alphabet to ensure that the change proposed to clear one confusion does not itself introduce others.[3] Soon after the code words were developed by ICAO (seehistorybelow), they were adopted by other national and international organizations, including the ITU, theInternational Maritime Organization(IMO), the United States Federal Government as Federal Standard 1037C: Glossary of Telecommunications Terms[4]and its successors ANSI T1.523-2001[5]andATISTelecom Glossary (ATIS-0100523.2019)[6](all three using the spellings "Alpha" and "Juliet"), the United States Department of Defense,[7]theFederal Aviation Administration(FAA) (using the spelling "Xray"), theInternational Amateur Radio Union(IARU), theAmerican Radio Relay League(ARRL), theAssociation of Public-Safety Communications Officials-International(APCO), and by many military organizations such as NATO (using the spelling "Xray") and the now-defunctSoutheast Asia Treaty Organization(SEATO).[citation needed] The same alphabetic code words are used by all agencies, but each agency chooses one of two different sets of numeric code words. NATO uses the regular English numerals (zero,one,two, etc., though with some differences in pronunciation), whereas the ITU (beginning on 1 April 1969)[8]and the IMO created compound code words (nadazero,unaone,bissotwoetc.). 
In practice the compound words are used very rarely.[citation needed] A spelling alphabet is used to distinguish those parts of a message that contain letters and digits, because the names of many letters sound similar, for instance bee and pee, en and em, or ef and ess. The potential for confusion increases if static or other interference is present, as is commonly the case with radio and telephonic communication. For instance, the target message "proceed to map grid DH98" would be transmitted as proceed to map grid Delta-Hotel-Niner-Ait. Civilian industry uses the code words to avoid similar problems in the transmission of messages by telephone systems. For example, the alphabet is often used in the retail industry where customer or site details are conveyed by telephone (for example, to authorize a credit agreement or confirm stock codes), although ad-hoc code words are often used in that instance. It has been used by information technology workers to communicate serial numbers and reference codes, which are often very long, by voice. Most major airlines use the alphabet to communicate passenger name records (PNRs) internally, and in some cases, with customers. It is often used in a medical context as well. Several code words and sequences of code words have become well known, such as Bravo Zulu (letter code BZ) for "well done",[9] Checkpoint Charlie (Checkpoint C) in Berlin, and Zulu Time for Greenwich Mean Time or Coordinated Universal Time. During the Vietnam War, the US government referred to the Viet Cong guerrillas and the group itself as VC, or Victor Charlie; the name "Charlie" became synonymous with this force. The final choice of code words for the letters of the alphabet and for the digits was made after hundreds of thousands of comprehension tests involving 31 nationalities. The qualifying feature was the likelihood of a code word being understood in the context of others.
For example,Footballhas a higher chance of being understood thanFoxtrotin isolation, butFoxtrotis superior in extended communication.[10] Pronunciations were set out by the ICAO before 1956 with advice from the governments of both the United States and United Kingdom.[11]To eliminate national variations in pronunciation, posters illustrating the pronunciation desired by ICAO are available.[12]However, there remain differences in the pronunciations published by ICAO and other agencies, and ICAO has apparently conflicting Latin-alphabet andIPAtranscriptions. At least some of these differences appear to be typographic errors. In 2022, theDeutsches Institut für Normung(DIN) attempted to resolve these conflicts.[13]For example, they consistently transcribe[a]for what the ICAO had transcribed variously as[a],[aː],[ɑ],[ɑː],[æ],[ə]in IPA and asa, ah, ar, erin orthography. Just as words are spelled out as individual letters, numbers are spelled out as individual digits. That is, 17 is rendered asone sevenand 60 assix zero. Depending on context, the wordthousandmay be used as in English, and, for whole hundreds only (when the sequence 00 occurs at the end of a number), the wordhundredmay be used. For example, 1300 is read asone three zero zeroif it is a transponder code or serial number, and asone thousand three hundredif it is an altitude or distance. The ICAO, NATO, and FAA use modifications of English digits as code words, with 3, 4, 5 and 9 being pronouncedtree,fower(rhymes withlower),fifeandniner. 
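The digit-reading convention described above can be sketched as follows. The `altitude_or_distance` flag and the exact wording are illustrative simplifications drawn from the examples in the text; 3, 4, 5, and 9 are spelled in their ordinary forms here, though in speech they are pronounced tree, fower, fife, and niner.

```python
# Sketch of ICAO-style number reading: digits are read individually
# ("17" -> "one seven"), but for altitudes/distances ending in 00,
# "thousand" and "hundred" may be used ("1300" -> "one thousand three
# hundred"). The context flag is an illustrative simplification.

DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def read_number(n, altitude_or_distance=False):
    if altitude_or_distance and n >= 100 and n % 100 == 0:
        thousands, rest = divmod(n, 1000)
        parts = []
        if thousands:
            parts.append(" ".join(DIGITS[d] for d in str(thousands))
                         + " thousand")
        if rest:
            parts.append(DIGITS[str(rest)[0]] + " hundred")
        return " ".join(parts)
    return " ".join(DIGITS[d] for d in str(n))

print(read_number(1300))  # as a transponder code: "one three zero zero"
print(read_number(1300, altitude_or_distance=True))
# as an altitude: "one thousand three hundred"
```

The same number is thus read two different ways depending on whether it is a code (digit by digit) or a quantity (with thousand/hundred).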
The digit 3 is specified astreeso that it will not be mispronouncedsri(and similarlythousandis pronouncedtousand); the long pronunciation of 4 (still found in some English dialects) keeps it somewhat distinct fromfor; 5 is pronounced with a second "f" because the normal pronunciation with a "v" is easily confused with "fire"; and 9 has an extra syllable to keep it distinct from the German wordnein"no".[14](Prior to 1956,threeandfivehad been pronounced with the English consonants, but with the vowels broken into two syllables.) For directions presented as the hour-hand position on a clock, the additional numerals "ten", "eleven" and "twelve" are used with the word "o'clock".[12]: 5–7 The ITU and IMO, however, specify a different set of code words. These are compounds of ICAO and Latinesque roots.[15]The IMO's GMDSS procedures permits the use of either set of code words.[15] There are two IPA transcriptions of the letter names, from theInternational Civil Aviation Organization(ICAO) and theDeutsches Institut für Normung(DIN). Both authorities indicate that anon-rhoticpronunciation is standard.[Note 2]That of the ICAO, first published in 1950 and reprinted many times without correction (e.g. the error in 'golf'), uses a large number of vowels. For instance, it has six low/central vowels:[æ][a][aː][ɑ][ɑː][ə]. The DIN consolidated all six into the single low-central vowel[a]. The DIN vowels are partly predictable, with[ɪɛɔ]in closed syllables and[ie/ei̯o]inopen syllablesapart fromechoandsierra, which have[ɛ]as in English, German and Italian. The DIN also reduced the number of stressed syllables inbravoandx-ray, consistent with the ICAO English respellings of those words and with the NATO change of spelling ofx-raytoxrayso that people would know to pronounce it as a single word. There is no authoritative IPA transcription of the digits. 
However, there are respellings into both English and French, which can be compared to clarify some of the ambiguities and inconsistencies. TheCombined Communications-Electronics Board(CCEB) has code words for punctuation, including those in the table below. Others are: "colon", "semi-colon", "exclamation mark", "question mark", "apostrophe", "quote", and "unquote".[18] Prior toWorld War Iand the development and widespread adoption of two-way radio that supported voice,telephone spelling alphabetswere developed to improve communication on low-quality and long-distance telephone circuits. The first non-military internationally recognized spelling alphabet was adopted by the CCIR (predecessor of theITU) during 1927. The experience gained with that alphabet resulted in several changes being made during 1932 by the ITU. The resulting alphabet was adopted by the International Commission for Air Navigation, the predecessor of the ICAO, and was used for civil aviation untilWorld War II.[11]It continued to be used by the IMO until 1965. Throughout World War II, many nations used their own versions of a spelling alphabet. The US adopted theJoint Army/Navy radiotelephony alphabetduring 1941 to standardize systems among all branches of its armed forces. The US alphabet became known asAble Bakerafter the words for A and B. TheRoyal Air Forceadopted one similar to theUnited Statesone during World War II as well. Other British forces adopted theRAF radio alphabet, which is similar to the phonetic alphabet used by theRoyal Navyduring World War I. At least two of the terms are sometimes still used by UK civilians to spell words over the phone, namelyF for FreddieandS for Sugar. 
To enable the US, UK, and Australian armed forces to communicate during joint operations, in 1943 the CCB (Combined Communications Board; the combination of US and UK upper military commands) modified the US military's Joint Army/Navy alphabet for use by all three nations, with the result being called the US-UK spelling alphabet. It was defined in one or more of CCBP-1:Combined Amphibious Communications Instructions, CCBP3:Combined Radiotelephone (R/T) Procedure, and CCBP-7:Combined Communication Instructions.The CCB alphabet itself was based on the US Joint Army/Navy spelling alphabet. The CCBP (Combined Communications Board Publications) documents contain material formerly published in US Army Field Manuals in the 24-series. Several of these documents had revisions, and were renamed. For instance, CCBP3-2 was the second edition of CCBP3. During World War II, the US military conducted significant research into spelling alphabets. Major F. D. Handy, directorate of Communications in the Army Air Force (and a member of the working committee of the Combined Communications Board), enlisted the help of Harvard University's Psycho-Acoustic Laboratory, asking them to determine the most successful word for each letter when using "military interphones in the intense noise encountered in modern warfare." He included lists from the US, Royal Air Force, Royal Navy, British Army, AT&T, Western Union, RCA Communications, and that of the International Telecommunications Convention. According to a report on the subject: The results showed that many of the words in the military lists had a low level of intelligibility, but that most of the deficiencies could be remedied by the judicious selection of words from the commercial codes and those tested by the laboratory. In a few instances where none of the 250 words could be regarded as especially satisfactory, it was believed possible to discover suitable replacements. 
Other words were tested and the most intelligible ones were compared with the more desirable lists. A final NDRC list was assembled and recommended to the CCB.[25] After World War II, with many aircraft and ground personnel from the allied armed forces, "Able Baker" was officially adopted for use in international aviation. During the 1946 Second Session of the ICAO Communications Division, the organization adopted the so-called "Able Baker" alphabet[10]that was the 1943 US–UK spelling alphabet. However, many sounds were unique to English, so an alternative "Ana Brazil" alphabet was used in Latin America. In spite of this,International Air Transport Association (IATA), recognizing the need for a single universal alphabet, presented a draft alphabet to the ICAO during 1947 that had sounds common to English, French, Spanish and Portuguese. From 1948 to 1949,Jean-Paul Vinay, a professor of linguistics at theUniversité de Montréal, worked closely with the ICAO to research and develop a new spelling alphabet.[26][10]The directions of ICAO were that "To be considered, a word must: After further study and modification by each approving body, the revised alphabet was adopted on1 November 1951, to become effective on 1 April 1952 for civil aviation (but it may not have been adopted by any military).[11] Problems were soon found with this list. Some users believed that they were so severe that they reverted to the old "Able Baker" alphabet. Confusion among words likeDeltaandExtra, and betweenNectarandVictor, or the poor intelligibility of other words during poor receiving conditions were the main problems. Later in 1952, ICAO decided to revisit the alphabet and their research. To identify the deficiencies of the new alphabet, testing was conducted among speakers from 31 nations, principally by the governments of the United Kingdom and the United States. 
In the United States, the research was conducted by the USAF-directed Operational Applications Laboratory (AFCRC, ARDC), to monitor a project with the Research Foundation ofOhio State University. Among the more interesting of the research findings was that "higher noise levels do not create confusion, but do intensify those confusions already inherent between the words in question".[25] By early 1956 the ICAO was nearly complete with this research, and published the new official phonetic alphabet in order to account for discrepancies that might arise in communications as a result of multiple alphabet naming systems coexisting in different places and organizations. NATO was in the process of adopting the ICAO spelling alphabet, and apparently felt enough urgency that it adopted the proposed new alphabet with changes based on NATO's own research, to become effective on 1 January 1956,[27]but quickly issued a new directive on 1 March 1956[28]adopting the now official ICAO spelling alphabet, which had changed by one word (November) from NATO's earlier request to ICAO to modify a few words based on US Air Force research. After all of the above study, only the five words representing the letters C, M, N, U, and X were replaced. The ICAO sent a recording of the newRadiotelephony Spelling Alphabetto all member states in November 1955.[10]The final version given in thetable abovewas implemented by the ICAO on1 March 1956,[11]and the ITU adopted it no later than 1959 when they mandated its usage via their official publication,Radio Regulations.[29]Because the ITU governs all international radio communications, it was also adopted by most radio operators, whether military, civilian, oramateur. It was finally adopted by the IMO in 1965. 
During 1947 the ITU adopted the compound Latinate prefix-number words (Nadazero, Unaone, etc.), later adopted by the IMO during 1965.[citation needed] In the official version of the alphabet,[31] two spellings deviate from the English norm: Alfa and Juliett. Alfa is spelled with an f as it is in most European languages, because the spelling Alpha may not be pronounced properly by native speakers of some languages, who may not know that ph should be pronounced as f. The spelling Juliett is used rather than Juliet for the benefit of French speakers, because they may otherwise treat a single final t as silent. For similar reasons, Charlie and Uniform have alternative pronunciations in which the ch is pronounced "sh" and the u is pronounced "oo". Early on, the NATO alliance changed X-ray to Xray in its version of the alphabet to ensure that it would be pronounced as one word rather than as two,[32] while the global organization ICAO keeps the spelling X-ray. The alphabet is defined by various international conventions on radio, including:[45][42] For the 1938 and 1947 phonetics, each transmission of figures is preceded and followed by the words "as a number" spoken twice. The ITU adopted the IMO phonetic spelling alphabet in 1959,[47] and in 1969 specified that it be "for application in the maritime mobile service only".[48] Pronunciation was not defined prior to 1959. For the post-1959 phonetics, the underlined syllable of each letter word should be emphasized, and each syllable of the code words for the post-1969 figures should be equally emphasized. The Radiotelephony Spelling Alphabet is used by the International Civil Aviation Organization for international aircraft communications.[31][12][45][42] The ITU-R Radiotelephony Alphabet is used by the International Maritime Organization for international marine communications. Since "Nectar" was changed to "November" in 1956, the code has been mostly stable.
However, there is occasional regional substitution of a few code words, such as replacing them with earlier variants, to avoid confusion with local terminology.
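Spelling out text with such an alphabet is a simple per-letter lookup. As a rough sketch (the dictionary below is deliberately abbreviated to a few letters, and the function name is illustrative, not from any standard):

```python
# Minimal sketch of spelling out text with ICAO/NATO code words.
# Only a handful of letters are included here for brevity; a real
# table would cover A-Z plus the digit words.
NATO = {
    "A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta",
    "J": "Juliett", "N": "November", "X": "Xray",
}

def spell(text):
    """Return the code word for each letter, skipping characters
    not present in this abbreviated table."""
    return [NATO[ch] for ch in text.upper() if ch in NATO]

print(spell("cab"))  # ['Charlie', 'Alfa', 'Bravo']
```

Note that the table uses the official spellings discussed above (Alfa, Juliett, Xray), so the code words round-trip unambiguously over a voice channel.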
https://en.wikipedia.org/wiki/NATO_phonetic_alphabet
A Radio code is any code that is commonly used over a telecommunication system, such as Morse code, brevity codes and procedure words. Brevity codes are designed to convey complex information with a few words or codes. Specific brevity codes include: Brevity codes that are specifically designed for use between communications operators and to support communication operations are referred to as "operating signals". These include: Morse code is commonly used in amateur radio. Morse code abbreviations are a type of brevity code. Procedure words, used in radiotelephony procedure, are a type of radio code. Spelling alphabets, including the ICAO spelling alphabet, are commonly used in communication over radios and telephones. Many car audio systems (car radios) have a so-called 'radio code' number which needs to be entered after a power disconnection. This was introduced as a measure to deter theft of these devices. If the code is entered correctly, the radio is activated for use. Entering the code incorrectly several times in a row will cause a temporary or permanent lockout. Some car radios have another check which operates in conjunction with the car's electronics. If the VIN or another vehicle ID matches the previously stored one, the radio is activated. If the radio cannot verify the vehicle, it is considered to have been moved into another vehicle. The radio will then request the code number or simply refuse to operate and display an error message such as "CANCHECK" or "SECURE".
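The anti-theft logic described above can be sketched as a small state machine. This is purely illustrative: the three-attempt limit, the class name, and the message strings are assumptions for the example, not taken from any particular radio model.

```python
# Illustrative sketch of car-radio code / vehicle-ID anti-theft logic.
# The attempt limit (3) and message strings are assumptions.
class CarRadio:
    def __init__(self, secret_code, stored_vin):
        self.secret_code = secret_code
        self.stored_vin = stored_vin
        self.failed_attempts = 0
        self.locked = False
        self.active = False

    def power_on(self, vehicle_vin):
        # If the vehicle ID matches the stored one, activate without a code.
        if vehicle_vin == self.stored_vin:
            self.active = True
            return "OK"
        return "SECURE"  # radio appears moved to another vehicle: ask for code

    def enter_code(self, code):
        if self.locked:
            return "LOCKED"
        if code == self.secret_code:
            self.active = True
            self.failed_attempts = 0
            return "OK"
        self.failed_attempts += 1
        if self.failed_attempts >= 3:
            self.locked = True  # temporary or permanent lockout
        return "WRONG CODE"
```

A real unit would persist the lockout across power cycles and often impose a growing wait time between attempts rather than a hard count.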
https://en.wikipedia.org/wiki/Radio_code
The tap code, sometimes called the knock code, is a way to encode text messages on a letter-by-letter basis in a very simple way. The message is transmitted using a series of tap sounds, hence its name.[1] The tap code has been commonly used by prisoners to communicate with each other. The method of communicating is usually by tapping either the metal bars, pipes or the walls inside a cell. The tap code is based on a Polybius square using a 5×5 grid of letters representing all the letters of the Latin alphabet, except for K, which is represented by C. Each letter is communicated by tapping two numbers, the first designating the row and the second (after a pause) designating the column. For example, to specify the letter "B", one taps once, pauses, and then taps twice. The listener only needs to discriminate the timing of the taps to isolate letters. To communicate the word "hello", the cipher would be the following (with the pause between each number in a pair being shorter than the pause between letters): The letter "X" is used to break up sentences, and "K" for acknowledgements. Because of the difficulty and length of time required for specifying a single letter, prisoners often devise abbreviations and acronyms for common items or phrases, such as "GN" for Good night, or "GBU" for God bless you.[2] By comparison, despite its messages being shorter, Morse code is harder to send by tapping or banging. Its short and long signals can be improvised as taps and thumps, or short and long whistles or scraping sounds, but tap codes are simpler to learn and can be used in a wider variety of situations.[3] The tap system simply requires one to know the alphabet and the short sequence "AFLQV" (the initial letter of each row), without memorising the entire grid. For example, if a person hears four knocks, they can think "A... F... L... Q". If after a pause there are three knocks, they think "Q... R... S" to arrive at the letter S.
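The scheme above is mechanical enough to express directly. The following sketch builds the 5×5 grid (rows starting A, F, L, Q, V, with K sent as C) and converts letters to and from (row, column) tap pairs; the function names are illustrative:

```python
# Sketch of the tap code: a Polybius square over a 25-letter
# alphabet (K is sent as C). Each letter becomes a (row, column)
# pair of tap counts, both 1-based.
GRID = ["ABCDE", "FGHIJ", "LMNOP", "QRSTU", "VWXYZ"]

def encode(text):
    """Return a list of (row, column) tap pairs for the letters."""
    pairs = []
    for ch in text.upper():
        ch = "C" if ch == "K" else ch  # K shares C's cell
        for r, row in enumerate(GRID, start=1):
            if ch in row:
                pairs.append((r, row.index(ch) + 1))
    return pairs

def decode(pairs):
    """Invert encode(): look each pair up in the grid."""
    return "".join(GRID[r - 1][c - 1] for r, c in pairs)

print(encode("hello"))  # [(2, 3), (1, 5), (3, 1), (3, 1), (3, 4)]
```

The example output matches the "hello" illustration in the text: "H" is two taps then three, "E" one tap then five, and so on.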
The origins of this encoding go back to the Polybius square of Ancient Greece. Like the "knock code", a Cyrillic script version is said to have been used by nihilist prisoners of the Russian czars.[5] The knock code is featured in Arthur Koestler's 1941 work Darkness at Noon.[6] Kurt Vonnegut's 1952 novel Player Piano also includes a conversation between prisoners using a form of tap code. The code used in the novel is more primitive and does not make use of the Polybius square (e.g. "P" consists of sixteen taps in a row). United States prisoners of war during the Vietnam War are best known for having used the tap code. It was introduced in June 1965 by four POWs held in the Hỏa Lò ("Hanoi Hilton") prison: Captain Carlyle "Smitty" Harris, Lieutenant Phillip Butler, Lieutenant Robert Peel, and Lieutenant Commander Robert Shumaker.[2][7] Harris had heard of the tap code being used by prisoners in World War II[8] and remembered a United States Air Force instructor who had discussed it as well.[2][9] In Vietnam, the tap code became more widely used than Morse; despite messages taking longer to send, the system was easier to learn and could be applied in a wider variety of situations.[3] Tap codes proved to be a very successful[10] way for otherwise isolated prisoners to communicate.[8][11] POWs would use the tap code to communicate with each other between cells in a way which the guards would be unable to pick up on.[12] They used it to communicate everything from what questions interrogators were asking (in order for everyone to stay consistent with a deceptive story), to who was hurt and needed others to donate meager food rations. It was easy to teach, and newly arrived prisoners became fluent in it within a few days.[13][14] It was even used when prisoners were sitting next to each other but not allowed to talk, by tapping on another's thigh.[14] U.S.
Navy Rear Admiral Jeremiah Denton developed a vocal tap code of coughs, sniffs and sneezes.[3] By overcoming isolation with the tap code, prisoners were said to be able to maintain a chain of command and keep up morale.[8][15] In 1980, a doctor sentenced to life in solitary confinement in Somalia used the tap code to share the entirety of Tolstoy's Anna Karenina, nearly 2 million letters, with fellow prisoners.[16] In the video game Metal Gear 2: Solid Snake, the tap code is used by Dr. Drago Pettrovich Madnar to communicate to Solid Snake through a cell wall. In Season 2, Episode 2 of Person of Interest, the tap code is used by Harold Finch to discreetly leave breadcrumbs of his location to John Reese by encoding his location as tap code on a telephone. In the video game Her Story, the main characters use the tap code to surreptitiously communicate. In Season 2, Episode 14 of The Flash, the masked prisoner in Zoom's lair uses the tap code to try to communicate with the others. In the film The Ice Road, the tap code is used on a metal pipe conduit by trapped miners to communicate with executives of the mining company. In the television show Breaking Bad, the character Hector Salamanca uses a version of the tap code to communicate after suffering a stroke. In an ARG for Cool Math Games, a tap code was used twice, once paired with another cipher to solve and discover lore.
https://en.wikipedia.org/wiki/Tap_code
A telegraph key, clacker, tapper or morse key is a specialized electrical switch used by a trained operator to transmit text messages in Morse code in a telegraphy system.[1] Keys are used in all forms of electrical telegraph systems, including landline (also called wire) telegraphy and radio (also called wireless) telegraphy. An operator uses the telegraph key to send electrical pulses (or in the case of modern CW, unmodulated radio waves) of two different lengths: short pulses, called dots or dits, and longer pulses, called dashes or dahs. These pulses encode the letters and other characters that spell out the message. The first telegraph key was invented by Alfred Vail, an associate of Samuel Morse.[2] Since then the technology has evolved and improved, resulting in a range of key designs.[3] A straight key is the common telegraph key as seen in various movies. It is a simple bar with a knob on top and an electrical contact underneath. When the bar is pressed down against spring tension, it makes a closed electric circuit.[4] Traditionally, American telegraph keys had flat-topped knobs and narrow bars (frequently curved), while European telegraph keys had ball-shaped knobs and thick bars. This appears to be purely a matter of culture and training, but the users of each are tremendously partisan.[a] Straight keys have been made in numerous variations for over 150 years and in numerous countries. They are the subject of an avid community of key collectors. Straight keys also had a shorting bar that closed the electrical circuit through the station when the operator was not actively sending messages. The shorting switch for an unused key was needed in telegraph systems wired in the style of North American railroads, in which the signal power was supplied from batteries only in telegraph offices at one or both ends of a line, rather than each station having its own bank of batteries, as was often the case in Europe.
The shorting bar completed the electrical path to the next station and all following stations, so that their sounders could respond to signals coming down the line, allowing the operator in the next town to receive a message from the central office. Although occasionally included in later keys for reasons of tradition, the shorting bar is unnecessary for radio telegraphy, except as a convenience to produce a steady signal for tuning the transmitter. The straight key is simple and reliable, but the rapid pumping action needed to send a string of dots (or dits as most operators call them) poses some medically significant drawbacks. Transmission speeds vary from 5 words (25 characters) per minute, by novice operators, up to about 30 words (150 characters) per minute by skilled operators. In the early days of telegraphy, a number of professional telegraphers developed a repetitive stress injury known as glass arm or telegraphers' paralysis.[5] "Glass arm" may be reduced or eliminated by increasing the side play of the straight key, by loosening the adjustable trunnion screws. Such problems can also be avoided by using good manual technique, or by using only side-to-side key types.[6][7][8] In addition to the basic up-and-down telegraph key, telegraphers have been experimenting with alternative key designs since the beginning of telegraphy. Many are made to move side-to-side instead of up-and-down. Some of the designs, such as sideswipers (or bushwhackers) and semi-automatic keys, operate mechanically. Beginning in the mid-20th century, electronic devices called keyers have been developed, which are operated by special keys of various designs, generally categorized as single-paddle keys (also called sideswipers) and double-paddle keys (or "iambic"[b] or "squeeze" keys). The keyer may be either an independent device that attaches to the transmitter in place of a telegraph key, or circuitry incorporated in modern amateurs' radios.
The first widely accepted alternative key was the sideswiper or sidewinder, sometimes called a cootie key or bushwhacker. This key uses a side-to-side action with contacts on both the left and right and the arm spring-loaded to return to center; the operator may make a dit or dah by swinging the lever in either direction. A series of dits can be sent by rocking the arm back and forth. This first new style of key was introduced in part to increase speed of sending, but more importantly to reduce the repetitive strain injury which telegraphers called "glass arm". The side-to-side motion reduces strain, and uses different muscles than the up-and-down motion (called "pounding brass"). Nearly all advanced keys use some form of side-to-side action. The alternating action produces a distinctive rhythm or swing which noticeably affects the operator's transmission rhythm (known as fist). Although the original sideswiper is now rarely seen or used, when the left and right contacts are electrically separated a sideswiper becomes a modern single-paddle key (see below); likewise, a modern single-lever key becomes an old-style sideswiper when its two contacts are wired together. A popular side-to-side key is the semi-automatic key or "bug", sometimes known as a Vibroplex key after an early manufacturer of mechanical, semi-automatic keys. The original bugs were fully mechanical, based on a kind of simple clockwork mechanism, and required no electronic keyer. A skilled operator can achieve sending speeds in excess of 40 words per minute with a bug. The benefit of the clockwork mechanism is that it reduces the motion required from the telegrapher's hand, which provides greater speed of sending, and it produces uniformly timed dits (dots, or short pulses) and maintains constant rhythm; consistent timing and rhythm are crucial for decoding the signal on the other end of the telegraph line. The single paddle is held between the knuckle and the thumb of the right hand.
When the paddle is pressed to the right (with the thumb), it kicks a horizontal pendulum which then rocks against the contact point, sending a series of short pulses (dits or dots) at a speed which is controlled by the pendulum's length. When the paddle is pressed toward the left (with the knuckle) it makes a continuous contact suitable for sending dahs (dashes); the telegrapher remains responsible for timing the dahs to proportionally match the dits. The clockwork pendulum needs the extra kick that the stronger thumb press provides, which established the standard left-right paddle directions for the dit-dah assignments that persist on the paddles of 21st-century electronic keys. A few semi-automatic keys were made with mirror-image mechanisms for left-handed telegraphers. Like semi-automatic keys, the telegrapher operates an electronic keyer by tapping a paddle key, swinging its lever(s) from side to side. When pressed to one side (usually left), the keyer electronics generate a series of dahs; when pressed to the other side (usually right), a series of dits. Keyers work with two different types of keys: single-paddle and double-paddle keys. Like semi-automatic keys, pressing the paddle on one side produces a dit and the other a dah. Single-paddle keys are also called single-lever keys or sideswipers, the same name as the older side-to-side key design they greatly resemble. Double-paddle keys are also called "iambic" keys[b] or "squeeze" keys. Also like the old semi-automatic keys, the conventional assignment of the paddle directions (for a right-handed telegrapher) is that pressing a paddle with the right thumb (pressing the single paddle rightward, or for a double-paddle key, pressing the left paddle with the thumb, rightwards towards the center) creates a series of dits. Pressing a paddle with the right knuckle (hence swinging a single paddle leftward, or the right paddle on a double-paddle key leftward to the center) creates a series of dahs.
Left-handed telegraphers sometimes elect to reverse the electrical contacts, so their left-handed keying is a mirror image of standard right-handed keying. Single-paddle keys are essentially the same as the original sideswiper keys, with the left and right electrical contacts wired separately. Double-paddle keys have one arm for each of the two contacts, each arm held away from the common center by a spring; pressing either of the paddles towards the center makes contact, the same as pressing a single-lever key to one side. For double-paddle keys wired to an "iambic" keyer, squeezing both paddles together makes a double contact, which causes the keyer to send alternating dits and dahs (or dahs and dits, depending on which lever makes first contact). Most electronic keyers include dot and dash memory functions, so the operator does not need to use perfect spacing between dits and dahs or vice versa. With dit or dah memory, the operator's keying action can be about one dit ahead of the actual transmission. The electronics in the keyer adjust the timing so that the output of each letter is machine-perfect. Electronic keyers allow very high speed transmission of code. Using a keyer in "iambic" mode requires a key with two paddles: one paddle produces dits and the other produces dahs. Pressing both at the same time (a "squeeze") produces an alternating dit-dah-dit-dah (▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄) sequence, which starts with a dit if the dit side makes contact first, or a dah (▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄) if the dah side connects first. An additional advantage of electronic keyers over semi-automatic keys is that code speed is easily changed with electronic keyers, just by turning a knob. With a semi-automatic key, the location of the pendulum weight and the pendulum spring tension and contact must all be repositioned and rebalanced to change the dit speed.[9] Keys having two separate levers, one for dits and the other for dahs, are called dual or dual-lever paddles.
With a dual paddle both contacts may be closed simultaneously, enabling the "iambic"[b] functions of an electronic keyer that is designed to support them: by pressing both paddles (squeezing the levers together) the operator can create a series of alternating dits and dahs, analogous to a sequence of iambs in poetry.[10][11] For that reason, dual paddles are sometimes called squeeze keys or iambic keys. Typical dual-paddle keys' levers move horizontally, like the earlier single-paddle keys, as opposed to how the original straight keys' arms move up-and-down. Whether the sequence begins with a dit or a dah is determined by which lever makes contact first: if the dah lever is closed first, then the first element will be a dah, so the string of elements will be similar to a sequence of trochees in poetry, and the method could logically just as well be called "trochaic keying" (▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄). If the dit lever makes first contact, then the string begins with a dit (▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄). Insofar as iambic[b] keying is a function of the electronic keyer, it is not technically correct to refer to a dual-paddle key itself as "iambic", although this is commonly done in marketing. A dual-paddle key is required for iambic sending, which also requires an iambic keyer. But any single- or dual-paddle key can be used non-iambically, without squeezing, and there were electronic keyers made which did not have iambic functions. Iambic keying or squeeze keying reduces the key strokes or hand movements necessary to make some characters, e.g. the letter C, which can be sent by merely squeezing the two paddles together. With a single-paddle or non-iambic keyer, the hand motion would require alternating four times for C (dah-dit-dah-dit ▄▄▄ ▄ ▄▄▄ ▄).
The efficiency of iambic keying has recently been discussed in terms of movements per character and timings for high-speed CW, with the author concluding that the timing difficulties of correctly operating a keyer iambically at high speed outweigh any small benefits.[12] Iambic keyers function in one of at least two major modes: mode A and mode B. There is a third, rarely available mode U. Mode A is the original iambic mode, in which alternate dots and dashes are produced as long as both paddles are depressed. Mode A is essentially "what you hear is what you get": when the paddles are released, the keying stops with the last dot or dash that was being sent while the paddles were held. Mode B is the second mode, which devolved from a logic error in an early iambic keyer.[citation needed] Over the years iambic mode B has become something of a standard and is the default setting in most keyers. In mode B, dots and dashes are produced as long as both paddles are depressed. When the paddles are released, the keying continues by sending one more element than has already been heard. That is, if the paddles were released during a dah then the last element sent will be a following dit; if the paddles were released during a dit then the sequence will end with the following dah. Users accustomed to one mode may find it difficult to adapt to the other, so most modern keyers allow selection of the desired mode. A third electronic keyer mode useful with a dual paddle is the "Ultimatic" mode (mode U), so called for the brand name of the electronic keyer that introduced it. In the Ultimatic keying mode, the keyer will switch to the opposite element if the second lever is pressed before the first is released (that is, squeezed). A single-lever paddle key has separate contacts for dits and dahs, but there is no ability to make both contacts simultaneously by squeezing the paddles together for iambic mode.
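The difference between modes A and B can be sketched as a tiny model. This is a deliberate simplification (it ignores element timing and paddle memory, and the function name is illustrative): it takes the paddle that made first contact, the number of elements completed before release, and the mode, and returns the alternating element sequence, with mode B appending the one extra element described above.

```python
# Simplified model of an iambic keyer's squeeze behavior.
# '.' is a dit, '-' is a dah; timing and paddle memory are ignored.
def iambic_squeeze(first, n_heard, mode):
    """first   -- '.' or '-': whichever paddle made contact first
    n_heard -- elements completed before the paddles are released
    mode    -- 'A' (stop with the last element heard) or
               'B' (send one extra alternating element after release)
    """
    other = '-' if first == '.' else '.'
    seq = [first if i % 2 == 0 else other for i in range(n_heard)]
    if mode == 'B':
        # Mode B appends the next element of the alternation.
        seq.append(first if n_heard % 2 == 0 else other)
    return ''.join(seq)

# Letter C (dah-dit-dah-dit): in mode B the operator may release the
# squeeze after three elements; the keyer supplies the final dit.
print(iambic_squeeze('-', 3, 'B'))  # -.-.
```

In mode A the operator must hold the squeeze through all four elements of C (`iambic_squeeze('-', 4, 'A')`), which illustrates why users trained on one mode mis-time characters on the other.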
When a single-paddle key is used with an electronic keyer, continuous dits are created by holding the dit-side paddle (▄ ▄ ▄ ▄ ▄ ▄ ▄ ▄...); likewise, continuous dahs are created by holding the dah paddle (▄▄▄ ▄▄▄ ▄▄▄ ▄▄▄ ▄▄▄...). A single-paddle key can non-iambically operate any electronic keyer, whether or not it even offers iambic functions, and regardless of whether the keyer iambically operates in mode A, B, or U. Simple telegraph-like keys were long used to control the flow of electricity in laboratory tests of electrical circuits. Often, these were simple "strap" keys, in which a bend in the key lever provided the key's spring action. Telegraph-like keys were once used in the study of operant conditioning with pigeons. Starting in the 1940s, initiated by B. F. Skinner at Harvard University, the keys were mounted vertically behind a small circular hole, about the height of a pigeon's beak, in the front wall of an operant conditioning chamber. Electromechanical recording equipment detected the closing of the switch whenever the pigeon pecked the key. Depending on the psychological questions being investigated, keypecks might have resulted in the presentation of food or other stimuli. With straight keys, side-swipers, and, to an extent, bugs, each and every telegrapher has their own unique style or rhythm pattern when transmitting a message. An operator's style is known as their "fist". Since every fist is unique, other telegraphers can usually identify the individual telegrapher transmitting a particular message. This had a huge significance during the First and Second World Wars, since the on-board telegrapher's fist could be used to track individual ships and submarines, and for traffic analysis. However, with electronic keyers (either single- or double-paddle) this is no longer the case: keyers produce uniformly "perfect" code at a set speed, which is altered at the request of the receiver, usually not the sender.
Only inter-character and inter-word spacing remain unique to the operator, and these convey at best a faint semblance of a fist.
https://en.wikipedia.org/wiki/Telegraph_key
Wabun code (和文モールス符号, wabun mōrusu fugō, Morse code for Japanese text) is a form of Morse code used to send the Japanese language in kana characters.[1] Unlike International Morse Code, which represents letters of the Latin script, in Wabun each symbol represents a Japanese kana.[2] For this reason, Wabun code is also sometimes called Kana code.[3] When Wabun code is intermixed with International Morse code, the prosign DO (▄▄▄ ▄ ▄ ▄▄▄ ▄▄▄ ▄▄▄) is used to announce the beginning of Wabun, and the prosign SN (▄ ▄ ▄ ▄▄▄ ▄) is used to announce the return to International Code. Kana in Iroha order.
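The DO/SN switching convention amounts to framing a Wabun segment inside an International Morse transmission. A minimal sketch, assuming the kana themselves have already been encoded elsewhere (no kana-to-Morse table is given here); the prosign patterns are the ones shown above, written with `.` for dit and `-` for dah:

```python
# Sketch of framing a Wabun segment with the switching prosigns.
# DO announces the start of Wabun; SN announces the return to
# International Code. Kana encoding itself is out of scope here.
PROSIGN_DO = "-..---"  # dah dit dit dah dah dah
PROSIGN_SN = "...-."   # dit dit dit dah dit

def wrap_wabun(wabun_elements):
    """Frame a pre-encoded sequence of Wabun symbols (a list of
    dit/dah strings) with the DO and SN prosigns."""
    return [PROSIGN_DO] + list(wabun_elements) + [PROSIGN_SN]
```

For example, `wrap_wabun(["...."])` yields a three-element transmission: DO, the (hypothetical) kana symbol, then SN.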
https://en.wikipedia.org/wiki/Wabun_code
Wireless telegraphy or radiotelegraphy is the transmission of text messages by radio waves, analogous to electrical telegraphy using cables.[1][2] Before about 1910, the term wireless telegraphy was also used for other experimental technologies for transmitting telegraph signals without wires.[3][4] In radiotelegraphy, information is transmitted by pulses of radio waves of two different lengths called "dots" and "dashes", which spell out text messages, usually in Morse code. In a manual system, the sending operator taps on a switch called a telegraph key which turns the transmitter on and off, producing the pulses of radio waves. At the receiver the pulses are audible in the receiver's speaker as beeps, which are translated back to text by an operator who knows Morse code. Radiotelegraphy was the first means of radio communication. The first practical radio transmitters and receivers, invented in 1894–1895 by Guglielmo Marconi, used radiotelegraphy.[5] It continued to be the only type of radio transmission during the first few decades of radio, called the "wireless telegraphy era", up until World War I, when the development of amplitude modulation (AM) radiotelephony allowed sound (audio) to be transmitted by radio. Beginning about 1908, powerful transoceanic radiotelegraphy stations transmitted commercial telegram traffic between countries at rates up to 200 words per minute. Radiotelegraphy was used for long-distance person-to-person commercial, diplomatic, and military text communication throughout the first half of the 20th century.[6] It became a strategically important capability during the two world wars,[7] since a nation without long-distance radiotelegraph stations could be isolated from the rest of the world by an enemy cutting its submarine telegraph cables. Radiotelegraphy remains popular in amateur radio. It is also taught by the military for use in emergency communications.
However, commercial radiotelegraphy is obsolete.[8] Wireless telegraphy or radiotelegraphy, commonly called CW (continuous wave), ICW (interrupted continuous wave) transmission, or on-off keying, and designated by the International Telecommunication Union as emission type A1A or A2A, is a radio communication method. It was transmitted by several different modulation methods during its history. The primitive spark-gap transmitters used until 1920 transmitted damped waves, which had very wide bandwidth and tended to interfere with other transmissions. This type of emission was banned by 1934, except for some legacy use on ships.[9][10][11] The vacuum tube (valve) transmitters which came into use after 1920 transmitted code by pulses of unmodulated sinusoidal carrier wave called continuous wave (CW), which is still used today. To receive CW transmissions, the receiver requires a circuit called a beat frequency oscillator (BFO).[12][13] The third type of modulation, frequency-shift keying (FSK), was used mainly by radioteletype networks (RTTY). Morse code radiotelegraphy was gradually replaced by radioteletype in most high-volume applications by World War II. In manual radiotelegraphy the sending operator manipulates a switch called a telegraph key, which turns the radio transmitter on and off, producing pulses of unmodulated carrier wave of different lengths called "dots" and "dashes", which encode characters of text in Morse code.[14] At the receiving location, Morse code is audible in the receiver's earphone or speaker as a sequence of buzzes or beeps, which is translated back to text by an operator who knows Morse code. In automatic radiotelegraphy, teleprinters at both ends use a code such as the International Telegraph Alphabet No. 2 and produce typed text.
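The on-off keying described above reduces to a schedule of carrier-on and carrier-off intervals. The sketch below illustrates the standard Morse timing convention (dit = 1 unit, dah = 3 units, 1 unit between elements, 3 units between letters); the three-letter table is a minimal stub for the example, not the full alphabet:

```python
# Illustrative sketch of on-off keying timing for Morse code.
# A dit holds the key down 1 unit, a dah 3 units; elements within a
# letter are separated by 1 unit of silence, letters by 3 units.
MORSE = {"S": "...", "O": "---", "E": "."}  # minimal stub table

def key_timing(text, unit=0.05):
    """Return (state, seconds) pairs, where state True = carrier on."""
    events = []
    for i, ch in enumerate(text.upper()):
        if i > 0:
            events.append((False, 3 * unit))  # inter-letter gap
        for j, el in enumerate(MORSE[ch]):
            if j > 0:
                events.append((False, unit))  # inter-element gap
            events.append((True, unit if el == "." else 3 * unit))
    return events

# "E" is a single dit: one key-down interval of one unit.
print(key_timing("E", unit=1))  # [(True, 1)]
```

A transmitter driver would walk this list, switching the carrier on and off for the stated durations; a receiver's BFO turns the on intervals into audible beeps.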
Radiotelegraphy is obsolete in commercial radio communication, and its last civilian use, requiring maritime shipping radio operators to use Morse code for emergency communications, ended in 1999 when the International Maritime Organization switched to the satellite-based GMDSS system.[8] However, it is still used by amateur radio operators, and military services require signalmen to be trained in Morse code for emergency communication.[15][16] A CW coastal station, KSM, still exists in California, run primarily as a museum by volunteers,[17] and occasional contacts with ships are made. In a minor legacy use, VHF omnidirectional range (VOR) and NDB radio beacons in the aviation radio navigation service still transmit their one- to three-letter identifiers in Morse code. Radiotelegraphy is popular amongst radio amateurs worldwide, who commonly refer to it as continuous wave, or just CW. A 2021 analysis of over 700 million communications logged by the Club Log blog,[18] and a similar review of data logged by the American Radio Relay League,[19] both show that wireless telegraphy is the second most popular mode of amateur radio communication, accounting for nearly 20% of contacts. This makes it more popular than voice communication, but not as popular as the FT8 digital mode, which accounted for 60% of amateur radio contacts made in 2021. Since 2003, knowledge of Morse code and wireless telegraphy has no longer been required to obtain an amateur radio license in many countries.[20] It is, however, still required in some countries to obtain a licence of a different class.
As of 2021, licence Class A in Belarus and Estonia, the General class in Monaco, and Class 1 in Ukraine require Morse proficiency to access the full amateur radio spectrum, including the high frequency (HF) bands.[20] Further, the CEPT Class 1 licence in Ireland[21] and Class 1 in Russia,[20] both of which require proficiency in wireless telegraphy, offer additional privileges: a shorter and more desirable call sign in both countries, and the right to use a higher transmit power in Russia.[22] Efforts to find a way to transmit telegraph signals without wires grew out of the success of electric telegraph networks, the first instant telecommunication systems.[23] Developed beginning in the 1830s, a telegraph line was a person-to-person text message system consisting of multiple telegraph offices linked by an overhead wire supported on telegraph poles. To send a message, an operator at one office would tap on a switch called a telegraph key, creating pulses of electric current which spelled out a message in Morse code. When the key was pressed, it would connect a battery to the telegraph line, sending current down the wire. At the receiving office, the current pulses would operate a telegraph sounder, a device that would make a "click" sound when it received each pulse of current. The operator at the receiving station, who knew Morse code, would translate the clicking sounds to text and write down the message. The ground was used as the return path for current in the telegraph circuit, to avoid having to use a second overhead wire.[24] By the 1860s, the telegraph was the standard way to send most urgent commercial, diplomatic and military messages, and industrial nations had built continent-wide telegraph networks, with submarine telegraph cables allowing telegraph messages to bridge oceans.[25] However, installing and maintaining a telegraph line linking distant stations was very expensive, and wires could not reach some locations, such as ships at sea.
Inventors realized that if a way could be found to send electrical impulses of Morse code between separate points without a connecting wire, it could revolutionize communications. The successful solution to this problem was the discovery of radio waves in 1887, and the development of practical radiotelegraphy transmitters and receivers by about 1899.[26] Over several years starting in 1894, the Italian inventor Guglielmo Marconi worked on adapting the newly discovered phenomenon of radio waves to communication, turning what was essentially a laboratory experiment up to that point into a useful communication system,[27][28] building the first radiotelegraphy system using them.[29] William Preece and the General Post Office (GPO) in Britain at first supported and gave financial backing to Marconi's experiments conducted on Salisbury Plain from 1896. Preece had become convinced of the idea through his experiments with wireless induction. However, the backing was withdrawn when Marconi formed the Wireless Telegraph & Signal Company. GPO lawyers determined that the system was a telegraph under the meaning of the Telegraph Act and thus fell under the Post Office monopoly. This did not seem to hold back Marconi.[30]: 243–244 After Marconi sent wireless telegraphic signals across the Atlantic Ocean in 1901, the system began being used for regular communication, including ship-to-shore and ship-to-ship communication.[31] With this development, wireless telegraphy came to mean radiotelegraphy: Morse code transmitted by radio waves. The first radio transmitters, primitive spark gap transmitters used until World War I, could not transmit voice (audio signals). Instead, the operator would send the text message on a telegraph key, which turned the transmitter on and off, producing short ("dot") and long ("dash") pulses of radio waves, groups of which comprised the letters and other symbols of the Morse code.
At the receiver, the signals could be heard as musical "beeps" in the earphones by the receiving operator, who would translate the code back into text. By 1910, communication by what had been called "Hertzian waves" was being universally referred to as "radio",[32] and the term "wireless telegraphy" was largely replaced by the more modern term "radiotelegraphy". The primitive spark-gap transmitters used until 1920 transmitted by a modulation method called damped wave. As long as the telegraph key was pressed, the transmitter would produce a string of transient pulses of radio waves which repeated at an audio rate, usually between 50 and several thousand hertz.[33] In a receiver's earphone, this sounded like a musical tone, rasp or buzz. Thus the Morse code "dots" and "dashes" sounded like beeps. Damped wave had a large frequency bandwidth, meaning that the radio signal was not a single frequency but occupied a wide band of frequencies. Damped wave transmitters had a limited range and interfered with the transmissions of other transmitters on adjacent frequencies.[34] After 1905 new types of radiotelegraph transmitters were invented which transmitted code using a new modulation method: continuous wave (CW)[35] (designated by the International Telecommunication Union as emission type A1A).[36] As long as the telegraph key was pressed, the transmitter produced a continuous sinusoidal wave of constant amplitude.[35] Since all the radio wave's energy was concentrated at a single frequency, CW transmitters could transmit further with a given power, and also caused virtually no interference to transmissions on adjacent frequencies. The first transmitters able to produce continuous wave were the arc converter (Poulsen arc) transmitter, invented by Danish engineer Valdemar Poulsen in 1903,[37] and the Alexanderson alternator, invented 1906–1912 by Reginald Fessenden and Ernst Alexanderson.[38] These slowly replaced the spark transmitters in high power radiotelegraphy stations.
However, the radio receivers used for damped wave could not receive continuous wave. Because the CW signal produced while the key was pressed was just an unmodulated carrier wave, it made no sound in a receiver's earphones.[39] To receive a CW signal, some way had to be found to make the Morse code carrier wave pulses audible in a receiver. This problem was solved by Reginald Fessenden in 1901. In his "heterodyne" receiver, the incoming radiotelegraph signal is mixed in the receiver's detector crystal or vacuum tube with a constant sine wave generated by an electronic oscillator in the receiver called a beat frequency oscillator (BFO). The frequency of the oscillator, f_BFO, is offset from the radio transmitter's frequency, f_IN. In the detector the two frequencies subtract, and a beat frequency (heterodyne) at the difference between the two frequencies is produced:

f_BEAT = |f_IN − f_BFO|.[40]

If the BFO frequency is near enough to the radio station's frequency, the beat frequency is in the audio frequency range and can be heard in the receiver's earphones.[40] During the "dots" and "dashes" of the signal, the beat tone is produced, while between them there is no carrier, so no tone is produced. Thus the Morse code is audible as musical "beeps" in the earphones. The BFO was rare until the invention in 1913 of the first practical electronic oscillator, the vacuum tube feedback oscillator by Edwin Armstrong. After this time BFOs were a standard part of radiotelegraphy receivers. Each time the radio was tuned to a different station frequency, the BFO frequency had to be changed as well, so the BFO had to be tunable. In later superheterodyne receivers from the 1930s on, the BFO signal was mixed with the constant intermediate frequency (IF) produced by the superheterodyne's detector.
Therefore, the BFO could be a fixed frequency.[41] Continuous-wave vacuum tube transmitters replaced the other types of transmitter with the availability of power tubes after World War I, because they were cheap. CW became the standard method of transmitting radiotelegraphy by the 1920s; damped wave spark transmitters were banned by 1930,[10] and CW continues to be used today. Even today most communications receivers produced for use in shortwave communication stations have BFOs.[42] The International Radiotelegraph Union was unofficially established at the first International Radiotelegraph Convention in 1906, and was merged into the International Telecommunication Union in 1932.[43] When the United States entered World War I, private radiotelegraphy stations were prohibited, which put an end to several pioneers' work in this field.[44] By the 1920s, there was a worldwide network of commercial and government radiotelegraphic stations, plus extensive use of radiotelegraphy by ships for both commercial purposes and passenger messages.[10] The transmission of sound (radiotelephony) began to displace radiotelegraphy by the 1920s for many applications, making radio broadcasting possible.[45] Wireless telegraphy continued to be used for private person-to-person business, governmental, and military communication, such as telegrams and diplomatic communications, and evolved into radioteletype networks.[46] The ultimate implementation of wireless telegraphy was telex, using radio signals, which was developed in the 1930s and was for many years the only reliable form of communication between many distant countries.[47] The most advanced standard, CCITT R.44, automated both routing and encoding of messages by short wave transmissions.[48] Today, due to more modern text transmission methods, Morse code radiotelegraphy for commercial use has become obsolete.
On shipboard, computers and the satellite-linked GMDSS system have largely replaced Morse as a means of communication.[49][50] Continuous wave (CW) radiotelegraphy is regulated by the International Telecommunication Union (ITU) as emission type A1A.[36] The US Federal Communications Commission issues a lifetime commercial Radiotelegraph Operator License. This requires passing a simple written test on regulations, a more complex written exam on technology, and demonstrating Morse reception at 20 words per minute plain language and 16 wpm code groups. (Credit is given for amateur extra class licenses earned under the old 20 wpm requirement.)[51]
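The heterodyne beat-frequency relation described above, f_BEAT = |f_IN − f_BFO|, is simple enough to sketch in code. The station and BFO frequencies below are illustrative values chosen for this sketch, not figures from the source:

```python
def beat_frequency(f_in: float, f_bfo: float) -> float:
    """Audio tone heard when a BFO at f_bfo mixes with a CW carrier at f_in."""
    return abs(f_in - f_bfo)

# Hypothetical example: a CW carrier at 500 kHz with the BFO offset by 700 Hz
# produces a 700 Hz beat note, comfortably inside the audio range.
tone = beat_frequency(500_000.0, 500_700.0)  # 700.0 Hz
```

Retuning the receiver changes f_IN, which is why early BFOs had to be tunable, while a superheterodyne's fixed intermediate frequency lets the BFO stay fixed.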
https://en.wikipedia.org/wiki/Wireless_telegraphy
In cryptography, a distributed point function is a cryptographic primitive that allows two distributed processes to share a piece of information, and compute functions of their shared information, without revealing the information itself to either process. It is a form of secret sharing.[1] Given any two values a and b one can define a point function P_{a,b}(x) (a variant of the Kronecker delta function) by

P_{a,b}(x) = b if x = a, and P_{a,b}(x) = 0 otherwise.

That is, it is zero everywhere except at a, where its value is b.[1] A distributed point function consists of a family of functions f_k, parameterized by keys k, and a method for deriving two keys q and r from any two input values a and b, such that for all x,

f_q(x) ⊕ f_r(x) = P_{a,b}(x),

where ⊕ denotes the bitwise exclusive or of the two function values. However, given only one of these two keys, the values of f for that key should be indistinguishable from random.[1] It is known how to construct an efficient distributed point function from another cryptographic primitive, a one-way function.[1] In the other direction, if a distributed point function is known, then it is possible to perform private information retrieval. As a simplified example of this, it is possible to test whether a key a belongs to a replicated distributed database without revealing to the database servers (unless they collude with each other) which key was sought. To find the key a in the database, create a distributed point function for P_{a,1}(x) and send the resulting two keys q and r to two different servers holding copies of the database. Each copy applies its function f_q or f_r to all the keys in its copy of the database, and returns the exclusive or of the results.
The two returned values will differ ifa{\displaystyle a}belongs to the database, and will be equal otherwise.[1]
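The correctness condition f_q(x) ⊕ f_r(x) = P_{a,b}(x) can be illustrated with the trivial, inefficient construction in which each key is a full table over the domain: one key is uniformly random, and the other is a copy with position a flipped by b. Each key alone is a uniformly random table, but the keys are as large as the domain; real distributed point functions achieve short keys via one-way functions, so this is only a toy sketch:

```python
import secrets

def gen_keys(a: int, b: int, domain_size: int, value_bits: int = 8):
    """Toy DPF key generation: keys are full tables, so key size = domain size."""
    q = [secrets.randbits(value_bits) for _ in range(domain_size)]
    r = list(q)
    r[a] ^= b  # now q[x] XOR r[x] equals b at x = a, and 0 elsewhere
    return q, r

def eval_key(key, x: int) -> int:
    return key[x]

# Private membership test (the simplified PIR example above): each server XORs
# f over every key in its copy of the database; the two answers differ exactly
# when `a` is present, yet each server sees only a random-looking table.
def server_answer(key, database):
    ans = 0
    for k in database:
        ans ^= eval_key(key, k)
    return ans
```

Sending q to one server and r to the other, the client learns whether a is in the database from whether the two answers differ, without either server learning a.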
https://en.wikipedia.org/wiki/Distributed_point_function
In the history of cryptography, a grille cipher was a technique for encrypting a plaintext by writing it onto a sheet of paper through a pierced sheet (of paper or cardboard or similar). The earliest known description is due to Jacopo Silvestri in 1526.[1] His proposal was for a rectangular stencil allowing single letters, syllables, or words to be written, then later read, through its various apertures. The written fragments of the plaintext could be further disguised by filling the gaps between the fragments with anodyne words or letters. This variant is also an example of steganography, as are many of the grille ciphers. The Cardan grille was invented as a method of secret writing. The word cryptography became the more familiar term for secret communications from the middle of the 17th century. Earlier, the word steganography was common.[citation needed] The other general term for secret writing was cypher (also spelt cipher). There is a modern distinction between cryptography and steganography. Sir Francis Bacon gave three fundamental conditions for ciphers. Paraphrased, these are: that a cipher should be easy to write and to read; that it should be impossible for anyone else to decipher; and, in some cases, that it should not arouse suspicion of being a cipher at all. It is difficult to fulfil all three conditions simultaneously. Condition 3 applies to steganography. Bacon meant that a cipher message should, in some cases, not appear to be a cipher at all. The original Cardan grille met that aim. Variations on the Cardano original, however, were not intended to fulfill condition 3 and generally failed to meet condition 2 as well. But few if any ciphers have ever achieved this second condition, so grille ciphers, whenever used, are generally a cryptanalyst's delight. The attraction of a grille cipher for users lies in its ease of use (condition 1). In short, it is very simple. Not all ciphers are used for communication with others: records and reminders may be kept in cipher for the use of the author alone. A grille is easily usable for protection of brief information, such as a key word or a key number, in such a use.
In the case of communication by grille cipher, both sender and recipient must possess an identical copy of the grille. The loss of a grille leads to the probable loss of all secret correspondence encrypted with that grille. Either the messages cannot be read (i.e., decrypted) or someone else (with the lost grille) may be reading them. A further use for such a grille has been suggested: it is a method of generating pseudo-random sequences from a pre-existing text. This view has been proposed in connection with the Voynich manuscript. It is an area of cryptography that David Kahn termed enigmatology, and it touches on the works of Dr John Dee and ciphers supposedly embedded in the works of Shakespeare proving that Francis Bacon wrote them, which William F. Friedman examined and discredited.[2] The Elizabethan spymaster Sir Francis Walsingham (1530–1590) is reported to have used a "trellis" to conceal the letters of a plaintext in communication with his agents. However, he generally preferred the combined code-cipher method known as a nomenclator, which was the practical state of the art in his day. The trellis was described as a device with spaces that was reversible. It appears to have been a transposition tool that produced something much like the rail fence cipher and resembled a chess board. Cardano is not known to have proposed this variation, but he was a chess player who wrote a book on gaming, so the pattern would have been familiar to him. Whereas the ordinary Cardan grille has arbitrary perforations, if his method of cutting holes is applied to the white squares of a chess board, a regular pattern results. The encipherer begins with the board in the wrong position for chess. Each successive letter of the message is written in a single square. If the message is written vertically, it is taken off horizontally, and vice versa.
After filling in 32 letters, the board is turned through 90 degrees and another 32 letters written (note that flipping the board horizontally or vertically is equivalent). Shorter messages are filled with null letters (i.e., padding). Messages longer than 64 letters require another turn of the board and another sheet of paper. If the plaintext is too short, each square must be filled up entirely with nulls.

J M T H H D L I
S I Y P S L U I
A O W A E T I E
E N W A P D E N
E N E L G O O N
N A I T E E F N
K E R L O O N D
D N T T E N R X

This transposition method produces an invariant pattern and is not satisfactorily secure for anything other than cursory notes.

33  5 41 13 49 21 57 29
 1 37  9 45 17 53 25 61
34  6 42 14 50 22 58 30
 2 38 10 46 18 54 26 62
35  7 43 15 51 23 59 31
 3 39 11 47 19 55 27 63
36  8 44 16 52 24 60 32
 4 40 12 48 20 56 28 64

A second transposition is needed to obscure the letters. Following the chess analogy, the route taken might be the knight's move. Or some other path can be agreed upon, such as a reverse spiral, together with a specific number of nulls to pad the start and end of a message. Rectangular Cardan grilles can be placed in four positions. The trellis or chessboard has only two positions, but it gave rise to a more sophisticated turning grille with four positions that can be rotated in two directions. Baron Edouard Fleissner von Wostrowitz, a retired Austrian cavalry colonel, described a variation on the chess board cipher in 1880, and his grilles were adopted by the German army during World War I. These grilles are often named after Fleissner, although he took his material largely from a German work, published in Tübingen in 1809, written by Klüber, who attributed this form of the grille to Cardano, as did Helen Fouché Gaines.[3] Bauer notes that grilles were used in the 18th century, for example in 1745 in the administration of the Dutch Stadthouder William IV.
Later, the mathematician C. F. Hindenburg studied turning grilles more systematically in 1796: "[they] are often called Fleissner grilles in ignorance of their historical origin." One form of the Fleissner (or Fleißner) grille makes 16 perforations in an 8x8 grid – 4 holes in each quadrant. If the squares in each quadrant are numbered 1 to 16, all 16 numbers must be used once only. This allows many variations in placing the apertures. The grille has four positions – North, East, South, West. Each position exposes 16 of the 64 squares. The encipherer places the grille on a sheet and writes the first 16 letters of the message. Then, turning the grille through 90 degrees, the second 16 are written, and so on until the grid is filled. It is possible to construct grilles of different dimensions; however, if the number of squares in one quadrant is odd, even if the total is an even number, one quadrant or section must contain an extra perforation. Illustrations of the Fleissner grille often take a 6x6 example for ease of space; the total number of apertures is then 9, so three quadrants contain 2 apertures and one quadrant must have 3. There is no standard pattern of apertures: they are created by the user, in accordance with the above description, with the intention of producing a good mix. The method gained wide recognition when Jules Verne used a turning grille as a plot device in his novel Mathias Sandorf, published in 1885. Verne had come across the idea in Fleissner's treatise Handbuch der Kryptographie, which appeared in 1881. Fleissner grilles were constructed in various sizes during World War I and were used by the German Army at the end of 1916.[4] Each grille had a different code name: 5x5 ANNA; 6x6 BERTA; 7x7 CLARA; 8x8 DORA; 9x9 EMIL; 10x10 FRANZ. Their security was weak, and they were withdrawn after four months. Another method of indicating the size of the grille in use was to insert a key code at the start of the cipher text: E = 5; F = 6 and so on.
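The turning-grille procedure just described (write through the holes, turn the grille 90 degrees, repeat until the grid is full) can be sketched for a small 4x4 grille. The hole positions below are an arbitrary illustrative choice, one per rotation orbit so that four turns expose all 16 cells, not a historical grille:

```python
def rotate_cw(hole, n):
    """Position of a grille hole after the grille is turned 90 degrees clockwise."""
    r, c = hole
    return (c, n - 1 - r)

N = 4
HOLES = [(0, 0), (0, 1), (0, 2), (1, 1)]  # one hole per rotation orbit of a 4x4 grid

def grille_encrypt(plaintext, holes=HOLES, n=N):
    assert len(plaintext) == n * n, "pad short messages with nulls"
    grid = [[''] * n for _ in range(n)]
    chars = iter(plaintext)
    for _ in range(4):                      # the four positions: N, E, S, W
        for r, c in sorted(holes):          # write through the holes in reading order
            grid[r][c] = next(chars)
        holes = [rotate_cw(h, n) for h in holes]
    return ''.join(ch for row in grid for ch in row)

def grille_decrypt(ciphertext, holes=HOLES, n=N):
    grid = [list(ciphertext[i * n:(i + 1) * n]) for i in range(n)]
    out = []
    for _ in range(4):                      # read through the holes in the same order
        for r, c in sorted(holes):
            out.append(grid[r][c])
        holes = [rotate_cw(h, n) for h in holes]
    return ''.join(out)
```

Because the four holes lie in distinct rotation orbits, every cell is written exactly once; a historical 8x8 DORA-style grille would use 16 holes, one from each of the 16 orbits.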
The grille can also be rotated in either direction, and the starting position does not need to be NORTH. Clearly the working method is by arrangement between sender and receiver and may be operated in accordance with a schedule. In the following examples, two cipher texts contain the same message. They are constructed from the example grille, beginning in the NORTH position, but one is formed by rotating the grille clockwise and the other anticlockwise. The ciphertext is then taken off the grid in horizontal lines – but it could equally be taken off vertically.

CLOCKWISE: ITIT ILOH GEHE TCDF LENS IIST FANB FSET EPES HENN URRE NEEN TRCG PR&I ODCT SLOE

ANTICLOCKWISE: LEIT CIAH GTHE TIDF LENB IIET FONS FSST URES NEDN EPRE HEEN TRTG PROI ONEC SL&C

In 1925 Luigi Sacco of the Italian Signals Corps began writing a book on ciphers which included reflections on the codes of the Great War, Nozioni di crittografia. He observed that Fleissner's method could be applied to a fractionating cipher, such as a Delastelle Bifid or Four-Square, with a considerable increase in security. Grille ciphers are also a useful device for transposing Chinese characters; they avoid the transcription of words into alphabetic or syllabic characters, to which other ciphers (for example, substitution ciphers) can be applied. After World War I, machine encryption made simple cipher devices obsolete, and grille ciphers fell into disuse except for amateur purposes. Yet grilles provided seed ideas for transposition ciphers that are reflected in modern cryptography. The unsolved D'Agapeyeff cipher, which was set as a challenge in 1939, contains 14x14 dinomes and might be based on Sacco's idea of transposing a fractionated cipher text by means of a grille. The distribution of grilles, an example of the difficult problem of key exchange, can be eased by taking a readily available third-party grid in the form of a newspaper crossword puzzle.
Although this is not strictly a grille cipher, it resembles the chessboard with the black squares shifted, and it can be used in the Cardan manner. The message text can be written horizontally in the white squares and the ciphertext taken off vertically, or vice versa.

CTATI ETTOL TTOEH RRHEI MUCKE SSEEL AUDUE RITSC VISCH NREHE LEERD DTOHS ESDNN LEWAC LEONT OIIEA RRSET LLPDR EIVYT ELTTD TOXEA E4TMI GIUOD PTRT1 ENCNE ABYMO NOEET EBCAL LUZIU TLEPT SIFNT ONUYK YOOOO

Again, following Sacco's observation, this method disrupts a fractionating cipher such as Seriated Playfair. Crosswords are also a possible source of keywords. A grid of the size illustrated has a word for each day of the month, the squares being numbered. The original Cardano grille was a literary device for gentlemen's private correspondence. Any suspicion of its use can lead to discoveries of hidden messages where no hidden messages exist at all, thus confusing the cryptanalyst. Letters and numbers in a random grid can take shape without substance. Obtaining the grille itself is a chief goal of the attacker. But all is not lost if a grille copy cannot be obtained. The later variants of the Cardano grille present problems which are common to all transposition ciphers. Frequency analysis will show a normal distribution of letters, and will suggest the language in which the plaintext was written.[5] The problem, easily stated though less easily accomplished, is to identify the transposition pattern and so decrypt the ciphertext. Possession of several messages written using the same grille is a considerable aid. Gaines, in her standard work on hand ciphers and their cryptanalysis, gave a lengthy account of transposition ciphers, and devoted a chapter to the turning grille.[3]
https://en.wikipedia.org/wiki/Grille_(cryptography)
In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation. With respect to computational resources, the asymptotic time complexity and asymptotic space complexity of computational algorithms and programs are commonly estimated. Other asymptotically estimated behaviors include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors. Since the ground-breaking 1965 paper by Juris Hartmanis and Richard E. Stearns[1] and the 1979 book by Michael Garey and David S. Johnson on NP-completeness,[2] the term "computational complexity" (of algorithms) has commonly referred to asymptotic computational complexity. Further, unless specified otherwise, the term "computational complexity" usually refers to the upper bound for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of the big O notation, e.g. O(n³). Other types of (asymptotic) computational complexity estimates are lower bounds ("big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "big Theta" notation; e.g., Θ(n log n)). A further tacit assumption is that the worst case analysis of computational complexity is in question unless stated otherwise. An alternative approach is probabilistic analysis of algorithms. In most practical cases deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.
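The upper-bound, lower-bound, and tight-estimate notions above can be made concrete by exhibiting witness constants for a specific function. The polynomial below is a made-up example, not one from the source:

```python
def f(n: int) -> int:
    # A hypothetical running-time function: f(n) = 3n^3 + 2n + 7.
    return 3 * n**3 + 2 * n + 7

# f is O(n^3): one constant C works for all n beyond a threshold n0 ...
C, n0 = 4, 10
assert all(f(n) <= C * n**3 for n in range(n0, 3000))

# ... and f is Omega(n^3) as well (the lower-order terms are positive),
# so the upper and lower bounds coincide: f = Theta(n^3).
assert all(f(n) >= 3 * n**3 for n in range(1, 3000))
```

The checks only sample a finite range, of course; the actual claims are proved by bounding the lower-order terms for all sufficiently large n.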
https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
In mathematics, an asymptotic expansion, asymptotic series or Poincaré expansion (after Henri Poincaré) is a formal series of functions which has the property that truncating the series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point. Investigations by Dingle (1973) revealed that the divergent part of an asymptotic expansion is latently meaningful, i.e. contains information about the exact value of the expanded function. The theory of asymptotic series was created by Poincaré (and independently by Stieltjes) in 1886.[1] The most common type of asymptotic expansion is a power series in either positive or negative powers. Methods of generating such expansions include the Euler–Maclaurin summation formula and integral transforms such as the Laplace and Mellin transforms. Repeated integration by parts will often lead to an asymptotic expansion. Since a convergent Taylor series fits the definition of asymptotic expansion as well, the phrase "asymptotic series" usually implies a non-convergent series. Despite non-convergence, the asymptotic expansion is useful when truncated to a finite number of terms. The approximation may provide benefits by being more mathematically tractable than the function being expanded, or by an increase in the speed of computation of the expanded function. Typically, the best approximation is given when the series is truncated at the smallest term. This way of optimally truncating an asymptotic expansion is known as superasymptotics.[2] The error is then typically of the form ~ exp(−c/ε), where ε is the expansion parameter. The error is thus beyond all orders in the expansion parameter. It is possible to improve on the superasymptotic error, e.g. by employing resummation methods such as Borel resummation to the divergent tail. Such methods are often referred to as hyperasymptotic approximations. See asymptotic analysis and big O notation for the notation used in this article.
First we define an asymptotic scale, and then give the formal definition of an asymptotic expansion. If φ_n is a sequence of continuous functions on some domain, and if L is a limit point of the domain, then the sequence constitutes an asymptotic scale if for every n,

φ_{n+1}(x) = o(φ_n(x))  as x → L.

(L may be taken to be infinity.) In other words, a sequence of functions is an asymptotic scale if each function in the sequence grows strictly slower (in the limit x → L) than the preceding function. If f is a continuous function on the domain of the asymptotic scale, then f has an asymptotic expansion of order N with respect to the scale as a formal series ∑_{n=0}^{N} a_n φ_n(x) if

f(x) − ∑_{n=0}^{N−1} a_n φ_n(x) = O(φ_N(x))  as x → L,

or the weaker condition

f(x) − ∑_{n=0}^{N−1} a_n φ_n(x) = o(φ_{N−1}(x))  as x → L

is satisfied. Here, o is the little o notation. If one or the other holds for all N, then we write[citation needed]

f(x) ∼ ∑_{n=0}^{∞} a_n φ_n(x)  as x → L.

In contrast to a convergent series for f, wherein the series converges for any fixed x in the limit N → ∞, one can think of the asymptotic series as converging for fixed N in the limit x → L (with L possibly infinite). Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. Thus, for example, one may start with the ordinary series

1/(1 − w) = ∑_{n=0}^{∞} w^n.

The expression on the left is valid on the entire complex plane w ≠ 1, while the right hand side converges only for |w| < 1. Multiplying by e^{−w/t} and integrating both sides yields

∫_0^∞ e^{−w/t}/(1 − w) dw = ∑_{n=0}^{∞} t^{n+1} ∫_0^∞ e^{−u} u^n du,

after the substitution u = w/t on the right hand side. The integral on the left hand side, understood as a Cauchy principal value, can be expressed in terms of the exponential integral. The integral on the right hand side may be recognized as the gamma function.
Evaluating both, one obtains the asymptotic expansion

e^{−1/t} Ei(1/t) = ∑_{n=0}^{∞} n! t^{n+1}.

Here, the right hand side is clearly not convergent for any non-zero value of t. However, by truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value of Ei(1/t) for sufficiently small t. Substituting x = −1/t and noting that Ei(x) = −E₁(−x) results in the asymptotic expansion given earlier in this article. Using integration by parts, we can obtain an explicit formula[3]

Ei(z) = (e^z / z) ( ∑_{k=0}^{n} k!/z^k + e_n(z) ),  where  e_n(z) ≡ (n+1)! z e^{−z} ∫_{−∞}^{z} e^t / t^{n+2} dt.

For any fixed z, the absolute value of the error term |e_n(z)| decreases, then increases. The minimum occurs at n ∼ |z|, at which point |e_n(z)| ≤ √(2π/|z|) e^{−|z|}. This bound is said to be "asymptotics beyond all orders". For a given asymptotic scale {φ_n(x)}, the asymptotic expansion of a function f(x) is unique.[4] That is, the coefficients {a_n} are uniquely determined in the following way:

a₀ = lim_{x→L} f(x)/φ₀(x)
a₁ = lim_{x→L} (f(x) − a₀ φ₀(x))/φ₁(x)
⋮
a_N = lim_{x→L} (f(x) − ∑_{n=0}^{N−1} a_n φ_n(x))/φ_N(x),

where L is the limit point of this asymptotic expansion (which may be ±∞).
A given function f(x) may have many asymptotic expansions (each with a different asymptotic scale).[4] An asymptotic expansion may be an asymptotic expansion to more than one function.[4]
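The "truncate at the smallest term" rule, and the minimum of the error term near n ∼ |z| noted above, can be checked numerically on the term magnitudes k!/z^k of the Ei expansion. The value z = 10 below is an arbitrary test value for this sketch:

```python
import math

z = 10.0
terms = [math.factorial(k) / z**k for k in range(30)]

# The term magnitudes fall, bottom out near k ~ |z|, then grow without bound:
# the smallest term marks the optimal (superasymptotic) truncation point.
k_min = min(range(len(terms)), key=lambda k: terms[k])
print(k_min, terms[k_min])
```

Since the ratio of consecutive terms is (k+1)/z, the terms shrink while k+1 < z and grow afterwards, which is exactly why the minimum sits near k ≈ |z|.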
https://en.wikipedia.org/wiki/Asymptotic_expansion
In computer science, an algorithm is said to be asymptotically optimal if, roughly speaking, for large inputs it performs at worst a constant factor (independent of the input size) worse than the best possible algorithm. It is a term commonly encountered in computer science research as a result of the widespread use of big-O notation. More formally, an algorithm is asymptotically optimal with respect to a particular resource if the problem has been proven to require Ω(f(n)) of that resource, and the algorithm has been proven to use only O(f(n)). These proofs require an assumption of a particular model of computation, i.e., certain restrictions on operations allowable with the input data. As a simple example, it is known that all comparison sorts require at least Ω(n log n) comparisons in the average and worst cases. Mergesort and heapsort are comparison sorts which perform O(n log n) comparisons, so they are asymptotically optimal in this sense. If the input data have some a priori properties which can be exploited in construction of algorithms, in addition to comparisons, then asymptotically faster algorithms may be possible. For example, if it is known that the N objects are integers from the range [1, N], then they may be sorted in O(N) time, e.g., by the bucket sort. A consequence of an algorithm being asymptotically optimal is that, for large enough inputs, no algorithm can outperform it by more than a constant factor. For this reason, asymptotically optimal algorithms are often seen as the "end of the line" in research, the attaining of a result that cannot be dramatically improved upon. Conversely, if an algorithm is not asymptotically optimal, this implies that as the input grows in size, the algorithm performs increasingly worse than the best possible algorithm. In practice it is useful to find algorithms that perform better, even if they do not enjoy any asymptotic advantage.
New algorithms may also present advantages such as better performance on specific inputs, decreased use of resources, or being simpler to describe and implement. Thus asymptotically optimal algorithms are not always the "end of the line". Although asymptotically optimal algorithms are important theoretical results, an asymptotically optimal algorithm might not be used in a number of practical situations. An example of an asymptotically optimal algorithm not used in practice is Bernard Chazelle's linear-time algorithm for triangulation of a simple polygon. Another is the resizable array data structure published in "Resizable Arrays in Optimal Time and Space",[1] which can index in constant time but on many machines carries a heavy practical penalty compared to ordinary array indexing. Formally, suppose that we have a lower-bound theorem showing that a problem requires Ω(f(n)) time to solve for an instance (input) of size n (see Big O notation § Big Omega notation for the definition of Ω). Then, an algorithm which solves the problem in O(f(n)) time is said to be asymptotically optimal. This can also be expressed using limits: suppose that b(n) is a lower bound on the running time, and a given algorithm takes time t(n). Then the algorithm is asymptotically optimal if:

lim_{n→∞} t(n)/b(n) < ∞.

This limit, if it exists, is always at least 1, as t(n) ≥ b(n). Although usually applied to time efficiency, an algorithm can be said to use asymptotically optimal space, random bits, number of processors, or any other resource commonly measured using big-O notation. Sometimes vague or implicit assumptions can make it unclear whether an algorithm is asymptotically optimal. For example, a lower bound theorem might assume a particular abstract machine model, as in the case of comparison sorts, or a particular organization of memory. By violating these assumptions, a new algorithm could potentially asymptotically outperform the lower bound and the "asymptotically optimal" algorithms.
The nonexistence of an asymptotically optimal algorithm is called speedup. Blum's speedup theorem shows that there exist artificially constructed problems with speedup. However, it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(nα(n)){\displaystyle O(n\alpha (n))} algorithm for finding minimum spanning trees, where α(n){\displaystyle \alpha (n)} is the very slowly growing inverse of the Ackermann function, but the best known lower bound is the trivial Ω(n){\displaystyle \Omega (n)}. Whether this algorithm is asymptotically optimal is unknown, and a resolution either way would likely be hailed as a significant result. Coppersmith and Winograd (1982) proved that matrix multiplication has a weak form of speed-up among a restricted class of algorithms (Strassen-type bilinear identities with lambda-computation).
https://en.wikipedia.org/wiki/Asymptotically_optimal_algorithm
The order in probability notation is used in probability theory and statistical theory in direct parallel to the big O notation that is standard in mathematics. Where the big O notation deals with the convergence of sequences or sets of ordinary numbers, the order in probability notation deals with convergence of sets of random variables, where convergence is in the sense of convergence in probability.[1] For a set of random variables Xn and a corresponding set of constants an (both indexed by n, which need not be discrete), the notation Xn = op(an) means that the set of values Xn/an converges to zero in probability as n approaches an appropriate limit. Equivalently, Xn = op(an) can be written as Xn/an = op(1), i.e. P(|Xn/an| ≥ ε) → 0 for every positive ε.[2] The notation Xn = Op(an) means that the set of values Xn/an is stochastically bounded. That is, for any ε > 0, there exists a finite M > 0 and a finite N > 0 such that P(|Xn/an| > M) < ε for all n > N. The difference between the definitions is subtle. If one uses the definition of the limit, one gets: Xn = op(1) means that for every ε > 0 and every δ > 0 there exists an N such that P(|Xn| > δ) < ε for all n > N, whereas Xn = Op(1) only requires that for every ε > 0 there exist some δε > 0 and some N such that P(|Xn| > δε) < ε for all n > N. The difference lies in the δ: for stochastic boundedness, it suffices that there exists one (arbitrarily large) δ to satisfy the inequality, and δ is allowed to be dependent on ε (hence the δε). On the other hand, for convergence, the statement has to hold not only for one, but for any (arbitrarily small) δ. In a sense, this means that the sequence must be bounded, with a bound that gets smaller as the sample size increases. This suggests that if a sequence is op(1), then it is Op(1), i.e. convergence in probability implies stochastic boundedness. But the reverse does not hold. If (Xn) is a stochastic sequence such that each element has finite variance, then Xn = Op(E(Xn) + √var(Xn)) (see Theorem 14.4-1 in Bishop et al.).
If, moreover, a_n^{-2} var(X_n) = var(a_n^{-1} X_n) is a null sequence for a sequence (a_n) of real numbers, then a_n^{-1}(X_n − E(X_n)) converges to zero in probability by Chebyshev's inequality, so X_n − E(X_n) = o_p(a_n).
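A quick simulation makes stochastic boundedness concrete. This sketch is our own illustration (not from the source): for Uniform(0,1) data, the standardized sample mean √n·(X̄ − 1/2) has variance 1/12 at every n, so a single bound M works uniformly in n, which is exactly the Op(1) property.

```python
import math
import random
import statistics

random.seed(0)

def standardized_means(n, trials=2000):
    """Draw `trials` realizations of sqrt(n) * (sample mean - 1/2)
    for samples of n Uniform(0,1) variables."""
    return [
        math.sqrt(n) * (statistics.fmean(random.random() for _ in range(n)) - 0.5)
        for _ in range(trials)
    ]

# One bound M works for every n: the hallmark of O_p(1).
M = 0.6  # about two standard deviations (sd = 1/sqrt(12) ~ 0.289)
for n in (10, 100, 1000):
    vals = standardized_means(n)
    frac_outside = sum(abs(v) > M for v in vals) / len(vals)
    print(n, frac_outside)  # stays small at every n
```

By contrast, X̄ − 1/2 itself is op(1): for any fixed δ, the fraction of samples with |X̄ − 1/2| > δ shrinks to zero as n grows.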
https://en.wikipedia.org/wiki/Big_O_in_probability_notation
Inmathematics, thelimit inferiorandlimit superiorof asequencecan be thought of aslimiting(that is, eventual and extreme) bounds on the sequence. They can be thought of in a similar fashion for afunction(seelimit of a function). For aset, they are theinfimum and supremumof the set'slimit points, respectively. In general, when there are multiple objects around which a sequence, function, or set accumulates, the inferior and superior limits extract the smallest and largest of them; the type of object and the measure of size is context-dependent, but the notion of extreme limits is invariant. Limit inferior is also calledinfimum limit,limit infimum,liminf,inferior limit,lower limit, orinner limit; limit superior is also known assupremum limit,limit supremum,limsup,superior limit,upper limit, orouter limit. The limit inferior of a sequence(xn){\displaystyle (x_{n})}is denoted bylim infn→∞xnorlim_n→∞⁡xn,{\displaystyle \liminf _{n\to \infty }x_{n}\quad {\text{or}}\quad \varliminf _{n\to \infty }x_{n},}and the limit superior of a sequence(xn){\displaystyle (x_{n})}is denoted bylim supn→∞xnorlim¯n→∞⁡xn.{\displaystyle \limsup _{n\to \infty }x_{n}\quad {\text{or}}\quad \varlimsup _{n\to \infty }x_{n}.} Thelimit inferiorof a sequence (xn) is defined bylim infn→∞xn:=limn→∞(infm≥nxm){\displaystyle \liminf _{n\to \infty }x_{n}:=\lim _{n\to \infty }\!{\Big (}\inf _{m\geq n}x_{m}{\Big )}}orlim infn→∞xn:=supn≥0infm≥nxm=sup{inf{xm:m≥n}:n≥0}.{\displaystyle \liminf _{n\to \infty }x_{n}:=\sup _{n\geq 0}\,\inf _{m\geq n}x_{m}=\sup \,\{\,\inf \,\{\,x_{m}:m\geq n\,\}:n\geq 0\,\}.} Similarly, thelimit superiorof (xn) is defined bylim supn→∞xn:=limn→∞(supm≥nxm){\displaystyle \limsup _{n\to \infty }x_{n}:=\lim _{n\to \infty }\!{\Big (}\sup _{m\geq n}x_{m}{\Big )}}orlim supn→∞xn:=infn≥0supm≥nxm=inf{sup{xm:m≥n}:n≥0}.{\displaystyle \limsup _{n\to \infty }x_{n}:=\inf _{n\geq 0}\,\sup _{m\geq n}x_{m}=\inf \,\{\,\sup \,\{\,x_{m}:m\geq n\,\}:n\geq 0\,\}.} Alternatively, the notationslim_n→∞⁡xn:=lim 
infn→∞xn{\displaystyle \varliminf _{n\to \infty }x_{n}:=\liminf _{n\to \infty }x_{n}}andlim¯n→∞⁡xn:=lim supn→∞xn{\displaystyle \varlimsup _{n\to \infty }x_{n}:=\limsup _{n\to \infty }x_{n}}are sometimes used. The limits superior and inferior can equivalently be defined using the concept of subsequential limits of the sequence(xn){\displaystyle (x_{n})}.[1]An elementξ{\displaystyle \xi }of theextended real numbersR¯{\displaystyle {\overline {\mathbb {R} }}}is asubsequential limitof(xn){\displaystyle (x_{n})}if there exists a strictly increasing sequence ofnatural numbers(nk){\displaystyle (n_{k})}such thatξ=limk→∞xnk{\displaystyle \xi =\lim _{k\to \infty }x_{n_{k}}}. IfE⊆R¯{\displaystyle E\subseteq {\overline {\mathbb {R} }}}is the set of all subsequential limits of(xn){\displaystyle (x_{n})}, then and If the terms in the sequence arereal numbers, the limit superior and limit inferior always exist, as the real numbers together with ±∞ (i.e. theextended real number line) arecomplete. More generally, these definitions make sense in anypartially ordered set, provided thesupremaandinfimaexist, such as in acomplete lattice. Whenever the ordinary limit exists, the limit inferior and limit superior are both equal to it; therefore, each can be considered a generalization of the ordinary limit which is primarily interesting in cases where the limit doesnotexist. Whenever lim infxnand lim supxnboth exist, we have The limits inferior and superior are related tobig-O notationin that they bound a sequence only "in the limit"; the sequence may exceed the bound. However, with big-O notation the sequence can only exceed the bound in a finite prefix of the sequence, whereas the limit superior of a sequence like e−nmay actually be less than all elements of the sequence. 
The only promise made is that some tail of the sequence can be bounded above by the limit superior plus an arbitrarily small positive constant, and bounded below by the limit inferior minus an arbitrarily small positive constant. The limit superior and limit inferior of a sequence are a special case of those of a function (see below). Inmathematical analysis, limit superior and limit inferior are important tools for studying sequences ofreal numbers. Since the supremum and infimum of anunbounded setof real numbers may not exist (the reals are not a complete lattice), it is convenient to consider sequences in theaffinely extended real number system: we add the positive and negative infinities to the real line to give the completetotally ordered set[−∞,∞], which is a complete lattice. Consider a sequence(xn){\displaystyle (x_{n})}consisting of real numbers. Assume that the limit superior and limit inferior are real numbers (so, not infinite). The relationship of limit inferior and limit superior for sequences of real numbers is as follows:lim supn→∞(−xn)=−lim infn→∞xn{\displaystyle \limsup _{n\to \infty }\left(-x_{n}\right)=-\liminf _{n\to \infty }x_{n}} As mentioned earlier, it is convenient to extendR{\displaystyle \mathbb {R} }to[−∞,∞].{\displaystyle [-\infty ,\infty ].}Then,(xn){\displaystyle \left(x_{n}\right)}in[−∞,∞]{\displaystyle [-\infty ,\infty ]}convergesif and only iflim infn→∞xn=lim supn→∞xn{\displaystyle \liminf _{n\to \infty }x_{n}=\limsup _{n\to \infty }x_{n}}in which caselimn→∞xn{\displaystyle \lim _{n\to \infty }x_{n}}is equal to their common value. (Note that when working just inR,{\displaystyle \mathbb {R} ,}convergence to−∞{\displaystyle -\infty }or∞{\displaystyle \infty }would not be considered as convergence.) 
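These definitions can be checked numerically on a finite prefix of a sequence. The sketch below (our own illustration) computes the tail infima inf_{m≥n} x_m and tail suprema sup_{m≥n} x_m for x_n = (−1)^n (1 + 1/n), whose limit inferior is −1 and limit superior is +1 even though the ordinary limit does not exist.

```python
def tail_bounds(xs):
    """For each index n, return inf_{m >= n} x_m and sup_{m >= n} x_m
    over the given finite prefix (one right-to-left pass)."""
    infs, sups = [0.0] * len(xs), [0.0] * len(xs)
    cur_inf, cur_sup = float("inf"), float("-inf")
    for i in range(len(xs) - 1, -1, -1):
        cur_inf = min(cur_inf, xs[i])
        cur_sup = max(cur_sup, xs[i])
        infs[i], sups[i] = cur_inf, cur_sup
    return infs, sups

xs = [(-1) ** n * (1 + 1 / n) for n in range(1, 10001)]
infs, sups = tail_bounds(xs)
# Tail infima increase toward liminf = -1; tail suprema decrease toward limsup = +1.
assert abs(infs[5000] + 1) < 1e-3 and abs(sups[5000] - 1) < 1e-3
assert infs[0] == -2.0   # inf over the whole sequence, attained at n = 1
```

The monotonicity of the two tail sequences is what guarantees that the liminf and limsup always exist in [−∞, ∞].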
Since the limit inferior is at most the limit superior, the following conditions holdlim infn→∞xn=∞implieslimn→∞xn=∞,lim supn→∞xn=−∞implieslimn→∞xn=−∞.{\displaystyle {\begin{alignedat}{4}\liminf _{n\to \infty }x_{n}&=\infty &&\;\;{\text{ implies }}\;\;\lim _{n\to \infty }x_{n}=\infty ,\\[0.3ex]\limsup _{n\to \infty }x_{n}&=-\infty &&\;\;{\text{ implies }}\;\;\lim _{n\to \infty }x_{n}=-\infty .\end{alignedat}}} IfI=lim infn→∞xn{\displaystyle I=\liminf _{n\to \infty }x_{n}}andS=lim supn→∞xn{\displaystyle S=\limsup _{n\to \infty }x_{n}}, then the interval[I,S]{\displaystyle [I,S]}need not contain any of the numbersxn,{\displaystyle x_{n},}but every slight enlargement[I−ϵ,S+ϵ],{\displaystyle [I-\epsilon ,S+\epsilon ],}for arbitrarily smallϵ>0,{\displaystyle \epsilon >0,}will containxn{\displaystyle x_{n}}for all but finitely many indicesn.{\displaystyle n.}In fact, the interval[I,S]{\displaystyle [I,S]}is the smallest closed interval with this property. We can formalize this property like this: there existsubsequencesxkn{\displaystyle x_{k_{n}}}andxhn{\displaystyle x_{h_{n}}}ofxn{\displaystyle x_{n}}(wherekn{\displaystyle k_{n}}andhn{\displaystyle h_{n}}are increasing) for which we havelim infn→∞xn+ϵ>xhnxkn>lim supn→∞xn−ϵ{\displaystyle \liminf _{n\to \infty }x_{n}+\epsilon >x_{h_{n}}\;\;\;\;\;\;\;\;\;x_{k_{n}}>\limsup _{n\to \infty }x_{n}-\epsilon } On the other hand, there exists an0∈N{\displaystyle n_{0}\in \mathbb {N} }so that for alln≥n0{\displaystyle n\geq n_{0}}lim infn→∞xn−ϵ<xn<lim supn→∞xn+ϵ{\displaystyle \liminf _{n\to \infty }x_{n}-\epsilon <x_{n}<\limsup _{n\to \infty }x_{n}+\epsilon } To recapitulate: Conversely, it can also be shown that: In general,infnxn≤lim infn→∞xn≤lim supn→∞xn≤supnxn.{\displaystyle \inf _{n}x_{n}\leq \liminf _{n\to \infty }x_{n}\leq \limsup _{n\to \infty }x_{n}\leq \sup _{n}x_{n}.}The liminf and limsup of a sequence are respectively the smallest and greatestcluster points.[3] Analogously, the limit inferior 
satisfiessuperadditivity:lim infn→∞(an+bn)≥lim infn→∞an+lim infn→∞bn.{\displaystyle \liminf _{n\to \infty }\,(a_{n}+b_{n})\geq \liminf _{n\to \infty }a_{n}+\ \liminf _{n\to \infty }b_{n}.}In the particular case that one of the sequences actually converges, sayan→a,{\displaystyle a_{n}\to a,}then the inequalities above become equalities (withlim supn→∞an{\displaystyle \limsup _{n\to \infty }a_{n}}orlim infn→∞an{\displaystyle \liminf _{n\to \infty }a_{n}}being replaced bya{\displaystyle a}). hold whenever the right-hand side is not of the form0⋅∞.{\displaystyle 0\cdot \infty .} Iflimn→∞an=A{\displaystyle \lim _{n\to \infty }a_{n}=A}exists (including the caseA=+∞{\displaystyle A=+\infty }) andB=lim supn→∞bn,{\displaystyle B=\limsup _{n\to \infty }b_{n},}thenlim supn→∞(anbn)=AB{\displaystyle \limsup _{n\to \infty }\left(a_{n}b_{n}\right)=AB}provided thatAB{\displaystyle AB}is not of the form0⋅∞.{\displaystyle 0\cdot \infty .} Assume that a function is defined from asubsetof the real numbers to the real numbers. As in the case for sequences, the limit inferior and limit superior are always well-defined if we allow the values +∞ and −∞; in fact, if both agree then the limit exists and is equal to their common value (again possibly including the infinities). For example, givenf(x)=sin⁡(1/x){\displaystyle f(x)=\sin(1/x)}, we havelim supx→0f(x)=1{\displaystyle \limsup _{x\to 0}f(x)=1}andlim infx→0f(x)=−1{\displaystyle \liminf _{x\to 0}f(x)=-1}. The difference between the two is a rough measure of how "wildly" the function oscillates, and in observation of this fact, it is called theoscillationoffat 0. This idea of oscillation is sufficient to, for example, characterizeRiemann-integrablefunctions ascontinuousexcept on a set ofmeasure zero.[5]Note that points of nonzero oscillation (i.e., points at whichfis "badly behaved") are discontinuities which, unless they make up a set of zero, are confined to a negligible set. 
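The oscillation at a point can be estimated numerically by taking the sup and inf over shrinking punctured balls. This is a rough illustration of ours: dense sampling stands in for the true supremum, which is adequate for a function like sin(1/x) that oscillates through its full range on every neighborhood of 0.

```python
import math

def ball_sup_inf(f, a, eps, samples=100000):
    """Approximate sup and inf of f over the punctured ball 0 < |x - a| <= eps
    by sampling densely on both sides of a."""
    vals = []
    for i in range(1, samples + 1):
        h = eps * i / samples
        vals.append(f(a + h))
        vals.append(f(a - h))
    return max(vals), min(vals)

# f(x) = sin(1/x) oscillates ever faster near 0: over every small ball the
# sup is ~ +1 and the inf is ~ -1, so limsup = 1, liminf = -1, and the
# oscillation of f at 0 is 2, even though f is continuous everywhere else.
for eps in (0.1, 0.01, 0.001):
    hi, lo = ball_sup_inf(lambda x: math.sin(1 / x), 0.0, eps)
    print(eps, round(hi, 4), round(lo, 4))  # approaches 1.0 and -1.0
```

For a function continuous at a, the same computation shows the sup and inf both collapsing onto f(a) as eps shrinks, i.e. zero oscillation.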
There is a notion of limsup and liminf for functions defined on ametric spacewhose relationship to limits of real-valued functions mirrors that of the relation between the limsup, liminf, and the limit of a real sequence. Take a metric spaceX{\displaystyle X}, a subspaceE{\displaystyle E}contained inX{\displaystyle X}, and a functionf:E→R{\displaystyle f:E\to \mathbb {R} }. Define, for anylimit pointa{\displaystyle a}ofE{\displaystyle E}, lim supx→af(x)=limε→0(sup{f(x):x∈E∩B(a,ε)∖{a}}){\displaystyle \limsup _{x\to a}f(x)=\lim _{\varepsilon \to 0}\left(\sup \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right)}and lim infx→af(x)=limε→0(inf{f(x):x∈E∩B(a,ε)∖{a}}){\displaystyle \liminf _{x\to a}f(x)=\lim _{\varepsilon \to 0}\left(\inf \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right)} whereB(a,ε){\displaystyle B(a,\varepsilon )}denotes themetric ballof radiusε{\displaystyle \varepsilon }abouta{\displaystyle a}. Note that asεshrinks, the supremum of the function over the ball isnon-increasing(strictly decreasing or remaining the same), so we have lim supx→af(x)=infε>0(sup{f(x):x∈E∩B(a,ε)∖{a}}){\displaystyle \limsup _{x\to a}f(x)=\inf _{\varepsilon >0}\left(\sup \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right)}and similarlylim infx→af(x)=supε>0(inf{f(x):x∈E∩B(a,ε)∖{a}}).{\displaystyle \liminf _{x\to a}f(x)=\sup _{\varepsilon >0}\left(\inf \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right).} This finally motivates the definitions for generaltopological spaces. TakeX,Eandaas before, but now letXbe a topological space. In this case, we replace metric balls withneighborhoods: (there is a way to write the formula using "lim" usingnetsand theneighborhood filter). This version is often useful in discussions ofsemi-continuitywhich crop up in analysis quite often. 
An interesting note is that this version subsumes the sequential version by considering sequences as functions from the natural numbers as a topological subspace of the extended real line, into the space (the closure of N in [−∞, ∞], the extended real number line, is N ∪ {∞}). The power set ℘(X) of a set X is a complete lattice that is ordered by set inclusion, and so the supremum and infimum of any set of subsets (in terms of set inclusion) always exist. In particular, every subset Y of X is bounded above by X and below by the empty set ∅ because ∅ ⊆ Y ⊆ X. Hence, it is possible (and sometimes useful) to consider superior and inferior limits of sequences in ℘(X) (i.e., sequences of subsets of X). There are two common ways to define the limit of sequences of sets. In both cases: The difference between the two definitions involves how the topology (i.e., how to quantify separation) is defined. In fact, the second definition is identical to the first when the discrete metric is used to induce the topology on X. A sequence of sets in a metrizable space X approaches a limiting set when the elements of each member of the sequence approach the elements of the limiting set. In particular, if (X_n) is a sequence of subsets of X, then: The limit lim X_n exists if and only if lim inf X_n and lim sup X_n agree, in which case lim X_n = lim sup X_n = lim inf X_n.[6] The outer and inner limits should not be confused with the set-theoretic limits superior and inferior, as the latter sets are not sensitive to the topological structure of the space. This is the definition used in measure theory and probability. Further discussion and examples from the set-theoretic point of view, as opposed to the topological point of view discussed below, are at set-theoretic limit.
By this definition, a sequence of sets approaches a limiting set when the limiting set includes elements which are in all except finitely many sets of the sequence and does not include elements which are in all except finitely many complements of sets of the sequence. That is, this case specializes the general definition when the topology on the set X is induced from the discrete metric. Specifically, for points x, y ∈ X, the discrete metric is defined by d(x, y) = 0 if x = y and d(x, y) = 1 otherwise, under which a sequence of points (x_k) converges to a point x ∈ X if and only if x_k = x for all but finitely many k. Therefore, if the limit set exists it contains the points and only the points which are in all except finitely many of the sets of the sequence. Since convergence in the discrete metric is the strictest form of convergence (i.e., requires the most), this definition of a limit set is the strictest possible. If (X_n) is a sequence of subsets of X, then the following always exist: lim inf X_n = ∪_{n≥1} ∩_{m≥n} X_m and lim sup X_n = ∩_{n≥1} ∪_{m≥n} X_m. Observe that x ∈ lim sup X_n if and only if x ∉ lim inf X_n^c. In this sense, the sequence has a limit so long as every point in X either appears in all except finitely many X_n or appears in all except finitely many X_n^c.[7] Using the standard parlance of set theory, set inclusion provides a partial ordering on the collection of all subsets of X that allows set intersection to generate a greatest lower bound and set union to generate a least upper bound. Thus, the infimum or meet of a collection of subsets is the greatest lower bound while the supremum or join is the least upper bound. In this context, the inner limit, lim inf X_n, is the largest meeting of tails of the sequence, and the outer limit, lim sup X_n, is the smallest joining of tails of the sequence. The following makes this precise. The following are several set convergence examples. They have been broken into sections with respect to the metric used to induce the topology on the set X. The above definitions are inadequate for many technical applications.
In fact, the definitions above are specializations of the following definitions. The limit inferior of a setX⊆Yis theinfimumof all of thelimit pointsof the set. That is, Similarly, the limit superior ofXis thesupremumof all of the limit points of the set. That is, Note that the setXneeds to be defined as a subset of apartially ordered setYthat is also atopological spacein order for these definitions to make sense. Moreover, it has to be acomplete latticeso that the suprema and infima always exist. In that case every set has a limit superior and a limit inferior. Also note that the limit inferior and the limit superior of a set do not have to be elements of the set. Take atopological spaceXand afilter baseBin that space. The set of allcluster pointsfor that filter base is given by whereB¯0{\displaystyle {\overline {B}}_{0}}is theclosureofB0{\displaystyle B_{0}}. This is clearly aclosed setand is similar to the set of limit points of a set. Assume thatXis also apartially ordered set. The limit superior of the filter baseBis defined as when that supremum exists. WhenXhas atotal order, is acomplete latticeand has theorder topology, Similarly, the limit inferior of the filter baseBis defined as when that infimum exists; ifXis totally ordered, is a complete lattice, and has the order topology, then If the limit inferior and limit superior agree, then there must be exactly one cluster point and the limit of the filter base is equal to this unique cluster point. Note that filter bases are generalizations ofnets, which are generalizations ofsequences. Therefore, these definitions give the limit inferior andlimit superiorof any net (and thus any sequence) as well. For example, take topological spaceX{\displaystyle X}and the net(xα)α∈A{\displaystyle (x_{\alpha })_{\alpha \in A}}, where(A,≤){\displaystyle (A,{\leq })}is adirected setandxα∈X{\displaystyle x_{\alpha }\in X}for allα∈A{\displaystyle \alpha \in A}. 
The filter base ("of tails") generated by this net isB{\displaystyle B}defined by Therefore, the limit inferior and limit superior of the net are equal to the limit superior and limit inferior ofB{\displaystyle B}respectively. Similarly, for topological spaceX{\displaystyle X}, take the sequence(xn){\displaystyle (x_{n})}wherexn∈X{\displaystyle x_{n}\in X}for anyn∈N{\displaystyle n\in \mathbb {N} }. The filter base ("of tails") generated by this sequence isC{\displaystyle C}defined by Therefore, the limit inferior and limit superior of the sequence are equal to the limit superior and limit inferior ofC{\displaystyle C}respectively.
https://en.wikipedia.org/wiki/Limit_inferior_and_limit_superior
In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis for many recurrence relations that occur in the analysis of divide-and-conquer algorithms. The approach was first presented by Jon Bentley, Dorothea Blostein (née Haken), and James B. Saxe in 1980, where it was described as a "unifying method" for solving such recurrences.[1] The name "master theorem" was popularized by the widely used algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein. Not all recurrence relations can be solved by this theorem; its generalizations include the Akra–Bazzi method. Consider a problem that can be solved using a recursive algorithm such as the following: procedure p(input x of size n): if n < some constant k, solve x directly without recursion; otherwise, create a subproblems of x, each of size n/b, call procedure p recursively on each subproblem, and combine the results from the subproblems. The above algorithm divides the problem into a number (a) of subproblems recursively, each subproblem being of size n/b. The factor by which the size of subproblems is reduced (b) need not, in general, be the same as the number of subproblems (a). Its solution tree has a node for each recursive call, with the children of that node being the other calls made from that call. The leaves of the tree are the base cases of the recursion, the subproblems (of size less than k) that do not recurse. The above example would have a child nodes at each non-leaf node. Each node does an amount of work that corresponds to the size of the subproblem n passed to that instance of the recursive call and given by f(n). The total amount of work done by the entire algorithm is the sum of the work performed by all the nodes in the tree. The runtime of an algorithm such as the procedure p above on an input of size n, usually denoted T(n), can be expressed by the recurrence relation T(n) = a T(n/b) + f(n), where f(n) is the time to create the subproblems and combine their results in the above procedure.
This equation can be successively substituted into itself and expanded to obtain an expression for the total amount of work done.[2] The master theorem allows many recurrence relations of this form to be converted to Θ-notation directly, without doing an expansion of the recursive relation. The master theorem always yields asymptotically tight bounds to recurrences from divide and conquer algorithms that partition an input into smaller subproblems of equal sizes, solve the subproblems recursively, and then combine the subproblem solutions to give a solution to the original problem. The time for such an algorithm can be expressed by adding the work that they perform at the top level of their recursion (to divide the problems into subproblems and then combine the subproblem solutions) together with the time made in the recursive calls of the algorithm. If T(n) denotes the total time for the algorithm on an input of size n, and f(n) denotes the amount of time taken at the top level of the recurrence, then the time can be expressed by a recurrence relation that takes the form T(n) = a T(n/b) + f(n). Here n is the size of an input problem, a is the number of subproblems in the recursion, and b is the factor by which the subproblem size is reduced in each recursive call (b > 1). Crucially, a and b must not depend on n. The theorem below also assumes that, as a base case for the recurrence, T(n) = Θ(1) when n is less than some bound κ > 0, the smallest input size that will lead to a recursive call. Recurrences of this form often satisfy one of the three following regimes, based on how the work to split/recombine the problem f(n) relates to the critical exponent c_crit = log_b a.
Throughout, (log n)^k is written for clarity, though in textbooks this is usually rendered log^k n; standard big O notation is used below. Case 1: f(n) = O(n^c) for some c < c_crit, i.e. the splitting work is upper-bounded by a lesser-exponent polynomial and the recursion tree is leaf-heavy. Then T(n) = Θ(n^{c_crit}): the splitting term does not appear, and the recursive tree structure dominates. Case 2: f(n) = Θ(n^{c_crit} (log n)^k) for some k ≥ 0, i.e. the splitting work is range-bound by the critical-exponent polynomial, times zero or more optional logs. Then T(n) = Θ(n^{c_crit} (log n)^{k+1}): the bound is the splitting term, where the log is augmented by a single power. For example, if b = a^2 and f(n) = Θ(n^{1/2} log n), then T(n) = Θ(n^{1/2} (log n)^2). Case 3: f(n) = Ω(n^c) for some c > c_crit, i.e. the splitting work is lower-bounded by a greater-exponent polynomial and the recursion tree is root-heavy. If, in addition, the regularity condition a f(n/b) ≤ k f(n) holds for some constant k < 1 and all sufficiently large n, then the total is dominated by the splitting term f(n): T(n) = Θ(f(n)). A useful extension of Case 2 handles all real values of k:[3] if k > −1, then T(n) = Θ(n^{c_crit} (log n)^{k+1}) (the bound is the splitting term, where the log is augmented by a single power); if k = −1, then T(n) = Θ(n^{c_crit} log log n) (the bound is the splitting term, where the log reciprocal is replaced by an iterated log); if k < −1, then T(n) = Θ(n^{c_crit}) (the bound is the splitting term, where the log disappears). As a first example, consider T(n) = 8T(n/2) + 1000n^2. As one can see from the formula, the variables get the values a = 8, b = 2, f(n) = 1000n^2, and so c_crit = log_2 8 = 3. Next, we see that we satisfy the case 1 condition: f(n) = O(n^c) with c = 2 < c_crit. It follows from the first case of the master theorem that T(n) = Θ(n^{c_crit}) = Θ(n^3). (This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 1001n^3 − 1000n^2, assuming T(1) = 1.) As a second example, consider T(n) = 2T(n/2) + 10n. As we can see in the formula, the variables get the values a = 2, b = 2, f(n) = 10n, and so c_crit = log_2 2 = 1. Next, we see that we satisfy the case 2 condition: f(n) = Θ(n^{c_crit} (log n)^0). So it follows from the second case of the master theorem that the given recurrence relation T(n) is in Θ(n^{c_crit} log n) = Θ(n log n).
(This result is confirmed by the exact solution of the recurrence relation, which is T(n) = n + 10n log_2 n, assuming T(1) = 1.) As a third example, consider T(n) = 2T(n/2) + n^2. As we can see in the formula, the variables get the values a = 2, b = 2, f(n) = n^2, and so c_crit = log_2 2 = 1. Next, we see that we satisfy the case 3 condition: f(n) = Ω(n^c) with c = 2 > c_crit. The regularity condition also holds: 2(n/2)^2 = n^2/2 ≤ k n^2 with k = 1/2 < 1. So it follows from the third case of the master theorem that T(n) = Θ(f(n)) = Θ(n^2), which complies with the f(n) of the original formula. (This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 2n^2 − n, assuming T(1) = 1.) Some recurrences cannot be solved using the master theorem.[4] One such inadmissible example is T(n) = 2T(n/2) + n/log n. Here the difference between f(n) and n^{log_b a} can be expressed with the ratio f(n)/n^{log_b a} = (n/log n)/n^{log_2 2} = n/(n log n) = 1/log n. It is clear that 1/log n < n^ε for any constant ε > 0. Therefore, the difference is not polynomial and the basic form of the master theorem does not apply. The extended form (case 2b, with k = −1) does apply, giving the solution T(n) = Θ(n log log n).
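The three regimes, including the extended form of case 2, can be encoded in a few lines. The sketch below is our own helper (not from the source): it returns the pair (p, q) such that T(n) = Θ(n^p (log n)^q), and its assertions reproduce the worked examples above.

```python
import math

def master_theorem(a, b, c, k=0.0):
    """Classify T(n) = a*T(n/b) + Theta(n^c * (log n)^k) by the master theorem.

    Returns (p, q) meaning T(n) = Theta(n^p * (log n)^q).
    Case 3 additionally assumes the regularity condition holds.
    """
    crit = math.log2(a) / math.log2(b)       # critical exponent log_b(a)
    if abs(c - crit) < 1e-12:                # case 2 (extended form)
        if k > -1:
            return (crit, k + 1)             # gain one log factor
        if k == -1:
            raise ValueError("Theta(n^crit * log log n): not an (p, q) pair")
        return (crit, 0.0)                   # k < -1: the log disappears
    if c < crit:
        return (crit, 0.0)                   # case 1: leaf-heavy
    return (c, k)                            # case 3: root-heavy

assert master_theorem(8, 2, 2) == (3.0, 0.0)   # 8T(n/2) + 1000n^2 -> Theta(n^3)
assert master_theorem(2, 2, 1) == (1.0, 1.0)   # 2T(n/2) + 10n     -> Theta(n log n)
assert master_theorem(2, 2, 2) == (2.0, 0.0)   # 2T(n/2) + n^2     -> Theta(n^2)
```

The k = −1 branch raising an error mirrors the fact that Θ(n^{c_crit} log log n) cannot be written as a pure power-times-log bound.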
https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
In mathematics, in the area of complex analysis, Nachbin's theorem (named after Leopoldo Nachbin) is a result used to establish bounds on the growth rates for analytic functions. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, also called Nachbin summation. This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type helps provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated. A function f(z) defined on the complex plane is said to be of exponential type if there exist constants M and α such that |f(re^{iθ})| ≤ M e^{αr} in the limit of r → ∞. Here, the complex variable z was written as z = re^{iθ} to emphasize that the limit must hold in all directions θ. Letting α stand for the infimum of all such α, one then says that the function f is of exponential type α. For example, let f(z) = sin(πz). Then one says that sin(πz) is of exponential type π, since π is the smallest number that bounds the growth of sin(πz) along the imaginary axis. So, for this example, Carlson's theorem cannot apply, as it requires functions of exponential type less than π. Additional function types may be defined for other bounding functions besides the exponential function. In general, a function Ψ(t) is a comparison function if it has a series Ψ(t) = Σ_{n=0}^∞ Ψ_n t^n with Ψ_n > 0 for all n, and lim_{n→∞} Ψ_{n+1}/Ψ_n = 0. Comparison functions are necessarily entire, which follows from the ratio test.
If Ψ(t) is such a comparison function, one then says that f is of Ψ-type if there exist constants M and τ such that |f(re^{iθ})| ≤ M Ψ(τr) as r → ∞. If τ is the infimum of all such τ, one says that f is of Ψ-type τ. Nachbin's theorem states that a function f(z) with the series f(z) = Σ_{n=0}^∞ a_n z^n is of Ψ-type τ if and only if lim sup_{n→∞} |a_n/Ψ_n|^{1/n} = τ. This is naturally connected to the root test and can be considered a relative of the Cauchy–Hadamard theorem. Nachbin's theorem has immediate applications in Cauchy theorem-like situations, and for integral transforms. For example, the generalized Borel transform is given by F(w) = Σ_{n=0}^∞ (a_n/Ψ_n) w^{−n−1}. If f is of Ψ-type τ, then the exterior of the domain of convergence of F(w), and all of its singular points, are contained within the disk |w| ≤ τ. Furthermore, one has f(z) = (1/2πi) ∮_γ Ψ(zw) F(w) dw, where the contour of integration γ encircles the disk |w| ≤ τ. This generalizes the usual Borel transform for functions of exponential type, where Ψ(t) = e^t. The integral form for the generalized Borel transform follows as well. Let α(t) be a function whose first derivative is bounded on the interval [0, ∞) and that satisfies the defining equation relating it to Ψ, where dα(t) = α′(t) dt. Then the integral form of the generalized Borel transform can be written in terms of α. The ordinary Borel transform is regained by setting α(t) = −e^{−t}. Note that the integral form of the Borel transform is the Laplace transform.
Nachbin summation can be used to sum divergent series that Borel summation does not, for instance to asymptotically solve integral equations of the form g(s) = s ∫_0^∞ K(st) f(t) dt, where g(s) = Σ_{n=0}^∞ a_n s^{−n}, f(t) may or may not be of exponential type, and the kernel K(u) has a Mellin transform. The solution can be obtained using Nachbin summation as f(x) = Σ_{n=0}^∞ (a_n/M(n+1)) x^n, with the a_n from g(s) and with M(n) the Mellin transform of K(u). An example of this is the Gram series π(x) ≈ 1 + Σ_{n=1}^∞ log^n(x)/(n · n! ζ(n+1)). In some cases, as an extra condition, we require ∫_0^∞ K(t) t^n dt to be finite and nonzero for n = 0, 1, 2, 3, .... Collections of functions of exponential type τ can form a complete uniform space, namely a Fréchet space, by the topology induced by the countable family of norms.
https://en.wikipedia.org/wiki/Nachbin%27s_theorem
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.[1] As a differentiable function of a complex variable is equal to the sum function given by its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions. The concept can be extended to functions of several complex variables. Complex analysis is contrasted with real analysis, which deals with the study of real numbers and functions of a real variable. Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory, which examines conformal invariants in quantum field theory. A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain.
Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane. For any complex function, the values z from the domain and their images f(z) in the range may be separated into real and imaginary parts: z = x + iy and f(z) = f(x + iy) = u(x, y) + i v(x, y), where x, y, u(x, y), v(x, y) are all real-valued. In other words, a complex function f : ℂ → ℂ may be decomposed into two real-valued functions (u, v) of two real variables (x, y). Similarly, any complex-valued function f on an arbitrary set X can be considered as an ordered pair of two real-valued functions (Re f, Im f) or, alternatively, as a vector-valued function from X into ℝ². Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domain (if the domains are connected). The latter property is the basis of the principle of analytic continuation, which allows extending every real analytic function in a unique way to a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.
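The decomposition into real and imaginary parts is concrete for a specific function: for f(z) = z², one has u(x, y) = x² − y² and v(x, y) = 2xy. A short Python check (function names are ours):

```python
# Decompose f(z) = z^2 into its real and imaginary components:
# (x + iy)^2 = (x^2 - y^2) + i(2xy), so u = x^2 - y^2 and v = 2xy.
def f(z):
    return z * z

def u(x, y):
    return x * x - y * y

def v(x, y):
    return 2 * x * y

z = complex(1.5, -0.7)
w = f(z)
assert abs(w.real - u(z.real, z.imag)) < 1e-12
assert abs(w.imag - v(z.real, z.imag)) < 1e-12
```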
Complex functions that are differentiable at every point of an open subset Ω of the complex plane are said to be holomorphic on Ω. In the context of complex analysis, the derivative of f at z₀ is defined to be[2] f′(z₀) = lim_{z→z₀} (f(z) − f(z₀))/(z − z₀). Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z₀ in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on Ω can be approximated arbitrarily well by polynomials in some neighborhood of every point in Ω. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see Non-analytic smooth function § A smooth function which is nowhere real analytic.
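The path-dependence of the difference quotient can be seen numerically: for f(z) = z̄ the quotient at 0 is +1 along the real axis but −1 along the imaginary axis, so the complex derivative does not exist, whereas for g(z) = z² the quotient agrees in every direction. A Python sketch (names are ours):

```python
# Difference quotients (f(z0 + dz) - f(z0)) / dz along different directions.
h = 1e-6

def quotient(f, z0, direction):
    dz = direction * h
    return (f(z0 + dz) - f(z0)) / dz

conj = lambda z: z.conjugate()   # not complex-differentiable anywhere
sq = lambda z: z * z             # holomorphic everywhere

q_real = quotient(conj, 0j, 1 + 0j)   # along the real axis: 1
q_imag = quotient(conj, 0j, 1j)       # along the imaginary axis: -1
assert abs(q_real - 1) < 1e-9 and abs(q_imag + 1) < 1e-9

# For z^2 the two directions give (nearly) the same value, 2*z0:
assert abs(quotient(sq, 1 + 1j, 1 + 0j) - quotient(sq, 1 + 1j, 1j)) < 1e-5
```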
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions ℂ → ℂ, are holomorphic over the entire complex plane, making them entire functions, while rational functions p/q, where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ Re(z), z ↦ |z|, and z ↦ z̄ are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below). An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : ℂ → ℂ, defined by f(z) = f(x + iy) = u(x, y) + i v(x, y), where x, y, u(x, y), v(x, y) ∈ ℝ, is holomorphic on a region Ω, then for all z₀ ∈ Ω, ∂f/∂z̄ (z₀) = 0. In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations uₓ = v_y and u_y = −vₓ, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions without additional continuity conditions (see the Looman–Menchoff theorem). Holomorphic functions exhibit some remarkable features.
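The Cauchy–Riemann equations uₓ = v_y and u_y = −vₓ can be checked with central differences: they hold for the holomorphic function exp(z) and fail for z̄. A numerical Python sketch (names and step size are ours):

```python
import cmath

# Residual |u_x - v_y| + |u_y + v_x| of the Cauchy-Riemann equations,
# estimated with central differences of step h.
h = 1e-6

def cr_residual(f, x, y):
    ux = (f(complex(x + h, y)).real - f(complex(x - h, y)).real) / (2 * h)
    uy = (f(complex(x, y + h)).real - f(complex(x, y - h)).real) / (2 * h)
    vx = (f(complex(x + h, y)).imag - f(complex(x - h, y)).imag) / (2 * h)
    vy = (f(complex(x, y + h)).imag - f(complex(x, y - h)).imag) / (2 * h)
    return abs(ux - vy) + abs(uy + vx)

assert cr_residual(cmath.exp, 0.3, -1.2) < 1e-6          # holomorphic: residual ~ 0
assert cr_residual(lambda z: z.conjugate(), 0.3, -1.2) > 1.0   # conj fails (residual = 2)
```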
For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: ℂ, ℂ ∖ {z₀}, or {z₀} for some z₀ ∈ ℂ. In other words, if two distinct complex numbers z and w are not in the range of an entire function f, then f is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset. In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths. More formally, let U and V be open subsets of ℝⁿ. A function f : U → V is called conformal (or angle-preserving) at a point u₀ ∈ U if it preserves angles between directed curves through u₀, as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature. The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix.[3] For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types. One of the central tools in complex analysis is the line integral.
The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues, among others, is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of better-understood functions, such as polynomials. A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra, which states that the field of complex numbers is algebraically closed. If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains, to almost the entire complex plane.
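The residue theorem can be illustrated numerically: integrating f(z) = 1/z around the unit circle yields 2πi, i.e. 2πi times the residue of 1/z at 0. A Python sketch (discretization choices are ours):

```python
import cmath
import math

# Approximate the contour integral of f around the circle |z| = radius by
# summing f(midpoint) * (segment endpoint difference) over small arcs.
def contour_integral(f, radius=1.0, steps=10_000):
    total = 0j
    for k in range(steps):
        t0 = 2 * math.pi * k / steps
        t1 = 2 * math.pi * (k + 1) / steps
        zm = radius * cmath.exp(1j * (t0 + t1) / 2)   # midpoint of the arc
        total += f(zm) * (radius * cmath.exp(1j * t1) - radius * cmath.exp(1j * t0))
    return total

# By the residue theorem: ∮ dz/z = 2*pi*i * Res(1/z; 0) = 2*pi*i.
result = contour_integral(lambda z: 1 / z)
assert abs(result - 2j * math.pi) < 1e-6
```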
Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane, but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface. All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension, in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions. A major application of certain complex spaces is in quantum mechanics as wave functions.
https://en.wikipedia.org/wiki/Complex_analytic
In mathematics, an integral transform is a type of transform that maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform. An integral transform is any transform T of the following form: (Tf)(u) = ∫_{t₁}^{t₂} f(t) K(t, u) dt. The input of this transform is a function f, and the output is another function Tf. An integral transform is a particular kind of mathematical operator. There are numerous useful integral transforms. Each is specified by a choice of the function K of two variables, which is called the kernel or nucleus of the transform. Some kernels have an associated inverse kernel K⁻¹(u, t) which (roughly speaking) yields an inverse transform: f(t) = ∫_{u₁}^{u₂} (Tf)(u) K⁻¹(u, t) du. A symmetric kernel is one that is unchanged when the two variables are permuted; it is a kernel function K such that K(t, u) = K(u, t). In the theory of integral equations, symmetric kernels correspond to self-adjoint operators.[1] There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" into another domain, in which manipulating and solving the equation may be much easier than in the original domain. The solution can then be mapped back to the original domain with the inverse of the integral transform. There are many applications of probability that rely on integral transforms, such as the "pricing kernel" or stochastic discount factor, or the smoothing of data recovered from robust statistics; see kernel (statistics). The precursor of the transforms were the Fourier series to express functions in finite intervals.
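The general form (Tf)(u) = ∫ f(t) K(t, u) dt can be sketched directly in code: with the kernel K(t, u) = e^{−ut} it reduces to the Laplace transform, and for f(t) = e^{−t} one should recover 1/(u + 1). A Python sketch (names, limits and step counts are our choices):

```python
import math

# Generic integral transform (Tf)(u) = ∫_{t1}^{t2} f(t) K(t, u) dt,
# approximated by a midpoint Riemann sum on a truncated interval.
def integral_transform(f, K, t1=0.0, t2=50.0, steps=100_000):
    h = (t2 - t1) / steps
    def Tf(u):
        return sum(f(t1 + (k + 0.5) * h) * K(t1 + (k + 0.5) * h, u)
                   for k in range(steps)) * h
    return Tf

# Laplace kernel K(t, u) = exp(-u*t); for f(t) = exp(-t), (Tf)(u) = 1/(u + 1).
laplace = integral_transform(lambda t: math.exp(-t),
                             lambda t, u: math.exp(-u * t))
assert abs(laplace(2.0) - 1 / 3) < 1e-5
```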
Later the Fourier transform was developed to remove the requirement of finite intervals. Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device, for example) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor), shifted (advanced or retarded in time) and "squeezed" or "stretched" (increasing or decreasing the frequency). The sines and cosines in the Fourier series are an example of an orthonormal basis. As an example of an application of integral transforms, consider the Laplace transform. This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = −σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ of the complex frequency corresponds to the degree of "damping", i.e. an exponential decrease of the amplitude.) The equation cast in terms of complex frequency is readily solved in the complex frequency domain (roots of the polynomial equations in the complex frequency domain correspond to eigenvalues in the time domain), leading to a "solution" formulated in the frequency domain. Employing the inverse transform, i.e., the inverse procedure of the original Laplace transform, one obtains a time-domain solution. In this example, polynomials in the complex frequency domain (typically occurring in the denominator) correspond to power series in the time domain, while axial shifts in the complex frequency domain correspond to damping by decaying exponentials in the time domain.
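The "sum of sines and cosines" idea is easy to demonstrate: the Fourier series of a square wave of period 2π is (4/π) Σ_{k odd} sin(kt)/k, and a long partial sum already reproduces the wave away from its jumps. A Python sketch (names and truncation are ours):

```python
import math

# Partial Fourier sum for a square wave equal to +1 on (0, pi) and -1 on (-pi, 0):
# sq(t) = (4/pi) * sum over odd k of sin(k*t)/k.
def square_partial(t, terms=2001):
    return (4 / math.pi) * sum(math.sin(k * t) / k for k in range(1, terms + 1, 2))

# Away from the discontinuities the partial sum is close to +/-1:
assert abs(square_partial(1.0) - 1.0) < 0.01
assert abs(square_partial(-1.0) + 1.0) < 0.01
```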
The Laplace transform finds wide application in physics and particularly in electrical engineering, where the characteristic equations that describe the behavior of an electric circuit in the complex frequency domain correspond to linear combinations of exponentially scaled and time-shifted damped sinusoids in the time domain. Other integral transforms find special applicability within other scientific and mathematical disciplines. Another usage example is the kernel in the path integral: ψ(x, t) = ∫_{−∞}^{∞} ψ(x′, t′) K(x, t; x′, t′) dx′. This states that the total amplitude ψ(x, t) to arrive at (x, t) is the sum (the integral) over all possible values x′ of the total amplitude ψ(x′, t′) to arrive at the point (x′, t′), multiplied by the amplitude to go from x′ to x [i.e. K(x, t; x′, t′)].[2] It is often referred to as the propagator for a given system. This (physics) kernel is the kernel of the integral transform. However, for each quantum system, there is a different kernel.[3] In the limits of integration for the inverse transform, c is a constant which depends on the nature of the transform function. For example, for the one- and two-sided Laplace transform, c must be greater than the largest real part of the zeroes of the transform function. Note that there are alternative notations and conventions for the Fourier transform. Here integral transforms are defined for functions on the real numbers, but they can be defined more generally for functions on a group. Although the properties of integral transforms vary widely, they have some properties in common. For example, every integral transform is a linear operator, since the integral is a linear operator, and in fact if the kernel is allowed to be a generalized function then all linear operators are integral transforms (a properly formulated version of this statement is the Schwartz kernel theorem).
The general theory of such integral equations is known as Fredholm theory. In this theory, the kernel is understood to be a compact operator acting on a Banach space of functions. Depending on the situation, the kernel is then variously referred to as the Fredholm operator, the nuclear operator or the Fredholm kernel.
https://en.wikipedia.org/wiki/Integral_transform
In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is. In formal expressions, the ordinal number used before the word order refers to the highest power in the series expansion used in the approximation. The expressions a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases. The expression a zero-order approximation is also common. Cardinal numerals are occasionally used in expressions like an order-zero approximation, an order-one approximation, etc. The omission of the word order leads to phrases that have less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity.[1][2] The phrase to a zeroth approximation indicates a wild guess.[3] The expression order of approximation is sometimes informally used to mean the number of significant figures, in increasing order of accuracy, or the order of magnitude. However, this may be confusing, as these informal expressions do not directly refer to the order of derivatives. The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. The choice of order of approximation depends on the research purpose. One may wish to simplify a known analytic expression to devise a new application or, on the contrary, try to fit a curve to data points. A higher order of approximation is not always more useful than a lower one. For example, if a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy. In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree.
The formal usage of order of approximation corresponds to the omission of some terms of the series used in the expansion. This affects accuracy. The error usually varies within the interval. Thus the ordinals (zeroth, first, second, etc.) used above do not directly give information about percent error or significant figures. For example, in the Taylor series expansion of the exponential function, e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + …, the zeroth-order term is 1, the first-order term is x, the second-order term is x²/2, and so forth. If |x| < 1, each higher-order term is smaller than the previous. If |x| ≪ 1, then the first-order approximation e^x ≈ 1 + x is often sufficient. But at x = 1 the first-order term x is not smaller than the zeroth-order term 1, and at x = 2 even the third-order term, 2³/3! = 4/3, is greater than the zeroth-order term. Zeroth-order approximation is the term scientists use for a first rough answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, "the town has a few thousand residents", when it has 3,914 people in actuality. This is also sometimes referred to as an order-of-magnitude approximation. The zero of "zeroth-order" represents the fact that even the only number given, "a few", is itself loosely defined. A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0.
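The behavior of the truncated exponential series at small and large x can be checked directly. A Python sketch (the function name is ours):

```python
import math

# Truncation of the Taylor series of e^x after the given order.
def taylor_exp(x, order):
    return sum(x ** n / math.factorial(n) for n in range(order + 1))

# For |x| << 1, the first-order approximation 1 + x is already accurate:
assert abs(taylor_exp(0.01, 1) - math.exp(0.01)) < 1e-4

# At x = 2, low-order truncations are poor (e^2 ≈ 7.389, first order gives 3):
assert abs(taylor_exp(2, 1) - math.exp(2)) > 4
```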
For example, could be – if data point accuracy were reported – an approximate fit to the data, obtained by simply averaging the x values and the y values. However, data points represent results of measurements and they do differ from points in Euclidean geometry. Thus quoting an average value containing three significant digits in the output with just one significant digit in the input data could be recognized as an example of false precision. With the implied accuracy of the data points of ±0.5, the zeroth-order approximation could at best yield the result for y of ~3.7 ± 2.0 in the interval of x from −0.5 to 2.5, considering the standard deviation. If the data points are reported as the zeroth-order approximation results in The accuracy of the result justifies an attempt to derive a multiplicative function for that average, for example, One should be careful though, because the multiplicative function will be defined for the whole interval. If only three data points are available, one has no knowledge about the rest of the interval, which may be a large part of it. This means that y could have another component which equals 0 at the ends and in the middle of the interval. A number of functions having this property are known, for example y = sin πx. Taylor series are useful and help predict analytic solutions, but the approximations alone do not provide conclusive evidence. First-order approximation is the term scientists use for a slightly better answer.[3] Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4×10³, or four thousand, residents"). In the case of a first-order approximation, at least one number given is exact. In the zeroth-order example above, the quantity "a few" was given, but in the first-order example, the number "4" is given.
A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation, a straight line with a slope: a polynomial of degree 1. For example: is an approximate fit to the data. In this example there is a zeroth-order approximation that is the same as the first-order one, but the method of getting there is different; i.e., a wild stab in the dark at a relationship happened to be as good as an "educated guess". Second-order approximation is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3.9×10³, or thirty-nine hundred, residents") is generally given. As in the examples above, the term "2nd order" refers to the number of exact numerals given for the imprecise quantity. In this case, "3" and "9" are given as the two successive levels of precision, instead of simply the "4" from the first order, or "a few" from the zeroth order found in the examples above. A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial, geometrically a parabola: a polynomial of degree 2. For example: is an approximate fit to the data. In this case, with only three data points, a parabola is an exact fit based on the data provided. However, the data points for most of the interval are not available, which advises caution (see "zeroth order"). While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number. Continuing the above, a third-order approximation would be required to perfectly fit four data points, and so on. See polynomial interpolation. These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g.
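The progression from zeroth- to second-order fits can be sketched on three hypothetical data points: the zeroth-order fit is the mean of the y values, and the second-order fit is the unique interpolating parabola, which passes through all three points exactly. A Python sketch (the data and names are ours, for illustration only):

```python
# Three hypothetical data points.
xs, ys = [0.0, 1.0, 2.0], [3.0, 3.0, 5.0]

# Zeroth-order approximation: a constant, the mean of the y values.
mean_y = sum(ys) / len(ys)

# Second-order approximation: the interpolating parabola in Lagrange form.
def parabola(x):
    total = 0.0
    for i in range(3):
        li = 1.0
        for j in range(3):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

assert abs(mean_y - 11 / 3) < 1e-12
# With exactly three points, the degree-2 polynomial fits them exactly:
assert all(abs(parabola(x) - y) < 1e-12 for x, y in zip(xs, ys))
```

As the text warns, the exact fit at the data points says nothing about the rest of the interval.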
"Of course the rotation of the Earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it." or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration."). In this usage, the ordinality of the approximation is not exact, but is used to emphasize its insignificance; the higher the number used, the less important the effect. The terminology, in this context, represents a high level of precision required to account for an effect which is inferred to be very small when compared to the overall subject matter. The higher the order, the more precision is required to measure the effect, and therefore the smaller the effect is in comparison to the overall measurement.
https://en.wikipedia.org/wiki/Order_of_approximation
In numerical analysis, order of accuracy quantifies the rate of convergence of a numerical approximation of a differential equation to the exact solution. Consider u, the exact solution to a differential equation in an appropriate normed space (V, ‖·‖). Consider a numerical approximation u_h, where h is a parameter characterizing the approximation, such as the step size in a finite difference scheme or the diameter of the cells in a finite element method. The numerical solution u_h is said to be nth-order accurate if the error E(h) := ‖u − u_h‖ is proportional to the step size h to the nth power:[1] E(h) = ‖u − u_h‖ ≤ C hⁿ, where the constant C is independent of h and usually depends on the solution u.[2] Using the big O notation, an nth-order accurate numerical method is notated as ‖u − u_h‖ = O(hⁿ). This definition is strictly dependent on the norm used in the space; the choice of such norm is fundamental to estimate the rate of convergence and, in general, all numerical errors correctly. The size of the error of a first-order accurate approximation is directly proportional to h. Partial differential equations which vary over both time and space are said to be accurate to order n in time and to order m in space.[3]
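The order n in E(h) ≈ C hⁿ can be estimated empirically by halving h and taking n ≈ log₂(E(h)/E(h/2)). A Python sketch using derivative approximations of sin(x) at x = 1 (names and step sizes are ours): forward differences are first-order accurate, central differences second-order.

```python
import math

exact = math.cos(1.0)   # d/dx sin(x) at x = 1

def err_forward(h):
    # Forward difference: first-order accurate, E(h) ~ C*h.
    return abs((math.sin(1 + h) - math.sin(1)) / h - exact)

def err_central(h):
    # Central difference: second-order accurate, E(h) ~ C*h^2.
    return abs((math.sin(1 + h) - math.sin(1 - h)) / (2 * h) - exact)

h = 1e-3
n_fwd = math.log2(err_forward(h) / err_forward(h / 2))
n_cen = math.log2(err_central(h) / err_central(h / 2))
assert abs(n_fwd - 1) < 0.1   # observed order ~ 1
assert abs(n_cen - 2) < 0.1   # observed order ~ 2
```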
https://en.wikipedia.org/wiki/Order_of_accuracy
The Arrow information paradox (information paradox for short, or AIP[1]), occasionally referred to as Arrow's disclosure paradox, is named after Kenneth Arrow, American economist and joint winner of the Nobel Memorial Prize in Economics with John Hicks.[2] It is a problem faced by companies when managing intellectual property across their boundaries. It occurs when they seek external technologies for their business or external markets for their own technologies. It has implications for the value of technology and innovations as well as their development by more than one firm, and for the need for and limitations of patent protection. Arrow's information paradox theory was set out in a 1962 article by K. J. Arrow.[3] Cornell Law School professor Oskar Liivak has written in a paper for a conference at Stanford University that Arrow's article "has been one of the foundational theoretical pillars of the incentive based theory of patents as Arrow's work is thought to rule out a strictly market-based solution".[4] A fundamental tenet of the paradox is that the customer, i.e. the potential purchaser of the information describing a technology (or other information having some value, such as facts), wants to know the technology and what it does in sufficient detail as to understand its capabilities, or to have information about the facts or products, in order to decide whether or not to buy it.[5][6] Once the customer has this detailed knowledge, however, the seller has in effect transferred the technology to the customer without any compensation.[2] This has been argued to show the need for patent protection.[4] If the buyer trusts the seller or is protected via contract, then they only need to know the results that the technology will provide, along with any caveats for its usage in a given context.[7] A problem is that sellers may lie, they may be mistaken, one or both sides may overlook side consequences of usage in a given context, or some unknown unknown may affect the actual outcome.
Discussions of the value of patent rights have taken Arrow's information paradox into account in their evaluations.[8]The theory has been the basis for many later economic studies.[9]These include theories that pre-patent innovation can be carried out only by a single firm.[10][11]
https://en.wikipedia.org/wiki/Arrow_information_paradox
A cryptographic protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program.[1] Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects: For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections.[2] It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support. There are other types of cryptographic protocols as well, and even the term itself has various readings. Cryptographic application protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs what is known as the Diffie–Hellman key exchange; although it is only a part of TLS per se, Diffie–Hellman may be seen as a complete cryptographic protocol in itself for other applications.
A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated collaboration.[3]Blind signaturescan be used fordigital cashanddigital credentialsto prove that a person holds an attribute or right without revealing that person's identity or the identities of parties that person transacted with.Secure digital timestampingcan be used to prove that data (even if confidential) existed at a certain time.Secure multiparty computationcan be used to compute answers (such as determining the highest bid in an auction) based on confidential data (such as private bids), so that when the protocol is complete the participants know only their own input and the answer.End-to-end auditable voting systemsprovide sets of desirable privacy and auditability properties for conductinge-voting.Undeniable signaturesinclude interactive protocols that allow the signer to prove a forgery and limit who can verify the signature.Deniable encryptionaugments standard encryption by making it impossible for an attacker to mathematically prove the existence of a plain text message.Digital mixescreate hard-to-trace communications. Cryptographic protocols can sometimes beverified formallyon an abstract level. When it is done, there is a necessity to formalize the environment in which the protocol operates in order to identify threats. This is frequently done through theDolev-Yaomodel. Logics, concepts and calculi used for formal reasoning of security protocols: Research projects and tools used for formal verification of security protocols: To formally verify a protocol it is often abstracted and modelled usingAlice & Bob notation. A simple example is the following: This states thatAliceA{\displaystyle A}intends a message for BobB{\displaystyle B}consisting of a messageX{\displaystyle X}encrypted under shared keyKA,B{\displaystyle K_{A,B}}.
https://en.wikipedia.org/wiki/Cryptographic_protocol
Incryptography, theFeige–Fiat–Shamir identification schemeis a type of parallelzero-knowledge proofdeveloped byUriel Feige,Amos Fiat, andAdi Shamirin 1988. Like all zero-knowledge proofs, it allows one party, the Prover, to prove to another party, the Verifier, that they possess secret information without revealing to Verifier what that secret information is. The Feige–Fiat–Shamir identification scheme, however, usesmodular arithmeticand a parallel verification process that limits the number of communications between Prover and Verifier. Following acommon convention, call the prover Peggy and the verifier Victor. Choose two large prime integerspandqand compute the productn = pq. Create secret numberss1,⋯,sk{\displaystyle s_{1},\cdots ,s_{k}}coprimeton. Computevi≡si2(modn){\displaystyle v_{i}\equiv s_{i}^{2}{\pmod {n}}}. Peggy and Victor both receiven{\displaystyle n}whilep{\displaystyle p}andq{\displaystyle q}are kept secret. Peggy is then sent the numberssi{\displaystyle s_{i}}. These are her secret login numbers. Victor is sent the numbersvi{\displaystyle v_{i}}by Peggy when she wishes to identify herself to Victor. Victor is unable to recover Peggy'ssi{\displaystyle s_{i}}numbers from hisvi{\displaystyle v_{i}}numbers due to the difficulty in determining amodular square rootwhen the modulus' factorization is unknown. This procedure is repeated with differentr{\displaystyle r}andai{\displaystyle a_{i}}values until Victor is satisfied that Peggy does indeed possess the modular square roots (si{\displaystyle s_{i}}) of hisvi{\displaystyle v_{i}}numbers. In the procedure, Peggy does not give any useful information to Victor. She merely proves to Victor that she has the secret numbers without revealing what those numbers are. Suppose Eve has intercepted Victor'svi{\displaystyle v_{i}}numbers but does not know what Peggy'ssi{\displaystyle s_{i}}numbers are. 
If Eve wants to try to convince Victor that she is Peggy, she would have to correctly guess what Victor's $a_{i}$ numbers will be. She then picks a random $y$, calculates $x \equiv y^{2} v_{1}^{-a_{1}} v_{2}^{-a_{2}} \cdots v_{k}^{-a_{k}} \pmod{n}$ and sends $x$ to Victor. When Victor sends $a_{i}$, Eve simply returns her $y$. Victor is satisfied and concludes that Eve has the secret numbers. However, the probability of Eve correctly guessing what Victor's $a_{i}$ will be is 1 in $2^{k}$. By repeating the procedure $t$ times, the probability drops to 1 in $2^{kt}$. For $k=5$ and $t=4$ the probability of successfully posing as Peggy is less than 1 in 1 million.
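The setup and verification rounds can be sketched as follows. This is a simplified sketch of the scheme (it omits the plus/minus sign convention of the full protocol), and the primes, the parameter values, and the helper name `round_ok` are toy choices invented for the demo.

```python
import secrets
from math import gcd

# Simplified Feige-Fiat-Shamir identification (toy parameters only).
p, q = 1000003, 1000033   # small primes for illustration; kept secret
n = p * q                 # public modulus
k = 5                     # number of secrets used per round

# Setup: Peggy's secrets s_i coprime to n, and public v_i = s_i^2 mod n.
secrets_s = []
while len(secrets_s) < k:
    s = secrets.randbelow(n - 2) + 2
    if gcd(s, n) == 1:
        secrets_s.append(s)
public_v = [pow(s, 2, n) for s in secrets_s]

def round_ok():
    """One honest round: commitment, challenge bits, response, check."""
    r = secrets.randbelow(n - 2) + 2
    x = pow(r, 2, n)                                # Peggy's commitment
    a = [secrets.randbelow(2) for _ in range(k)]    # Victor's challenge bits
    y = r
    for s_i, a_i in zip(secrets_s, a):
        if a_i:
            y = (y * s_i) % n                       # y = r * prod s_i^{a_i}
    rhs = x
    for v_i, a_i in zip(public_v, a):
        if a_i:
            rhs = (rhs * v_i) % n
    return pow(y, 2, n) == rhs                      # y^2 == x * prod v_i^{a_i}

assert all(round_ok() for _ in range(4))            # t = 4 rounds all verify
```

An impostor without the `s_i` must guess all `k` challenge bits in every round, matching the 1-in-`2**(k*t)` bound from the text (here `2**20`, just over a million).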
https://en.wikipedia.org/wiki/Feige%E2%80%93Fiat%E2%80%93Shamir_identification_scheme
Incomputational complexity theory, aprobabilistically checkable proof(PCP) is a type ofproofthat can be checked by arandomized algorithmusing a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (orcertificate), as used in theverifier-based definition of thecomplexity classNP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way. Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The classPCP[r(n),q(n)]refers to the set ofdecision problemsthat have probabilistically checkable proofs that can be verified in polynomial time using at mostr(n) random bits and by reading at mostq(n) bits of the proof.[1]Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. ThePCP theorem, a major result in computational complexity theory, states thatPCP[O(logn),O(1)] =NP. Given adecision problemL(or alanguageL with its alphabet set Σ), aprobabilistically checkable proof systemforLwith completenessc(n) and soundnesss(n), where0 ≤s(n) ≤c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x with length n, which might be false, the prover produces a proofπwhich statesxsolvesL(x∈L, the proof is a string∈ Σ∗). And the verifier is a randomizedoracle Turing MachineV(theverifier) that checks the proofπfor the statement thatxsolvesL(orx∈L) and decides whether to accept the statement. 
The system has the following properties: For thecomputational complexityof the verifier, the verifier is polynomial time, and we have therandomness complexityr(n) to measure the maximum number of random bits thatVuses over allxof lengthnand thequery complexityq(n) of the verifier is the maximum number of queries thatVmakes to π over allxof lengthn. In the above definition, the length of proof is not mentioned since usually it includes the alphabet set and all the witnesses. For the prover, we do not care how it arrives at the solution to the problem; we care only about the proof it gives of the solution's membership in the language. The verifier is said to benon-adaptiveif it makes all its queries before it receives any of the answers to previous queries. The complexity classPCPc(n),s(n)[r(n),q(n)]is the class of all decision problems having probabilistically checkable proof systems over binary alphabet of completenessc(n) and soundnesss(n), where the verifier is non-adaptive, runs in polynomial time, and it has randomness complexityr(n) and query complexityq(n). The shorthand notationPCP[r(n),q(n)]is sometimes used forPCP1, 1/2[r(n),q(n)]. The complexity classPCPis defined asPCP1, 1/2[O(logn),O(1)]. The theory of probabilistically checkable proofs studies the power of probabilistically checkable proof systems under various restrictions of the parameters (completeness, soundness, randomness complexity, query complexity, and alphabet size). It has applications tocomputational complexity(in particularhardness of approximation) andcryptography. The definition of a probabilistically checkable proof was explicitly introduced by Arora and Safra in 1992,[2]although their properties were studied earlier. 
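The flavor of a verifier with constant query complexity can be illustrated with the Blum–Luby–Rubinfeld linearity test, a classic ingredient of PCP constructions: per trial it reads only three bits of a "proof", here taken to be the truth table of a Boolean function on $\{0,1\}^n$. The parameter values and helper names below are invented for the demo, and this is only a sketch of the query pattern, not a full PCP verifier.

```python
import random

random.seed(1234)  # for reproducibility of the demo
n = 8              # functions on {0,1}^8; the "proof" has 2^8 bits

def blr_accepts(table, trials=64):
    """BLR test: reject if f(x) + f(y) != f(x + y) over GF(2) on a sample."""
    size = 1 << n
    for _ in range(trials):
        x = random.randrange(size)
        y = random.randrange(size)
        # exactly three queries into the proof table per trial
        if table[x] ^ table[y] != table[x ^ y]:
            return False
    return True

def linear_table(a):
    """Truth table of the linear function f(x) = <a, x> (parity of a AND x)."""
    return [bin(a & x).count("1") & 1 for x in range(1 << n)]

good = linear_table(0b10110101)
assert blr_accepts(good)       # linear functions always pass

bad = [random.randrange(2) for _ in range(1 << n)]   # a random table
# each trial catches a random table with probability about 1/2,
# so 64 trials reject it except with negligible probability
assert not blr_accepts(bad)
```

The point mirrors the definition above: correctness (completeness) is perfect for honest proofs, while soundness comes from repetition, without the verifier ever reading the whole proof.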
In 1990 Babai, Fortnow, and Lund proved thatPCP[poly(n), poly(n)] =NEXP, providing the first nontrivial equivalence between standard proofs (NEXP) and probabilistically checkable proofs.[3]ThePCP theoremproved in 1992 states thatPCP[O(logn),O(1)] =NP.[2][4] The theory ofhardness of approximationrequires a detailed understanding of the role of completeness, soundness, alphabet size, and query complexity in probabilistically checkable proofs. From computational complexity point of view, for extreme settings of the parameters, the definition of probabilistically checkable proofs is easily seen to be equivalent to standardcomplexity classes. For example, we have the following for different setting ofPCP[r(n),q(n)]: The PCP theorem andMIP= NEXP can be characterized as follows: It is also known thatPCP[r(n),q(n)] ⊆NTIME(poly(n,2O(r(n))q(n))). In particular,PCP[O(logn), poly(n)] =NP. On the other hand, ifNP⊆PCP[o(logn),o(logn)]thenP = NP.[2] A Linear PCP is a PCP in which the proof is a vector of elements of a finite fieldπ∈Fn{\displaystyle \pi \in \mathbb {F} ^{n}}, and such that the PCP oracle is only allowed to do linear operations on the proof. Namely, the response from the oracle to a verifier queryq∈Fn{\displaystyle q\in \mathbb {F} ^{n}}is a linear functionf(q,π){\displaystyle f(q,\pi )}. Linear PCPs have important applications in proof systems that can be compiled into SNARKs.
https://en.wikipedia.org/wiki/Probabilistically_checkable_proof
Incryptography, aproof of knowledgeis aninteractive proofin which the prover succeeds in 'convincing' a verifier that the prover knows something. What it means for amachineto 'know something' is defined in terms of computation. A machine 'knows something', if this something can be computed, given the machine as an input. As the program of the prover does not necessarily spit out the knowledge itself (as is the case forzero-knowledge proofs[1]), a machine with a different program, called the knowledge extractor is introduced to capture this idea. We are mostly interested in what can be proven bypolynomial timebounded machines. In this case, the set of knowledge elements is limited to a set of witnesses of somelanguageinNP. Letx{\displaystyle x}be a statement of languageL{\displaystyle L}in NP, andW(x){\displaystyle W(x)}the set of witnesses for x that should be accepted in the proof. This allows us to define the following relation:R={(x,w):x∈L,w∈W(x)}{\displaystyle R=\{(x,w):x\in L,w\in W(x)\}}. A proof of knowledge for relationR{\displaystyle R}with knowledge errorκ{\displaystyle \kappa }is a two party protocol with a proverP{\displaystyle P}and a verifierV{\displaystyle V}with the following two properties: This is a more rigorous definition ofValidity:[2] LetR{\displaystyle R}be a witness relation,W(x){\displaystyle W(x)}the set of all witnesses for public valuex{\displaystyle x}, andκ{\displaystyle \kappa }the knowledge error. 
A proof of knowledge isκ{\displaystyle \kappa }-valid if there exists a polynomial-time machineE{\displaystyle E}, given oracle access toP~{\displaystyle {\tilde {P}}}, such that for everyP~{\displaystyle {\tilde {P}}}, it is the case thatEP~(x)(x)∈W(x)∪{⊥}{\displaystyle E^{{\tilde {P}}(x)}(x)\in W(x)\cup \{\bot \}}andPr(EP~(x)(x)∈W(x))≥Pr(P~(x)↔V(x)→1)−κ(x).{\displaystyle \Pr(E^{{\tilde {P}}(x)}(x)\in W(x))\geq \Pr({\tilde {P}}(x)\leftrightarrow V(x)\rightarrow 1)-\kappa (x).} The result⊥{\displaystyle \bot }signifies that the Turing machineE{\displaystyle E}did not come to a conclusion. The knowledge errorκ(x){\displaystyle \kappa (x)}denotes the probability that the verifierV{\displaystyle V}might acceptx{\displaystyle x}, even though the prover does in fact not know a witnessw{\displaystyle w}. The knowledge extractorE{\displaystyle E}is used to express what is meant by the knowledge of aTuring machine. IfE{\displaystyle E}can extractw{\displaystyle w}fromP~{\displaystyle {\tilde {P}}}, we say thatP~{\displaystyle {\tilde {P}}}knows the value ofw{\displaystyle w}. This definition of the validity property is a combination of the validity and strong validity properties.[2]For small knowledge errorsκ(x){\displaystyle \kappa (x)}, such as e.g.2−80{\displaystyle 2^{-80}}or1/poly(|x|){\displaystyle 1/\mathrm {poly} (|x|)}, it can be seen as being stronger than thesoundnessof ordinaryinteractive proofs. In order to define a specific proof of knowledge, one need not only define the language, but also the witnesses the verifier should know. In some cases proving membership in a language may be easy, while computing a specific witness may be hard. This is best explained using an example: Let⟨g⟩{\displaystyle \langle g\rangle }be acyclic groupwith generatorg{\displaystyle g}in which solving thediscrete logarithmproblem is believed to be hard. 
Deciding membership of the languageL={x∣gw=x}{\displaystyle L=\{x\mid g^{w}=x\}}is trivial, as everyx{\displaystyle x}is in⟨g⟩{\displaystyle \langle g\rangle }. However, finding the witnessw{\displaystyle w}such thatgw=x{\displaystyle g^{w}=x}holds corresponds to solving the discrete logarithm problem. One of the simplest and frequently used proofs of knowledge, theproof of knowledge of adiscrete logarithm, is due to Schnorr.[3]The protocol is defined for acyclic groupGq{\displaystyle G_{q}}of orderq{\displaystyle q}with generatorg{\displaystyle g}. In order to prove knowledge ofx=logg⁡y{\displaystyle x=\log _{g}y}, the prover interacts with the verifier as follows: The verifier accepts, ifgs=tyc{\displaystyle g^{s}=ty^{c}}. We can see this is a valid proof of knowledge because it has an extractor that works as follows: Since(s1−s2)=(r+c1x)−(r+c2x)=x(c1−c2){\displaystyle (s_{1}-s_{2})=(r+c_{1}x)-(r+c_{2}x)=x(c_{1}-c_{2})}, the output of the extractor is preciselyx{\displaystyle x}. This protocol happens to bezero-knowledge, though that property is not required for a proof of knowledge. Protocols which have the above three-move structure (commitment, challenge and response) are calledsigma protocols.[4]The naming originates from Sig, referring to the zig-zag symbolizing the three moves of the protocol, and MA, an abbreviation of "Merlin-Arthur".[5]Sigma protocols exist for proving various statements, such as those pertaining to discrete logarithms. Using these proofs, the prover can not only prove the knowledge of the discrete logarithm, but also that the discrete logarithm is of a specific form. For instance, it is possible to prove that two logarithms ofy1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}with respect to basesg1{\displaystyle g_{1}}andg2{\displaystyle g_{2}}are equal or fulfill some otherlinearrelation. 
Foraandbelements ofZq{\displaystyle Z_{q}}, we say that the prover proves knowledge ofx1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}such thaty1=g1x1∧y2=g2x2{\displaystyle y_{1}=g_{1}^{x_{1}}\land y_{2}=g_{2}^{x_{2}}}andx2=ax1+b{\displaystyle x_{2}=ax_{1}+b}. Equality corresponds to the special case wherea= 1 andb= 0. Asx2{\displaystyle x_{2}}can betriviallycomputed fromx1{\displaystyle x_{1}}this is equivalent to proving knowledge of anxsuch thaty1=g1x∧y2=(g2a)xg2b{\displaystyle y_{1}=g_{1}^{x}\land y_{2}={(g_{2}^{a})}^{x}g_{2}^{b}}. This is the intuition behind the following notation,[6]which is commonly used to express what exactly is proven by a proof of knowledge. states that the prover knows anxthat fulfills the relation above. Proofs of knowledge are useful tool for the construction of identification protocols, and in their non-interactive variant, signature schemes. Such schemes are: They are also used in the construction ofgroup signatureandanonymous digital credentialsystems.
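Schnorr's protocol and its knowledge extractor, described above, can be sketched concretely. The group parameters below are toy values chosen for illustration (real deployments use large subgroups or elliptic curves), and the helper `transcript` is an invented name that fixes the commitment randomness so the extractor can "rewind" the prover.

```python
import secrets

# Schnorr proof of knowledge of x = log_g(y), in the order-q subgroup of
# Z_p^* (toy parameters; insecure sizes, chosen only for illustration).
p, q = 607, 101                  # q divides p - 1 = 606 = 2 * 3 * 101
g = pow(2, (p - 1) // q, p)      # an element of order q (here g = 64)

x = secrets.randbelow(q - 1) + 1   # Peggy's secret exponent
y = pow(g, x, p)                   # public value y = g^x mod p

def transcript(challenge, r):
    """One run of the sigma protocol with fixed commitment randomness r."""
    t = pow(g, r, p)                 # commitment
    s = (r + challenge * x) % q      # response
    return t, challenge, s

r = secrets.randbelow(q)
c = secrets.randbelow(q)             # verifier's challenge
t, c, s = transcript(c, r)
assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check g^s = t*y^c

# Knowledge extractor: two transcripts with the same commitment t but
# distinct challenges determine x, since s1 - s2 = x*(c1 - c2) mod q.
c1, c2 = 5, 17
t1, _, s1 = transcript(c1, r)
t2, _, s2 = transcript(c2, r)
assert t1 == t2
extracted = ((s1 - s2) * pow(c1 - c2, -1, q)) % q
assert extracted == x
```

The last three lines are exactly the extractor argument from the text: rewinding an accepting prover to answer two challenges on one commitment yields the witness, which is what makes this a proof of knowledge rather than merely a proof of membership.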
https://en.wikipedia.org/wiki/Proof_of_knowledge
A witness-indistinguishable proof (WIP) is a variant of a zero-knowledge proof for languages in NP. In a typical zero-knowledge proof of a statement, the prover uses a witness for the statement as input to the protocol, and the verifier learns nothing other than the truth of the statement. In a WIP, this zero-knowledge condition is weakened: the only guarantee is that the verifier cannot distinguish between provers that use different witnesses. In particular, the protocol may leak information about the set of all witnesses, or even leak the witness that was used when there is only one possible witness. Witness-indistinguishable proof systems were first introduced by Feige and Shamir.[1] Unlike zero-knowledge proofs, they remain secure when multiple proofs are performed in parallel.
https://en.wikipedia.org/wiki/Witness-indistinguishable_proof
In cryptography, a zero-knowledge password proof (ZKPP) is a type of zero-knowledge proof that allows one party (the prover) to prove to another party (the verifier) that it knows the value of a password, without revealing anything other than the fact that it knows that password. The term is defined in IEEE P1363.2, in reference to one of the benefits of using a password-authenticated key exchange (PAKE) protocol that is secure against off-line dictionary attacks. A ZKPP prevents any party from verifying guesses for the password without interacting with a party that knows it and, in the optimal case, provides exactly one guess per interaction. A common use of a zero-knowledge password proof is in authentication systems where one party wants to prove its identity to a second party using a password, but does not want the second party or anybody else to learn anything about the password. For example, apps can validate a password without processing it, and a payment app can check the balance of an account without touching or learning anything about the amount.[1] The first methods to demonstrate a ZKPP were the encrypted key exchange (EKE) methods described by Steven M. Bellovin and Michael Merritt in 1992.[2] A considerable number of refinements, alternatives, and variations in the growing class of password-authenticated key agreement methods were developed in subsequent years. Standards for these methods include IETF RFC 2945, IEEE P1363.2, and ISO/IEC 11770-4.[3]
https://en.wikipedia.org/wiki/Zero-knowledge_password_proof
Non-interactivezero-knowledge proofsarecryptographic primitives, where information between a prover and a verifier can be authenticated by the prover, without revealing any of the specific information beyond the validity of the statement itself. This makes direct communication between the prover and verifier unnecessary, effectively removing any intermediaries. The key advantage of non-interactivezero-knowledge proofsis that they can be used in situations where there is no possibility of interaction between the prover and verifier, such as in online transactions where the two parties are not able to communicate in real time. This makes non-interactive zero-knowledge proofs particularly useful in decentralized systems likeblockchains, where transactions are verified by a network ofnodesand there is no central authority to oversee the verification process.[1] Most non-interactive zero-knowledge proofs are based on mathematical constructs likeelliptic curve cryptographyorpairing-based cryptography, which allow for the creation of short and easily verifiable proofs of the truth of a statement. Unlike interactive zero-knowledge proofs, which require multiple rounds of interaction between the prover and verifier, non-interactive zero-knowledge proofs are designed to be efficient and can be used to verify a large number of statements simultaneously.[1] Blum, Feldman, andMicali[2]showed in 1988 that a common reference string shared between the prover and the verifier is sufficient to achieve computational zero-knowledge without requiring interaction.Goldreichand Oren[3]gave impossibility results[clarification needed]for one shot zero-knowledge protocols in thestandard model. In 2003,Shafi GoldwasserandYael Tauman Kalaipublished an instance of an identification scheme for which any hash function will yield an insecure digital signature scheme.[4] The model influences the properties that can be obtained from a zero-knowledge protocol. 
Pass[5]showed that in the common reference string model non-interactive zero-knowledge protocols do not preserve all of the properties of interactive zero-knowledge protocols; e.g., they do not preserve deniability. Non-interactive zero-knowledge proofs can also be obtained in therandom oracle modelusing theFiat–Shamir heuristic.[citation needed] In 2012,Alessandro Chiesaet al developed the zk-SNARK protocol, an acronym forzero-knowledgesuccinct non-interactiveargument of knowledge.[6]The first widespread application of zk-SNARKs was in theZerocashblockchainprotocol, where zero-knowledge cryptography provides the computational backbone, by facilitating mathematical proofs that one party has possession of certain information without revealing what that information is.[7]Zcash utilized zk-SNARKs to facilitate four distinct transaction types: private, shielding, deshielding, and public. This protocol allowed users to determine how much data was shared with the public ledger for each transaction.[8]Ethereumzk-Rollups also utilize zk-SNARKs to increase scalability.[9] In 2017,Bulletproofs[10]was released, which enable proving that a committed value is in a range using a logarithmic (in the bit length of the range) number of field and group elements.[11]Bulletproofs was later implemented intoMimblewimbleprotocol (the basis for Grin and Beam, andLitecoinvia extension blocks) andMonero cryptocurrency.[12] In 2018, thezk-STARK(zero-knowledgeScalable TransparentArgument of Knowledge)[13]protocol was introduced by Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev,[14]offering transparency (no trusted setup), quasi-linear proving time, and poly-logarithmic verification time.Zero-Knowledge Succinct Transparent Arguments of Knowledgeare a type of cryptographic proof system that enables one party (the prover) to prove to another party (the verifier) that a certain statement is true, without revealing any additional information beyond the truth of the statement 
itself. zk-STARKs are succinct, meaning that they allow for the creation of short proofs that are easy to verify, and they are transparent, meaning that anyone can verify the proof without needing any secret information.[14] Unlike the first generation of zk-SNARKs, zk-STARKs by default do not require a trusted setup, which makes them particularly useful for decentralized applications like blockchains. Additionally, zk-STARKs can be used to verify many statements at once, making them scalable and efficient.[1] In 2019, HALO recursive zk-SNARKs without a trusted setup were presented.[15] Pickles[16] zk-SNARKs, based on the former construction, power Mina, the first succinctly verifiable blockchain.[17] A list of zero-knowledge proof protocols and libraries is provided below, along with comparisons based on transparency, universality, and plausible post-quantum security. A transparent protocol is one that does not require any trusted setup and uses public randomness. A universal protocol is one that does not require a separate trusted setup for each circuit. Finally, a plausibly post-quantum protocol is one that is not susceptible to known attacks involving quantum algorithms. Originally,[2] non-interactive zero-knowledge was defined only as a single-theorem proof system: in such a system each proof requires its own fresh common reference string. A common reference string is in general not a random string. It may, for instance, consist of randomly chosen group elements that all protocol parties use. Although the group elements are random, the reference string is not, as it contains a certain structure (e.g., group elements) that is distinguishable from randomness. Subsequently, Feige, Lapidot, and Shamir[37] introduced multi-theorem zero-knowledge proofs as a more versatile notion for non-interactive zero-knowledge proofs. Pairing-based cryptography has led to several cryptographic advancements.
One of these advancements is more powerful and more efficient non-interactive zero-knowledge proofs. The seminal idea was to hide the values for the pairing evaluation in acommitment. Using different commitment schemes, this idea was used to build zero-knowledge proof systems under thesub-group hiding[38]and under thedecisional linear assumption.[39]These proof systems provecircuit satisfiability, and thus by theCook–Levin theoremallow proving membership for every language in NP. The size of the common reference string and the proofs is relatively small; however, transforming a statement into a boolean circuit incurs considerable overhead. Proof systems under thesub-group hiding,decisional linear assumption, andexternal Diffie–Hellman assumptionthat allow directly proving the pairing product equations that are common inpairing-based cryptographyhave been proposed.[40] Under strongknowledge assumptions, it is known how to create sublinear-length computationally-sound proof systems forNP-completelanguages. More precisely, the proof in such proof systems consists only of a small number ofbilinear groupelements.[41][42]
https://en.wikipedia.org/wiki/Non-interactive_zero-knowledge_proof
Inprobability theoryandstatistics, acopulais a multivariatecumulative distribution functionfor which themarginal probabilitydistribution of each variable isuniformon the interval [0, 1]. Copulas are used to describe/model thedependence(inter-correlation) betweenrandom variables.[1]Their name, introduced by applied mathematicianAbe Sklarin 1959, comes from the Latin for "link" or "tie", similar but unrelated to grammaticalcopulasinlinguistics. Copulas have been used widely inquantitative financeto model and minimize tail risk[2]andportfolio-optimizationapplications.[3] Sklar's theorem states that any multivariatejoint distributioncan be written in terms of univariatemarginal distributionfunctions and a copula which describes the dependence structure between the variables. Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating marginals and copulas separately. There are many parametric copula families available, which usually have parameters that control the strength of dependence. Some popular parametric copula models are outlined below. Two-dimensional copulas are known in some other areas of mathematics under the namepermutonsanddoubly-stochastic measures. Consider a random vector(X1,X2,…,Xd){\displaystyle (X_{1},X_{2},\dots ,X_{d})}. Suppose its marginals are continuous, i.e. the marginalCDFsFi(x)=Pr[Xi≤x]{\displaystyle F_{i}(x)=\Pr[X_{i}\leq x]}arecontinuous functions. By applying theprobability integral transformto each component, the random vector has marginals that areuniformly distributedon the interval [0, 1]. 
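The probability integral transform, and the reverse recipe of pushing uniforms through inverse marginal CDFs, can be sketched numerically. The choices below (independence copula, Exponential(1) and standard normal marginals) are arbitrary, made only for illustration.

```python
import math
import random
import statistics

random.seed(0)  # deterministic demo
norm = statistics.NormalDist()

def sample_pair():
    """Draw (U1, U2) from the independence copula, then apply inverse CDFs."""
    u1, u2 = random.random(), random.random()   # a draw from the copula
    x1 = -math.log(1.0 - u1)                    # F1^{-1}(u1): Exp(1) quantile
    x2 = norm.inv_cdf(u2)                       # F2^{-1}(u2): N(0,1) quantile
    return x1, x2

# The probability integral transform runs the other way: applying the
# marginal CDFs to (X1, X2) recovers coordinates that are uniform on [0,1].
xs = [sample_pair() for _ in range(10_000)]
us = [(1.0 - math.exp(-x1), norm.cdf(x2)) for x1, x2 in xs]
mean_u1 = sum(u for u, _ in us) / len(us)
assert abs(mean_u1 - 0.5) < 0.02   # Uniform[0,1] has mean 1/2
```

Swapping the independence copula for a dependent one changes only the first step of `sample_pair`; the marginal transforms are unaffected, which is exactly the separation Sklar's theorem provides.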
The copula of(X1,X2,…,Xd){\displaystyle (X_{1},X_{2},\dots ,X_{d})}is defined as thejoint cumulative distribution functionof(U1,U2,…,Ud){\displaystyle (U_{1},U_{2},\dots ,U_{d})}: The copulaCcontains all information on the dependence structure between the components of(X1,X2,…,Xd){\displaystyle (X_{1},X_{2},\dots ,X_{d})}whereas the marginal cumulative distribution functionsFi{\displaystyle F_{i}}contain all information on the marginal distributions ofXi{\displaystyle X_{i}}. The reverse of these steps can be used to generatepseudo-randomsamples from general classes ofmultivariate probability distributions. That is, given a procedure to generate a sample(U1,U2,…,Ud){\displaystyle (U_{1},U_{2},\dots ,U_{d})}from the copula function, the required sample can be constructed as The generalized inversesFi−1{\displaystyle F_{i}^{-1}}are unproblematicalmost surely, since theFi{\displaystyle F_{i}}were assumed to be continuous. Furthermore, the above formula for the copula function can be rewritten as: Inprobabilisticterms,C:[0,1]d→[0,1]{\displaystyle C:[0,1]^{d}\rightarrow [0,1]}is ad-dimensionalcopulaifCis a jointcumulative distribution functionof ad-dimensional random vector on theunit cube[0,1]d{\displaystyle [0,1]^{d}}withuniformmarginals.[4] Inanalyticterms,C:[0,1]d→[0,1]{\displaystyle C:[0,1]^{d}\rightarrow [0,1]}is ad-dimensionalcopulaif For instance, in the bivariate case,C:[0,1]×[0,1]→[0,1]{\displaystyle C:[0,1]\times [0,1]\rightarrow [0,1]}is a bivariate copula ifC(0,u)=C(u,0)=0{\displaystyle C(0,u)=C(u,0)=0},C(1,u)=C(u,1)=u{\displaystyle C(1,u)=C(u,1)=u}andC(u2,v2)−C(u2,v1)−C(u1,v2)+C(u1,v1)≥0{\displaystyle C(u_{2},v_{2})-C(u_{2},v_{1})-C(u_{1},v_{2})+C(u_{1},v_{1})\geq 0}for all0≤u1≤u2≤1{\displaystyle 0\leq u_{1}\leq u_{2}\leq 1}and0≤v1≤v2≤1{\displaystyle 0\leq v_{1}\leq v_{2}\leq 1}. 
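The analytic conditions above (groundedness, uniform margins, and the 2-increasing rectangle inequality) can be checked numerically for a concrete family. The sketch below uses the bivariate Clayton copula, a standard Archimedean example with generator psi(t) = (t^(-theta) - 1)/theta; the parameter value is arbitrary.

```python
# Bivariate Clayton copula: C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta).
theta = 2.0   # arbitrary positive dependence parameter for the demo

def psi(t):
    return (t**-theta - 1.0) / theta             # Archimedean generator

def psi_inv(s):
    return (1.0 + theta * s) ** (-1.0 / theta)   # its inverse on [0, inf)

def clayton(u, v):
    if u == 0.0 or v == 0.0:
        return 0.0                               # grounded: C(0,v) = C(u,0) = 0
    return psi_inv(psi(u) + psi(v))

# Uniform margins: C(u, 1) = u and C(1, v) = v.
assert abs(clayton(0.3, 1.0) - 0.3) < 1e-12
assert abs(clayton(1.0, 0.7) - 0.7) < 1e-12
assert clayton(0.2, 0.0) == 0.0

# 2-increasing: every rectangle has nonnegative "C-volume",
# C(u2,v2) - C(u2,v1) - C(u1,v2) + C(u1,v1) >= 0.
grid = [i / 10 for i in range(11)]
for u1, u2 in zip(grid, grid[1:]):
    for v1, v2 in zip(grid, grid[1:]):
        vol = clayton(u2, v2) - clayton(u2, v1) - clayton(u1, v2) + clayton(u1, v1)
        assert vol >= -1e-12
```

The rectangle condition is the bivariate copula axiom stated just above; for a copula with a density, these volumes are the probabilities the density assigns to the grid cells.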
Sklar's theorem, named afterAbe Sklar, provides the theoretical foundation for the application of copulas.[5][6]Sklar's theorem states that everymultivariate cumulative distribution function of a random vector(X1,X2,…,Xd){\displaystyle (X_{1},X_{2},\dots ,X_{d})}can be expressed in terms of its marginalsFi(xi)=Pr[Xi≤xi]{\displaystyle F_{i}(x_{i})=\Pr[X_{i}\leq x_{i}]}and a copulaC{\displaystyle C}. Indeed: If the multivariate distribution has a densityh{\displaystyle h}, and if this density is available, it also holds that wherec{\displaystyle c}is the density of the copula. The theorem also states that, givenH{\displaystyle H}, the copula is unique onRan⁡(F1)×⋯×Ran⁡(Fd){\displaystyle \operatorname {Ran} (F_{1})\times \cdots \times \operatorname {Ran} (F_{d})}which is thecartesian productof therangesof the marginal cdf's. This implies that the copula is unique if the marginalsFi{\displaystyle F_{i}}are continuous. The converse is also true: given a copulaC:[0,1]d→[0,1]{\displaystyle C:[0,1]^{d}\rightarrow [0,1]}and marginalsFi(x){\displaystyle F_{i}(x)}thenC(F1(x1),…,Fd(xd)){\displaystyle C\left(F_{1}(x_{1}),\dots ,F_{d}(x_{d})\right)}defines ad-dimensional cumulative distribution function with marginal distributionsFi(x){\displaystyle F_{i}(x)}. Copulas mainly work when time series arestationary[7]and continuous.[8]Thus, a very important pre-processing step is to check for theauto-correlation,trendandseasonalitywithin time series. 
When time series are auto-correlated, they may generate a non existing dependence between sets of variables and result in incorrect copula dependence structure.[9] The Fréchet–Hoeffding theorem (afterMaurice René FréchetandWassily Hoeffding[10]) states that for any copulaC:[0,1]d→[0,1]{\displaystyle C:[0,1]^{d}\rightarrow [0,1]}and any(u1,…,ud)∈[0,1]d{\displaystyle (u_{1},\dots ,u_{d})\in [0,1]^{d}}the following bounds hold: The functionWis called lower Fréchet–Hoeffding bound and is defined as The functionMis called upper Fréchet–Hoeffding bound and is defined as The upper bound is sharp:Mis always a copula, it corresponds tocomonotone random variables. The lower bound is point-wise sharp, in the sense that for fixedu, there is a copulaC~{\displaystyle {\tilde {C}}}such thatC~(u)=W(u){\displaystyle {\tilde {C}}(u)=W(u)}. However,Wis a copula only in two dimensions, in which case it corresponds to countermonotonic random variables. In two dimensions, i.e. the bivariate case, the Fréchet–Hoeffding theorem states Several families of copulas have been described. The Gaussian copula is a distribution over the unithypercube[0,1]d{\displaystyle [0,1]^{d}}. It is constructed from amultivariate normal distributionoverRd{\displaystyle \mathbb {R} ^{d}}by using theprobability integral transform. For a givencorrelation matrixR∈[−1,1]d×d{\displaystyle R\in [-1,1]^{d\times d}}, the Gaussian copula with parameter matrixR{\displaystyle R}can be written as whereΦ−1{\displaystyle \Phi ^{-1}}is the inverse cumulative distribution function of astandard normalandΦR{\displaystyle \Phi _{R}}is the joint cumulative distribution function of a multivariate normal distribution with mean vector zero and covariance matrix equal to the correlation matrixR{\displaystyle R}. 
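A bivariate Gaussian copula sample can be drawn exactly as the construction describes: draw correlated standard normals, then push each coordinate through the standard normal CDF. A minimal sketch with an arbitrary correlation parameter:

```python
import math
import random
import statistics

random.seed(1)  # deterministic demo
norm = statistics.NormalDist()
rho = 0.8       # off-diagonal entry of the 2x2 correlation matrix R

def gaussian_copula_sample():
    """One draw (U1, U2): correlated normals mapped through Phi."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    return norm.cdf(z1), norm.cdf(z2)

pairs = [gaussian_copula_sample() for _ in range(20_000)]

# Each margin of a copula is Uniform[0,1] (mean 1/2)...
mean_u = sum(u for u, _ in pairs) / len(pairs)
assert abs(mean_u - 0.5) < 0.02

# ...while positive rho induces positive dependence between the coordinates.
cov = sum((u - 0.5) * (v - 0.5) for u, v in pairs) / len(pairs)
assert cov > 0.05
```

The two-line Cholesky step (`z2 = rho*z1 + sqrt(1-rho^2)*e`) generalizes to d dimensions by factoring the correlation matrix R; the probability integral transform through Phi is unchanged.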
While there is no simple analytical formula for the copula function, CRGauss(u){\displaystyle C_{R}^{\text{Gauss}}(u)}, it can be upper or lower bounded, and approximated using numerical integration.[11][12] The density can be written as[13] cRGauss(u)=1detRexp⁡(−12xT(R−1−I)x){\displaystyle c_{R}^{\text{Gauss}}(u)={\frac {1}{\sqrt {\det R}}}\exp \left(-{\frac {1}{2}}\mathbf {x} ^{T}\left(R^{-1}-I\right)\mathbf {x} \right)} with x=(Φ−1(u1),…,Φ−1(ud))T{\displaystyle \mathbf {x} =\left(\Phi ^{-1}(u_{1}),\dots ,\Phi ^{-1}(u_{d})\right)^{T}}, where I{\displaystyle I} is the identity matrix. Archimedean copulas are an associative class of copulas. Most common Archimedean copulas admit an explicit formula, something not possible for instance for the Gaussian copula. In practice, Archimedean copulas are popular because they allow modeling dependence in arbitrarily high dimensions with only one parameter, governing the strength of dependence. A copula C is called Archimedean if it admits the representation[14] C(u1,…,ud;θ)=ψ−1(ψ(u1;θ)+⋯+ψ(ud;θ);θ){\displaystyle C(u_{1},\dots ,u_{d};\theta )=\psi ^{-1}\left(\psi (u_{1};\theta )+\cdots +\psi (u_{d};\theta );\theta \right)}, where ψ:[0,1]×Θ→[0,∞){\displaystyle \psi \!:[0,1]\times \Theta \rightarrow [0,\infty )} is a continuous, strictly decreasing and convex function such that ψ(1;θ)=0{\displaystyle \psi (1;\theta )=0}, θ{\displaystyle \theta } is a parameter within some parameter space Θ{\displaystyle \Theta }, ψ{\displaystyle \psi } is the so-called generator function, and ψ−1{\displaystyle \psi ^{-1}} is its pseudo-inverse, defined as the ordinary inverse of ψ{\displaystyle \psi } for 0≤t≤ψ(0;θ){\displaystyle 0\leq t\leq \psi (0;\theta )} and as 0 for t≥ψ(0;θ){\displaystyle t\geq \psi (0;\theta )}. Moreover, the above formula for C yields a copula for ψ−1{\displaystyle \psi ^{-1}} if and only if ψ−1{\displaystyle \psi ^{-1}} is d-monotone on [0,∞){\displaystyle [0,\infty )}.[15] That is, if it is d−2{\displaystyle d-2} times differentiable and the derivatives satisfy (−1)kψ−1,(k)(t;θ)≥0{\displaystyle (-1)^{k}\psi ^{-1,(k)}(t;\theta )\geq 0} for all t≥0{\displaystyle t\geq 0} and k=0,1,…,d−2{\displaystyle k=0,1,\dots ,d-2}, and (−1)d−2ψ−1,(d−2)(t;θ){\displaystyle (-1)^{d-2}\psi ^{-1,(d-2)}(t;\theta )} is nonincreasing and convex. The following tables highlight the most prominent bivariate Archimedean copulas, with their corresponding generator. Not all of them are completely monotone, i.e. d-monotone for all d∈N{\displaystyle d\in \mathbb {N} } or d-monotone for certain θ∈Θ{\displaystyle \theta \in \Theta } only. In statistical applications, many problems can be formulated in the following way.
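As a hedged illustration (the Clayton generator ψ(t; θ) = (t^−θ − 1)/θ is a standard textbook example, not drawn from the text above), the Archimedean representation can be checked against the known closed form of the bivariate Clayton copula:

```python
import math

THETA = 2.0  # Clayton dependence parameter, theta > 0

def psi(t, theta=THETA):
    """Clayton generator: strictly decreasing, convex, psi(1) = 0."""
    return (t ** (-theta) - 1.0) / theta

def psi_inv(s, theta=THETA):
    """Inverse generator (pseudo-inverse; only s >= 0 is needed here)."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def clayton_via_generator(u, v):
    # Archimedean representation: C(u, v) = psi_inv(psi(u) + psi(v)).
    return psi_inv(psi(u) + psi(v))

def clayton_closed_form(u, v, theta=THETA):
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

for u in (0.2, 0.5, 0.9):
    for v in (0.3, 0.7):
        assert math.isclose(clayton_via_generator(u, v),
                            clayton_closed_form(u, v))
        # Sanity: the generator vanishes at 1, so C(u, 1) = u.
        assert math.isclose(clayton_via_generator(u, 1.0), u)
```

The single parameter θ controls the strength of (lower-tail) dependence, which is the one-parameter convenience the text describes.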
One is interested in the expectation of a response function g:Rd→R{\displaystyle g:\mathbb {R} ^{d}\rightarrow \mathbb {R} } applied to some random vector (X1,…,Xd){\displaystyle (X_{1},\dots ,X_{d})}.[18] If we denote the CDF of this random vector with H{\displaystyle H}, the quantity of interest can thus be written as E[g(X1,…,Xd)]=∫Rdg(x1,…,xd)dH(x1,…,xd){\displaystyle \operatorname {E} \left[g(X_{1},\dots ,X_{d})\right]=\int _{\mathbb {R} ^{d}}g(x_{1},\dots ,x_{d})\,\mathrm {d} H(x_{1},\dots ,x_{d})}. If H{\displaystyle H} is given by a copula model, i.e. H(x1,…,xd)=C(F1(x1),…,Fd(xd)){\displaystyle H(x_{1},\dots ,x_{d})=C\left(F_{1}(x_{1}),\dots ,F_{d}(x_{d})\right)}, this expectation can be rewritten as ∫[0,1]dg(F1−1(u1),…,Fd−1(ud))dC(u1,…,ud){\displaystyle \int _{[0,1]^{d}}g\left(F_{1}^{-1}(u_{1}),\dots ,F_{d}^{-1}(u_{d})\right)\,\mathrm {d} C(u_{1},\dots ,u_{d})}. In case the copula C is absolutely continuous, i.e. C has a density c, this equation can be written as ∫[0,1]dg(F1−1(u1),…,Fd−1(ud))c(u1,…,ud)du1⋯dud{\displaystyle \int _{[0,1]^{d}}g\left(F_{1}^{-1}(u_{1}),\dots ,F_{d}^{-1}(u_{d})\right)c(u_{1},\dots ,u_{d})\,\mathrm {d} u_{1}\cdots \mathrm {d} u_{d}}, and if each marginal distribution has the density fi{\displaystyle f_{i}} it holds further that E[g]=∫Rdg(x1,…,xd)c(F1(x1),…,Fd(xd))f1(x1)⋯fd(xd)dx1⋯dxd{\displaystyle \operatorname {E} [g]=\int _{\mathbb {R} ^{d}}g(x_{1},\dots ,x_{d})\,c\left(F_{1}(x_{1}),\dots ,F_{d}(x_{d})\right)f_{1}(x_{1})\cdots f_{d}(x_{d})\,\mathrm {d} x_{1}\cdots \mathrm {d} x_{d}}. If copula and marginals are known (or if they have been estimated), this expectation can be approximated through the following Monte Carlo algorithm: (1) draw a sample (U1k,…,Udk)∼C{\displaystyle (U_{1}^{k},\dots ,U_{d}^{k})\sim C} from the copula, for k=1,…,n; (2) produce a sample of the random vector by the inverse transforms Xik=Fi−1(Uik){\displaystyle X_{i}^{k}=F_{i}^{-1}(U_{i}^{k})}; (3) approximate the expectation by the empirical mean (1/n)∑kg(X1k,…,Xdk){\displaystyle (1/n)\sum _{k}g(X_{1}^{k},\dots ,X_{d}^{k})}. When studying multivariate data, one might want to investigate the underlying copula. Suppose we have observations (X1k,…,Xdk){\displaystyle (X_{1}^{k},\dots ,X_{d}^{k})}, k=1,…,n, from a random vector (X1,X2,…,Xd){\displaystyle (X_{1},X_{2},\dots ,X_{d})} with continuous marginals. The corresponding "true" copula observations would be (U1k,…,Udk)=(F1(X1k),…,Fd(Xdk)){\displaystyle (U_{1}^{k},\dots ,U_{d}^{k})=\left(F_{1}(X_{1}^{k}),\dots ,F_{d}(X_{d}^{k})\right)}. However, the marginal distribution functions Fi{\displaystyle F_{i}} are usually not known. Therefore, one can construct pseudo copula observations by using the empirical distribution functions instead. Then, the pseudo copula observations are defined as (U~1k,…,U~dk)=(F1n(X1k),…,Fdn(Xdk)){\displaystyle ({\tilde {U}}_{1}^{k},\dots ,{\tilde {U}}_{d}^{k})=\left(F_{1}^{n}(X_{1}^{k}),\dots ,F_{d}^{n}(X_{d}^{k})\right)}, where Fin{\displaystyle F_{i}^{n}} denotes the empirical distribution function of the i-th component. The corresponding empirical copula is then defined as Cn(u1,…,ud)=(1/n)∑k1(U~1k≤u1,…,U~dk≤ud){\displaystyle C^{n}(u_{1},\dots ,u_{d})=(1/n)\sum _{k}\mathbf {1} \left({\tilde {U}}_{1}^{k}\leq u_{1},\dots ,{\tilde {U}}_{d}^{k}\leq u_{d}\right)}. The components of the pseudo copula samples can also be written as U~ki=Rki/n{\displaystyle {\tilde {U}}_{k}^{i}=R_{k}^{i}/n}, where Rki{\displaystyle R_{k}^{i}} is the rank of the observation Xki{\displaystyle X_{k}^{i}}. Therefore, the empirical copula can be seen as the empirical distribution of the rank-transformed data, and the sample version of Spearman's rho can be computed directly from these ranks.[19] In quantitative finance copulas are applied to risk management, to portfolio management and optimization, and to derivatives pricing.
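A self-contained sketch (using a small hypothetical bivariate sample) of the rank-based pseudo-observations and the resulting empirical copula:

```python
# Toy bivariate sample (hypothetical data, no ties assumed).
xs = [2.1, 0.4, 3.7, 1.5, 2.9]
ys = [10.0, 8.2, 15.1, 9.9, 12.4]

def ranks(values):
    """Rank of each observation (1 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

n = len(xs)
# Pseudo copula observations: U~_k = R_k / n.
u = [r / n for r in ranks(xs)]
v = [r / n for r in ranks(ys)]

def empirical_copula(a, b):
    """C_n(a, b) = (1/n) * #{k : u_k <= a and v_k <= b}."""
    return sum(1 for k in range(n) if u[k] <= a and v[k] <= b) / n

# All pseudo-observations lie in (0, 1], and C_n(1, 1) = 1.
assert all(0.0 < t <= 1.0 for t in u + v)
assert empirical_copula(1.0, 1.0) == 1.0
```

Because only ranks enter, the result is unchanged under any strictly increasing transformation of either margin, which is the sense in which the empirical copula isolates the dependence structure.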
For the former, copulas are used to performstress-testsand robustness checks that are especially important during "downside/crisis/panic regimes" where extreme downside events may occur (e.g., the2008 financial crisis). The formula was also adapted for financial markets and was used to estimate theprobability distributionof losses onpools of loans or bonds. During a downside regime, a large number of investors who have held positions in riskier assets such as equities or real estate may seek refuge in 'safer' investments such as cash or bonds. This is also known as aflight-to-qualityeffect and investors tend to exit their positions in riskier assets in large numbers in a short period of time. As a result, during downside regimes, correlations across equities are greater on the downside as opposed to the upside and this may have disastrous effects on the economy.[22][23]For example, anecdotally, we often read financial news headlines reporting the loss of hundreds of millions of dollars on the stock exchange in a single day; however, we rarely read reports of positive stock market gains of the same magnitude and in the same short time frame. Copulas aid in analyzing the effects of downside regimes by allowing the modelling of themarginalsand dependence structure of a multivariate probability model separately. For example, consider the stock exchange as a market consisting of a large number of traders each operating with his/her own strategies to maximize profits. The individualistic behaviour of each trader can be described by modelling the marginals. However, as all traders operate on the same exchange, each trader's actions have an interaction effect with other traders'. This interaction effect can be described by modelling the dependence structure. Therefore, copulas allow us to analyse the interaction effects which are of particular interest during downside regimes as investors tend toherd their trading behaviour and decisions. 
(See alsoagent-based computational economics, where price is treated as anemergent phenomenon, resulting from the interaction of the various market participants, or agents.) The users of the formula have been criticized for creating "evaluation cultures" that continued to use simple copulæ despite the simple versions being acknowledged as inadequate for that purpose.[24][25]Thus, previously, scalable copula models for large dimensions only allowed the modelling of elliptical dependence structures (i.e., Gaussian and Student-t copulas) that do not allow for correlation asymmetries where correlations differ on the upside or downside regimes. However, the development ofvine copulas[26](also known as pair copulas) enables the flexible modelling of the dependence structure for portfolios of large dimensions.[27]The Clayton canonical vine copula allows for the occurrence of extreme downside events and has been successfully applied inportfolio optimizationand risk management applications. The model is able to reduce the effects of extreme downside correlations and produces improved statistical and economic performance compared to scalable elliptical dependence copulas such as the Gaussian and Student-t copula.[28] Other models developed for risk management applications are panic copulas that are glued with market estimates of the marginal distributions to analyze the effects ofpanic regimeson the portfolio profit and loss distribution. Panic copulas are created byMonte Carlo simulation, mixed with a re-weighting of the probability of each scenario.[29] As regardsderivatives pricing, dependence modelling with copula functions is widely used in applications offinancial risk assessmentandactuarial analysis– for example in the pricing ofcollateralized debt obligations(CDOs).[30]Some believe the methodology of applying the Gaussian copula tocredit derivativesto be one of the causes of the2008 financial crisis;[31][32][33]seeDavid X. Li § CDOs and Gaussian copula. 
Despite this perception, there are documented attempts within the financial industry, occurring before the crisis, to address the limitations of the Gaussian copula and of copula functions more generally, specifically the lack of dependence dynamics. The Gaussian copula is lacking as it only allows for an elliptical dependence structure, as dependence is only modeled using the variance-covariance matrix.[28]This methodology is limited such that it does not allow for dependence to evolve as the financial markets exhibit asymmetric dependence, whereby correlations across assets significantly increase during downturns compared to upturns. Therefore, modeling approaches using the Gaussian copula exhibit a poor representation ofextreme events.[28][34]There have been attempts to propose models rectifying some of the copula limitations.[34][35][36] Additional to CDOs, copulas have been applied to other asset classes as a flexible tool in analyzing multi-asset derivative products. The first such application outside credit was to use a copula to construct abasketimplied volatilitysurface,[37]taking into account thevolatility smileof basket components. Copulas have since gained popularity in pricing and risk management[38]of options on multi-assets in the presence of a volatility smile, inequity-,foreign exchange-andfixed income derivatives. Recently, copula functions have been successfully applied to the database formulation for thereliabilityanalysis of highway bridges, and to various multivariatesimulationstudies in civil engineering,[39]reliability of wind and earthquake engineering,[40]and mechanical & offshore engineering.[41]Researchers are also trying these functions in the field of transportation to understand the interaction between behaviors of individual drivers which, in totality, shapes traffic flow. 
Copulas are being used for reliability analysis of complex systems of machine components with competing failure modes.[42] Copulas are being used for warranty data analysis in which the tail dependence is analysed.[43] Copulas are used in modelling turbulent partially premixed combustion, which is common in practical combustors.[44][45] Copulæ have many applications in the area of medicine. The combination of SSA and copula-based methods has been applied for the first time as a novel stochastic tool for Earth Orientation Parameters prediction.[60][61] Copulas have been used in both theoretical and applied analyses of hydroclimatic data. Theoretical studies adopted the copula-based methodology for instance to gain a better understanding of the dependence structures of temperature and precipitation in different parts of the world.[9][62][63] Applied studies adopted the copula-based methodology to examine, e.g., agricultural droughts[64] or joint effects of temperature and precipitation extremes on vegetation growth.[65] Copulas have been extensively used in climate- and weather-related research.[66][67] Copulas have been used to estimate the solar irradiance variability in spatial networks and temporally for single locations.[68][69] Large synthetic traces of vectors and stationary time series can be generated using the empirical copula while preserving the entire dependence structure of small datasets.[70] Such empirical traces are useful in various simulation-based performance studies.[71] Copulas have been used for quality ranking in the manufacturing of electronically commutated motors.[72] Copulas are important because they represent a dependence structure separately from the marginal distributions. Copulas have been widely used in the field of finance, but their use in signal processing is relatively new.
Copulas have been employed in the field of wireless communication for classifying radar signals, change detection in remote sensing applications, and EEG signal processing in medicine. In this section, a short mathematical derivation of the copula density function, followed by a table listing copula density functions with the relevant signal processing applications, is presented. Copulas have also been used for determining the core radio luminosity function of active galactic nuclei (AGNs),[73] which cannot be realized using traditional methods due to difficulties in sample completeness. For any two random variables X and Y, the continuous joint probability distribution function can be written as FXY(x,y)=Pr{X≤x,Y≤y}{\displaystyle F_{XY}(x,y)=\Pr {\begin{Bmatrix}X\leq {x},Y\leq {y}\end{Bmatrix}}}, where FX(x)=Pr{X≤x}{\textstyle F_{X}(x)=\Pr {\begin{Bmatrix}X\leq {x}\end{Bmatrix}}} and FY(y)=Pr{Y≤y}{\textstyle F_{Y}(y)=\Pr {\begin{Bmatrix}Y\leq {y}\end{Bmatrix}}} are the marginal cumulative distribution functions of the random variables X and Y, respectively. Then the copula distribution function C(u,v){\displaystyle C(u,v)} can be defined using Sklar's theorem[74][6] as FXY(x,y)=C(FX(x),FY(y))=C(u,v){\displaystyle F_{XY}(x,y)=C\left(F_{X}(x),F_{Y}(y)\right)=C(u,v)}, where u=FX(x){\displaystyle u=F_{X}(x)} and v=FY(y){\displaystyle v=F_{Y}(y)} are the marginal distribution functions, FXY(x,y){\displaystyle F_{XY}(x,y)} the joint, and u,v∈(0,1){\displaystyle u,v\in (0,1)}. Assuming FXY(⋅,⋅){\displaystyle F_{XY}(\cdot ,\cdot )} is a.e. twice differentiable, we start from the relationship between the joint probability density function (PDF) and the joint cumulative distribution function (CDF) and its partial derivatives: fXY(x,y)=∂2FXY(x,y)∂x∂y=c(u,v)fX(x)fY(y){\displaystyle f_{XY}(x,y)={\frac {\partial ^{2}F_{XY}(x,y)}{\partial x\,\partial y}}=c(u,v)\,f_{X}(x)\,f_{Y}(y)}, where c(u,v)=∂2C(u,v)∂u∂v{\displaystyle c(u,v)={\frac {\partial ^{2}C(u,v)}{\partial u\,\partial v}}} is the copula density function, and fX(x){\displaystyle f_{X}(x)} and fY(y){\displaystyle f_{Y}(y)} are the marginal probability density functions of X and Y, respectively. There are four elements in this equation, and if any three elements are known, the fourth element can be calculated.
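The relation f_XY(x, y) = c(F_X(x), F_Y(y)) f_X(x) f_Y(y) can be sketched numerically for a standard bivariate normal with correlation ρ (the closed forms below are textbook formulas, assumed here rather than taken from the text):

```python
import math

RHO = 0.6  # assumed correlation for the illustration

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def f_xy(x, y, rho=RHO):
    """Standard bivariate normal pdf with correlation rho."""
    z = (x * x - 2.0 * rho * x * y + y * y) / (1.0 - rho * rho)
    return math.exp(-0.5 * z) / (2.0 * math.pi * math.sqrt(1.0 - rho * rho))

def gaussian_copula_density(x, y, rho=RHO):
    """c(F_X(x), F_Y(y)) expressed in the normal scores x, y."""
    num = rho * rho * (x * x + y * y) - 2.0 * rho * x * y
    return math.exp(-0.5 * num / (1.0 - rho * rho)) / math.sqrt(1.0 - rho * rho)

# Check f_XY = c * f_X * f_Y at a few points.
for x, y in [(0.0, 0.0), (1.0, -0.5), (2.0, 1.5)]:
    assert math.isclose(f_xy(x, y),
                        gaussian_copula_density(x, y) * phi(x) * phi(y))
```

Dividing the joint density by the product of the marginals is exactly the "solve for the fourth element" step described above: here the copula density is recovered from the other three.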
For example, the relation may be used to obtain the copula density when the joint and marginal densities are known. Various bivariate copula density functions are important in the area of signal processing; u=FX(x){\displaystyle u=F_{X}(x)} and v=FY(y){\displaystyle v=F_{Y}(y)} are marginal distribution functions and fX(x){\displaystyle f_{X}(x)} and fY(y){\displaystyle f_{Y}(y)} are marginal density functions. Extensions and generalizations of copulas for statistical signal processing have been shown to construct new bivariate copulas for exponential, Weibull, and Rician distributions. Zeng et al.[75] presented algorithms, simulation, optimal selection, and practical applications of these copulas in signal processing. Reported applications include validating biometric authentication,[77] modeling stochastic dependence in the large-scale integration of wind power,[78] unsupervised classification of radar signals,[79] and the fusion of correlated sensor decisions.[92]
https://en.wikipedia.org/wiki/Copula_(statistics)
Inprobability theoryandstatistics, a collection ofrandom variablesisindependent and identically distributed(i.i.d.,iid, orIID) if each random variable has the sameprobability distributionas the others and all are mutuallyindependent.[1]IID was first defined in statistics and finds application in many fields, such asdata miningandsignal processing. Statistics commonly deals with random samples. A random sample can be thought of as a set of objects that are chosen randomly. More formally, it is "a sequence ofindependent, identically distributed (IID)random data points." In other words, the termsrandom sampleandIIDare synonymous. In statistics, "random sample" is the typical terminology, but in probability, it is more common to say "IID." Independent and identically distributed random variables are often used as an assumption, which tends to simplify the underlying mathematics. In practical applications ofstatistical modeling, however, this assumption may or may not be realistic.[3] Thei.i.d.assumption is also used in thecentral limit theorem, which states that the probability distribution of the sum (or average) of i.i.d. variables with finitevarianceapproaches anormal distribution.[4] Thei.i.d.assumption frequently arises in the context of sequences of random variables. Then, "independent and identically distributed" implies that an element in the sequence is independent of the random variables that came before it. In this way, an i.i.d. sequence is different from aMarkov sequence, where the probability distribution for thenth random variable is a function of the previous random variable in the sequence (for a first-order Markov sequence). An i.i.d. sequence does not imply the probabilities for all elements of thesample spaceor event space must be the same.[5]For example, repeated throws of loaded dice will produce a sequence that is i.i.d., despite the outcomes being biased. Insignal processingandimage processing, the notion of transformation to i.i.d. 
implies two specifications, the "i.d." part and the "i." part: i.d. – The signal level must be balanced on the time axis. i. – The signal spectrum must be flattened, i.e. transformed by filtering (such asdeconvolution) to awhite noisesignal (i.e. a signal where all frequencies are equally present). Suppose that the random variablesX{\displaystyle X}andY{\displaystyle Y}are defined to assume values inI⊆R{\displaystyle I\subseteq \mathbb {R} }. LetFX(x)=P⁡(X≤x){\displaystyle F_{X}(x)=\operatorname {P} (X\leq x)}andFY(y)=P⁡(Y≤y){\displaystyle F_{Y}(y)=\operatorname {P} (Y\leq y)}be thecumulative distribution functionsofX{\displaystyle X}andY{\displaystyle Y}, respectively, and denote theirjoint cumulative distribution functionbyFX,Y(x,y)=P⁡(X≤x∧Y≤y){\displaystyle F_{X,Y}(x,y)=\operatorname {P} (X\leq x\land Y\leq y)}. Two random variablesX{\displaystyle X}andY{\displaystyle Y}areindependentif and only ifFX,Y(x,y)=FX(x)⋅FY(y){\displaystyle F_{X,Y}(x,y)=F_{X}(x)\cdot F_{Y}(y)}for allx,y∈I{\displaystyle x,y\in I}. (For the simpler case of events, two eventsA{\displaystyle A}andB{\displaystyle B}are independent if and only ifP(A∧B)=P(A)⋅P(B){\displaystyle P(A\land B)=P(A)\cdot P(B)}, see alsoIndependence (probability theory) § Two random variables.) Two random variablesX{\displaystyle X}andY{\displaystyle Y}areidentically distributedif and only ifFX(x)=FY(x){\displaystyle F_{X}(x)=F_{Y}(x)}for allx∈I{\displaystyle x\in I}.[6] Two random variablesX{\displaystyle X}andY{\displaystyle Y}arei.i.d.if they are independentandidentically distributed, i.e. if and only if FX(x)=FY(x)∀x∈IFX,Y(x,y)=FX(x)⋅FY(y)∀x,y∈I{\displaystyle {\begin{aligned}&F_{X}(x)=F_{Y}(x)\,&\forall x\in I\\&F_{X,Y}(x,y)=F_{X}(x)\cdot F_{Y}(y)\,&\forall x,y\in I\end{aligned}}} The definition extends naturally to more than two random variables. 
We say thatn{\displaystyle n}random variablesX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}arei.i.d.if they are independent (see furtherIndependence (probability theory) § More than two random variables)andidentically distributed, i.e. if and only if FX1(x)=FXk(x)∀k∈{1,…,n}and∀x∈IFX1,…,Xn(x1,…,xn)=FX1(x1)⋅…⋅FXn(xn)∀x1,…,xn∈I{\displaystyle {\begin{aligned}&F_{X_{1}}(x)=F_{X_{k}}(x)\,&\forall k\in \{1,\ldots ,n\}{\text{ and }}\forall x\in I\\&F_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=F_{X_{1}}(x_{1})\cdot \ldots \cdot F_{X_{n}}(x_{n})\,&\forall x_{1},\ldots ,x_{n}\in I\end{aligned}}} whereFX1,…,Xn(x1,…,xn)=P⁡(X1≤x1∧…∧Xn≤xn){\displaystyle F_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=\operatorname {P} (X_{1}\leq x_{1}\land \ldots \land X_{n}\leq x_{n})}denotes the joint cumulative distribution function ofX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}. A sequence of outcomes of spins of a fair or unfairroulettewheel isi.i.d. One implication of this is that if the roulette ball lands on "red", for example, 20 times in a row, the next spin is no more or less likely to be "black" than on any other spin (see thegambler's fallacy). Toss a coin 10 times and write down the results into variablesA1,…,A10{\displaystyle A_{1},\ldots ,A_{10}}. Such a sequence of i.i.d. variables is also called aBernoulli process. Roll a die 10 times and save the results into variablesA1,…,A10{\displaystyle A_{1},\ldots ,A_{10}}. Choose a card from a standard deck of cards containing 52 cards, then place the card back in the deck. Repeat this 52 times. Observe when a king appears. Many results that were first proven under the assumption that the random variables arei.i.d. have been shown to be true even under a weakerdistributionalassumption. The most general notion which shares the main properties of i.i.d. 
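The factorization in the definition can be made concrete with a tiny exact example (a hedged sketch using a fair-coin Bernoulli process, chosen here for illustration): for i.i.d. fair coin tosses, the joint probability of any particular sequence is the product of the marginal probabilities.

```python
from fractions import Fraction
from itertools import product

p_heads = Fraction(1, 2)  # fair coin

def marginal(outcome):
    # Same marginal pmf for every toss: the "identically distributed" part.
    return p_heads if outcome == "H" else 1 - p_heads

def joint(seq):
    # Independence: the joint pmf factors into the product of marginals.
    prob = Fraction(1)
    for outcome in seq:
        prob *= marginal(outcome)
    return prob

# Every length-3 sequence has probability (1/2)^3, and the pmf sums to 1.
assert all(joint(s) == Fraction(1, 8) for s in product("HT", repeat=3))
assert sum(joint(s) for s in product("HT", repeat=3)) == 1
```

The same factorization with a biased coin illustrates the remark above: a loaded die or coin still yields an i.i.d. sequence, the outcomes are merely not equiprobable.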
variables is that of exchangeable random variables, introduced by Bruno de Finetti.[citation needed] Exchangeability means that while variables may not be independent, future ones behave like past ones: formally, any value of a finite sequence is as likely as any permutation of those values, i.e. the joint probability distribution is invariant under the symmetric group. This provides a useful generalization; for example, sampling without replacement is not independent, but is exchangeable. In stochastic calculus, i.i.d. variables are thought of as a discrete-time Lévy process: each variable gives how much one changes from one time to another. For example, a sequence of Bernoulli trials is interpreted as the Bernoulli process. One may generalize this to continuous-time Lévy processes, and many Lévy processes can be seen as limits of i.i.d. variables; for instance, the Wiener process is the limit of the Bernoulli process. Machine learning (ML) involves learning statistical relationships within data. To train ML models effectively, it is crucial to use data that is broadly generalizable. If the training data is insufficiently representative of the task, the model's performance on new, unseen data may be poor. The i.i.d. hypothesis allows for a significant reduction in the number of individual cases required in the training sample, simplifying optimization calculations. In optimization problems, the assumption of independent and identical distribution simplifies the calculation of the likelihood function. Due to this assumption, the likelihood function can be expressed as: l(θ)=P(x1,x2,x3,...,xn|θ)=P(x1|θ)P(x2|θ)P(x3|θ)...P(xn|θ){\displaystyle l(\theta )=P(x_{1},x_{2},x_{3},...,x_{n}|\theta )=P(x_{1}|\theta )P(x_{2}|\theta )P(x_{3}|\theta )...P(x_{n}|\theta )} To maximize the probability of the observed data, the logarithm of the likelihood is maximized over the parameter θ{\textstyle \theta }.
Specifically, it computes: argmaxθ⁡log⁡(l(θ)){\displaystyle \mathop {\rm {argmax}} \limits _{\theta }\log(l(\theta ))} where log⁡(l(θ))=log⁡(P(x1|θ))+log⁡(P(x2|θ))+log⁡(P(x3|θ))+...+log⁡(P(xn|θ)){\displaystyle \log(l(\theta ))=\log(P(x_{1}|\theta ))+\log(P(x_{2}|\theta ))+\log(P(x_{3}|\theta ))+...+\log(P(x_{n}|\theta ))} Computers are very efficient at performing many additions, but less efficient at performing multiplications, so replacing the product of probabilities by a sum of logarithms enhances computational efficiency and numerical stability. In addition, the log transformation converts the exponential functions appearing in many densities into linear functions, which simplifies the maximization. The i.i.d. hypothesis is also practically useful in combination with the central limit theorem (CLT).
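A hedged sketch of this factorization (hypothetical sample; normal model with unit variance assumed): the i.i.d. assumption turns the log-likelihood into a sum, and maximizing it recovers the sample mean as the MLE of θ.

```python
import math

sample = [1.2, 0.7, 2.3, 1.9, 1.4]  # hypothetical i.i.d. observations

def log_pdf(x, theta):
    """log N(x; theta, 1): log-density of a normal with mean theta."""
    return -0.5 * math.log(2.0 * math.pi) - 0.5 * (x - theta) ** 2

def log_likelihood(theta):
    # i.i.d. assumption: log of the product = sum of the logs.
    return sum(log_pdf(x, theta) for x in sample)

# Grid-search argmax of the log-likelihood.
grid = [i / 1000.0 for i in range(-1000, 4001)]
theta_hat = max(grid, key=log_likelihood)

# For this model the MLE is the sample mean (here 1.5).
assert math.isclose(theta_hat, sum(sample) / len(sample), abs_tol=1e-3)
```

Summing log-densities instead of multiplying tiny probabilities also avoids floating-point underflow, which is the numerical-stability point made above.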
https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables
In probability theory, a random variable Y{\displaystyle Y} is said to be mean independent of random variable X{\displaystyle X} if and only if its conditional mean E(Y∣X=x){\displaystyle E(Y\mid X=x)} equals its (unconditional) mean E(Y){\displaystyle E(Y)} for all x{\displaystyle x} such that the probability density/mass of X{\displaystyle X} at x{\displaystyle x}, fX(x){\displaystyle f_{X}(x)}, is not zero. Otherwise, Y{\displaystyle Y} is said to be mean dependent on X{\displaystyle X}. Stochastic independence implies mean independence, but the converse is not true;[1][2] moreover, mean independence implies uncorrelatedness, while the converse is not true. Unlike stochastic independence and uncorrelatedness, mean independence is not symmetric: it is possible for Y{\displaystyle Y} to be mean-independent of X{\displaystyle X} even though X{\displaystyle X} is mean-dependent on Y{\displaystyle Y}. The concept of mean independence is often used in econometrics[citation needed] as a middle ground between the strong assumption of independent random variables (X1⊥X2{\displaystyle X_{1}\perp X_{2}}) and the weak assumption of uncorrelated random variables (Cov⁡(X1,X2)=0){\displaystyle (\operatorname {Cov} (X_{1},X_{2})=0)}.
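An exact toy example (constructed here for illustration, not taken from the article) showing the asymmetry: Y is mean-independent of X, yet X is mean-dependent on Y, since |Y| reveals X.

```python
from fractions import Fraction

# Joint pmf on (X, Y): X picks the spread of Y, and Y = +X or -X.
# P(X=1) = P(X=2) = 1/2; given X=x, Y = +x or -x with probability 1/2 each.
pmf = {}
for x in (1, 2):
    for y in (x, -x):
        pmf[(x, y)] = Fraction(1, 4)

def E(f):
    """Expectation of f(X, Y)."""
    return sum(p * f(x, y) for (x, y), p in pmf.items())

def E_given(f, cond):
    """Conditional expectation of f(X, Y) given the event cond(X, Y)."""
    mass = sum(p for (x, y), p in pmf.items() if cond(x, y))
    return sum(p * f(x, y) for (x, y), p in pmf.items() if cond(x, y)) / mass

# Y is mean-independent of X: E[Y | X=x] = E[Y] = 0 for both values of x.
assert E(lambda x, y: y) == 0
assert E_given(lambda x, y: y, lambda x, y: x == 1) == 0
assert E_given(lambda x, y: y, lambda x, y: x == 2) == 0

# But X is NOT mean-independent of Y: E[X | Y=2] = 2, while E[X] = 3/2.
assert E(lambda x, y: x) == Fraction(3, 2)
assert E_given(lambda x, y: x, lambda x, y: y == 2) == 2
```

The pair is also not independent (the support of Y depends on X), illustrating that mean independence sits strictly between uncorrelatedness and full independence.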
https://en.wikipedia.org/wiki/Mean_dependence
Students ofstatisticsandprobability theorysometimes developmisconceptions about the normal distribution,ideas that may seem plausible but are mathematically untrue. For example, it is sometimes mistakenly thought that twolinearly uncorrelated,normally distributedrandom variables must bestatistically independent. However, this is untrue, as can be demonstrated by counterexample. Likewise, it is sometimes mistakenly thought that alinear combinationof normally distributed random variables will itself be normally distributed, but again, counterexamples prove this wrong.[1][2] To say that the pair(X,Y){\displaystyle (X,Y)}of random variables has abivariate normal distributionmeans that everylinear combinationaX+bY{\displaystyle aX+bY}ofX{\displaystyle X}andY{\displaystyle Y}for constant (i.e. not random) coefficientsa{\displaystyle a}andb{\displaystyle b}(not both equal to zero) has a univariate normal distribution. In that case, ifX{\displaystyle X}andY{\displaystyle Y}are uncorrelated then they are independent.[3]However, it is possible for two random variablesX{\displaystyle X}andY{\displaystyle Y}to be so distributed jointly that each one alone is marginally normally distributed, and they are uncorrelated, but they are not independent; examples are given below. SupposeX{\displaystyle X}has a normal distribution withexpected value0 and variance 1. LetW{\displaystyle W}have theRademacher distribution, so thatW=1{\displaystyle W=1}orW=−1{\displaystyle W=-1}, each with probability 1/2, and assumeW{\displaystyle W}is independent ofX{\displaystyle X}. LetY=WX{\displaystyle Y=WX}. ThenX{\displaystyle X}andY{\displaystyle Y}are uncorrelated, as can be verified by calculating theircovariance. Moreover, both have the same normal distribution. 
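A Monte Carlo sketch of this construction (seeded for reproducibility; the tolerances are loose, so the checks are illustrative rather than a proof):

```python
import random

random.seed(0)
n = 100_000

xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ws = [random.choice((-1.0, 1.0)) for _ in range(n)]  # Rademacher variable W
ys = [w * x for w, x in zip(ws, xs)]                 # Y = W * X

# Sample covariance of X and Y: close to zero, i.e. uncorrelated.
mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
assert abs(cov) < 0.05

# Yet X + Y equals 2X when W = 1 and exactly 0 when W = -1: a point mass
# at zero, so X + Y is not normal and (X, Y) is not bivariate normal.
frac_zero = sum(1 for x, y in zip(xs, ys) if x + y == 0.0) / n
assert 0.45 < frac_zero < 0.55
```

The point-mass check works exactly in floating point because y = −x gives x + y = 0.0 without rounding error.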
And yet, X{\displaystyle X} and Y{\displaystyle Y} are not independent.[4][1][5] To see that X{\displaystyle X} and Y{\displaystyle Y} are not independent, observe that |Y|=|X|{\displaystyle |Y|=|X|} or that Pr⁡(Y>1|−1/2<X<1/2)=Pr⁡(X>1|−1/2<X<1/2)=0{\displaystyle \operatorname {Pr} (Y>1|-1/2<X<1/2)=\operatorname {Pr} (X>1|-1/2<X<1/2)=0}. Finally, the distribution of the simple linear combination X+Y{\displaystyle X+Y} concentrates positive probability at 0: Pr⁡(X+Y=0)=1/2{\displaystyle \operatorname {Pr} (X+Y=0)=1/2}. Therefore, the random variable X+Y{\displaystyle X+Y} is not normally distributed, and so also X{\displaystyle X} and Y{\displaystyle Y} are not jointly normally distributed (by the definition above).[4] Suppose X{\displaystyle X} has a normal distribution with expected value 0 and variance 1. Let Y={Xif|X|≤c−Xif|X|>c{\displaystyle Y=\left\{{\begin{matrix}X&{\text{if }}\left|X\right|\leq c\\-X&{\text{if }}\left|X\right|>c\end{matrix}}\right.} where c{\displaystyle c} is a positive number to be specified below. If c{\displaystyle c} is very small, then the correlation corr⁡(X,Y){\displaystyle \operatorname {corr} (X,Y)} is near −1{\displaystyle -1}; if c{\displaystyle c} is very large, then corr⁡(X,Y){\displaystyle \operatorname {corr} (X,Y)} is near 1. Since the correlation is a continuous function of c{\displaystyle c}, the intermediate value theorem implies there is some particular value of c{\displaystyle c} that makes the correlation 0. That value is approximately 1.54.[2][note 1] In that case, X{\displaystyle X} and Y{\displaystyle Y} are uncorrelated, but they are clearly not independent, since X{\displaystyle X} completely determines Y{\displaystyle Y}.
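The critical value of c can be sketched numerically (pure-Python midpoint quadrature plus bisection, written for this illustration). Since both variables have mean 0 and variance 1, corr(X, Y) = E[XY] = E[X²; |X| ≤ c] − E[X²; |X| > c], so the zero-correlation condition reduces to E[X²; |X| ≤ c] = 1/2.

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def truncated_second_moment(c, steps=20_000):
    """Midpoint-rule approximation of the integral of x^2 phi(x) over [-c, c]."""
    h = 2.0 * c / steps
    return sum((-c + (i + 0.5) * h) ** 2 * phi(-c + (i + 0.5) * h)
               for i in range(steps)) * h

def corr_xy(c):
    # corr(X, Y) = 2 * E[X^2; |X| <= c] - 1, using E[X^2] = 1.
    return 2.0 * truncated_second_moment(c) - 1.0

# Bisection for the root of corr_xy on [0.5, 3.0] (corr is increasing in c).
lo, hi = 0.5, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if corr_xy(mid) < 0.0:
        lo = mid
    else:
        hi = mid

assert abs(0.5 * (lo + hi) - 1.54) < 0.01   # matches the value quoted above
```

This is only a numerical sanity check of the intermediate-value argument in the text, not part of the original proof.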
To see thatY{\displaystyle Y}is normally distributed—indeed, that its distribution is the same as that ofX{\displaystyle X}—one may compute itscumulative distribution function:[6]Pr(Y≤x)=Pr({|X|≤candX≤x}or{|X|>cand−X≤x})=Pr(|X|≤candX≤x)+Pr(|X|>cand−X≤x)=Pr(|X|≤candX≤x)+Pr(|X|>candX≤x)=Pr(X≤x),{\displaystyle {\begin{aligned}\Pr(Y\leq x)&=\Pr(\{|X|\leq c{\text{ and }}X\leq x\}{\text{ or }}\{|X|>c{\text{ and }}-X\leq x\})\\&=\Pr(|X|\leq c{\text{ and }}X\leq x)+\Pr(|X|>c{\text{ and }}-X\leq x)\\&=\Pr(|X|\leq c{\text{ and }}X\leq x)+\Pr(|X|>c{\text{ and }}X\leq x)\\&=\Pr(X\leq x),\end{aligned}}} where the next-to-last equality follows from the symmetry of the distribution ofX{\displaystyle X}and the symmetry of the condition that|X|≤c{\displaystyle |X|\leq c}. In this example, the differenceX−Y{\displaystyle X-Y}is nowhere near being normally distributed, since it has a substantial probability (about 0.88) of it being equal to 0. By contrast, the normal distribution, being a continuous distribution, has no discrete part—that is, it does not concentrate more than zero probability at any single point. 
ConsequentlyX{\displaystyle X}andY{\displaystyle Y}are notjointlynormally distributed, even though they are separately normally distributed.[2] Suppose that the coordinates(X,Y){\displaystyle (X,Y)}of a random point in the plane are chosen according to the probability density functionp(x,y)=12π3[exp⁡(−23(x2+xy+y2))+exp⁡(−23(x2−xy+y2))].{\displaystyle p(x,y)={\frac {1}{2\pi {\sqrt {3}}}}\left[\exp \left(-{\frac {2}{3}}(x^{2}+xy+y^{2})\right)+\exp \left(-{\frac {2}{3}}(x^{2}-xy+y^{2})\right)\right].}Then the random variablesX{\displaystyle X}andY{\displaystyle Y}are uncorrelated, and each of them is normally distributed (with mean 0 and variance 1), but they are not independent.[7]: 93 It is well-known that the ratioC{\displaystyle C}of two independent standard normal random deviatesXi{\displaystyle X_{i}}andYi{\displaystyle Y_{i}}has aCauchy distribution.[8][9][7]: 122One can equally well start with the Cauchy random variableC{\displaystyle C}and derive the conditional distribution ofYi{\displaystyle Y_{i}}to satisfy the requirement thatXi=CYi{\displaystyle X_{i}=CY_{i}}withXi{\displaystyle X_{i}}andYi{\displaystyle Y_{i}}independent and standard normal. It follows thatYi=Wiχi2(k=2)1+C2{\displaystyle Y_{i}=W_{i}{\sqrt {\frac {\chi _{i}^{2}\left(k=2\right)}{1+C^{2}}}}}in whichWi{\displaystyle W_{i}}is a Rademacher random variable andχi2(k=2){\displaystyle \chi _{i}^{2}\left(k=2\right)}is aChi-squared random variablewith two degrees of freedom. Consider two sets of(Xi,Yi){\displaystyle \left(X_{i},Y_{i}\right)},i∈{1,2}{\displaystyle i\in \left\{1,2\right\}}. Note thatC{\displaystyle C}is not indexed byi{\displaystyle i}– that is, the same Cauchy random variableC{\displaystyle C}is used in the definition of both(X1,Y1){\displaystyle \left(X_{1},Y_{1}\right)}and(X2,Y2){\displaystyle \left(X_{2},Y_{2}\right)}. 
This sharing ofC{\displaystyle C}results in dependences across indices: neitherX1{\displaystyle X_{1}}norY1{\displaystyle Y_{1}}is independent ofY2{\displaystyle Y_{2}}. Nevertheless all of theXi{\displaystyle X_{i}}andYi{\displaystyle Y_{i}}are uncorrelated as the bivariate distributions all have reflection symmetry across the axes.[citation needed] The figure shows scatterplots of samples drawn from the above distribution. This furnishes two examples of bivariate distributions that are uncorrelated and have normal marginal distributions but are not independent. The left panel shows the joint distribution ofX1{\displaystyle X_{1}}andY2{\displaystyle Y_{2}}; the distribution has support everywhere but at the origin. The right panel shows the joint distribution ofY1{\displaystyle Y_{1}}andY2{\displaystyle Y_{2}}; the distribution has support everywhere except along the axes and has a discontinuity at the origin: the density diverges when the origin is approached along any straight path except along the axes.
https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent
In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution.[1] Colloquially, measures of central tendency are often called averages. The term central tendency dates from the late 1920s.[2] The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote "the tendency of quantitative data to cluster around some central value."[2][3] The central tendency of a distribution is typically contrasted with its dispersion or variability; dispersion and central tendency are among the most commonly characterized properties of distributions. Analysts may judge whether data have a strong or a weak central tendency based on their dispersion. The following may be applied to one-dimensional data. Depending on the circumstances, it may be appropriate to transform the data before calculating a central tendency; examples are squaring the values or taking logarithms. Whether a transformation is appropriate, and what it should be, depends heavily on the data being analyzed. Any of the above may be applied to each dimension of multi-dimensional data, but the results may not be invariant to rotations of the multi-dimensional space. Several measures of central tendency can be characterized as solving a variational problem, in the sense of the calculus of variations, namely minimizing variation from the center. That is, given a measure of statistical dispersion, one asks for a measure of central tendency that minimizes variation: such that variation from the center is minimal among all choices of center. In a quip, "dispersion precedes location". These measures are initially defined in one dimension, but can be generalized to multiple dimensions. This center may or may not be unique.
In the sense ofLpspaces, the correspondence is: The associated functions are calledp-norms: respectively 0-"norm", 1-norm, 2-norm, and ∞-norm. The function corresponding to theL0space is not a norm, and is thus often referred to in quotes: 0-"norm". In equations, for a given (finite) data setX, thought of as a vectorx= (x1,…,xn), the dispersion about a pointcis the "distance" fromxto the constant vectorc= (c,…,c)in thep-norm (normalized by the number of pointsn): Forp= 0andp = ∞these functions are defined by taking limits, respectively asp→ 0andp→ ∞. Forp= 0the limiting values are00= 0anda0= 1fora≠ 0, so the difference becomes simply equality, so the 0-norm counts the number ofunequalpoints. Forp= ∞the largest number dominates, and thus the ∞-norm is the maximum difference. The mean (L2center) and midrange (L∞center) are unique (when they exist), while the median (L1center) and mode (L0center) are not in general unique. This can be understood in terms ofconvexityof the associated functions (coercive functions). The 2-norm and ∞-norm arestrictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point. The 1-norm is notstrictlyconvex, whereas strict convexity is needed to ensure uniqueness of the minimizer. Correspondingly, the median (in this sense of minimizing) is not in general unique, and in fact any point between the two central points of a discrete distribution minimizes average absolute deviation. The 0-"norm" is not convex (hence not a norm). Correspondingly, the mode is not unique – for example, in a uniform distributionanypoint is the mode. Instead of a single central point, one can ask for multiple points such that the variation from these points is minimized. 
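The correspondence between centers and p-norm dispersions can be checked numerically. A brute-force sketch (the data set and search grid are illustrative choices, not from the article):

```python
import numpy as np

# Small data set; its mean is 3.6, median 2.0, mode 2.0, midrange 5.5.
x = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
candidates = np.linspace(0.0, 12.0, 1201)  # grid of candidate centers c

def dispersion(c, p):
    d = np.abs(x - c)
    if p == 0:
        return np.count_nonzero(d)     # 0-"norm": number of points unequal to c
    if p == np.inf:
        return d.max()                 # infinity-norm: maximum deviation
    return (d ** p).mean() ** (1 / p)  # normalized p-norm

best = {p: candidates[np.argmin([dispersion(c, p) for c in candidates])]
        for p in (1, 2, np.inf)}
# For the 0-"norm", searching over the data values themselves suffices.
best[0] = x[np.argmin([dispersion(c, 0) for c in x])]

print(best[2])       # close to the mean, 3.6
print(best[1])       # close to the median, 2.0
print(best[np.inf])  # close to the midrange, (1 + 10) / 2 = 5.5
print(best[0])       # the mode, 2.0
```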
This leads tocluster analysis, where each point in the data set is clustered with the nearest "center". Most commonly, using the 2-norm generalizes the mean tok-means clustering, while using the 1-norm generalizes the (geometric) median tok-medians clustering. Using the 0-norm simply generalizes the mode (most common value) to using thekmost common values as centers. Unlike the single-center statistics, this multi-center clustering cannot in general be computed in aclosed-form expression, and instead must be computed or approximated by aniterative method; one general approach isexpectation–maximization algorithms. The notion of a "center" as minimizing variation can be generalized ininformation geometryas a distribution that minimizesdivergence(a generalized distance) from a data set. The most common case ismaximum likelihood estimation, where the maximum likelihood estimate (MLE) maximizes likelihood (minimizes expectedsurprisal), which can be interpreted geometrically by usingentropyto measure variation: the MLE minimizescross-entropy(equivalently,relative entropy, Kullback–Leibler divergence). A simple example of this is for the center of nominal data: instead of using the mode (the only single-valued "center"), one often uses theempirical measure(thefrequency distributiondivided by thesample size) as a "center". For example, givenbinary data, say heads or tails, if a data set consists of 2 heads and 1 tails, then the mode is "heads", but the empirical measure is 2/3 heads, 1/3 tails, which minimizes the cross-entropy (total surprisal) from the data set. This perspective is also used inregression analysis, whereleast squaresfinds the solution that minimizes the distances from it, and analogously inlogistic regression, a maximum likelihood estimate minimizes the surprisal (information distance). Forunimodal distributionsthe following bounds are known and are sharp:[4] whereμis the mean,νis the median,θis the mode, andσis the standard deviation. 
For every distribution, the mean and the median lie within one standard deviation of each other:|ν−μ|≤σ{\displaystyle |\nu -\mu |\leq \sigma }.[5][6]
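The earlier claim that the empirical measure minimizes cross-entropy (total surprisal) can be checked numerically for the heads/tails example; a sketch assuming a Bernoulli model with parameter p = P(heads):

```python
import numpy as np

# Data set from the article: 2 heads, 1 tails (1 = heads, 0 = tails).
data = np.array([1, 1, 0])

def total_surprisal(p):
    # Negative log-likelihood of the data under P(heads) = p,
    # i.e. the total surprisal (cross-entropy up to a constant factor).
    return -np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

ps = np.linspace(0.001, 0.999, 999)
best_p = ps[np.argmin([total_surprisal(p) for p in ps])]
print(best_p)  # close to 2/3, the empirical frequency of heads
```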
https://en.wikipedia.org/wiki/Central_tendency
Inprobability theory, theconditional expectation,conditional expected value, orconditional meanof arandom variableis itsexpected valueevaluated with respect to theconditional probability distribution. If the random variable can take on only a finite number of values, the "conditions" are that the variable can only take on a subset of those values. More formally, in the case when the random variable is defined over a discreteprobability space, the "conditions" are apartitionof this probability space. Depending on the context, the conditional expectation can be either a random variable or a function. The random variable is denotedE(X∣Y){\displaystyle E(X\mid Y)}analogously toconditional probability. The function form is either denotedE(X∣Y=y){\displaystyle E(X\mid Y=y)}or a separate function symbol such asf(y){\displaystyle f(y)}is introduced with the meaningE(X∣Y)=f(Y){\displaystyle E(X\mid Y)=f(Y)}. Consider the roll of a fair die and letA= 1 if the number is even (i.e., 2, 4, or 6) andA= 0 otherwise. Furthermore, letB= 1 if the number is prime (i.e., 2, 3, or 5) andB= 0 otherwise. The unconditional expectation of A isE[A]=(0+1+0+1+0+1)/6=1/2{\displaystyle E[A]=(0+1+0+1+0+1)/6=1/2}, but the expectation of AconditionalonB= 1 (i.e., conditional on the die roll being 2, 3, or 5) isE[A∣B=1]=(1+0+0)/3=1/3{\displaystyle E[A\mid B=1]=(1+0+0)/3=1/3}, and the expectation of A conditional onB= 0 (i.e., conditional on the die roll being 1, 4, or 6) isE[A∣B=0]=(0+1+1)/3=2/3{\displaystyle E[A\mid B=0]=(0+1+1)/3=2/3}. Likewise, the expectation of B conditional on A = 1 isE[B∣A=1]=(1+0+0)/3=1/3{\displaystyle E[B\mid A=1]=(1+0+0)/3=1/3}, and the expectation ofBconditional onA= 0 isE[B∣A=0]=(0+1+1)/3=2/3{\displaystyle E[B\mid A=0]=(0+1+1)/3=2/3}. Suppose we have daily rainfall data (mm of rain each day) collected by a weather station on every day of the ten-year (3652-day) period from January 1, 1990, to December 31, 1999. 
The unconditional expectation of rainfall for an unspecified day is the average of the rainfall amounts for those 3652 days. Theconditionalexpectation of rainfall for an otherwise unspecified day known to be (conditional on being) in the month of March, is the average of daily rainfall over all 310 days of the ten–year period that fall in March. Similarly, the conditional expectation of rainfall conditional on days dated March 2 is the average of the rainfall amounts that occurred on the ten days with that specific date. The related concept ofconditional probabilitydates back at least toLaplace, who calculated conditional distributions. It wasAndrey Kolmogorovwho, in 1933, formalized it using theRadon–Nikodym theorem.[1]In works ofPaul Halmos[2]andJoseph L. Doob[3]from 1953, conditional expectation was generalized to its modern definition usingsub-σ-algebras.[4] IfAis an event inF{\displaystyle {\mathcal {F}}}with nonzero probability, andXis adiscrete random variable, the conditional expectation ofXgivenAis where the sum is taken over all possible outcomes ofX. IfP(A)=0{\displaystyle P(A)=0}, the conditional expectation is undefined due to the division by zero. IfXandYarediscrete random variables, the conditional expectation ofXgivenYis whereP(X=x,Y=y){\displaystyle P(X=x,Y=y)}is thejoint probability mass functionofXandY. The sum is taken over all possible outcomes ofX. Remark that as above the expression is undefined ifP(Y=y)=0{\displaystyle P(Y=y)=0}. Conditioning on a discrete random variable is the same as conditioning on the corresponding event: whereAis the set{Y=y}{\displaystyle \{Y=y\}}. 
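The die example above can be reproduced exactly from the discrete definition of conditional expectation; a sketch using exact rational arithmetic:

```python
from fractions import Fraction

rolls = range(1, 7)                                 # fair six-sided die
A = {r: 1 if r % 2 == 0 else 0 for r in rolls}      # indicator: roll is even
B = {r: 1 if r in (2, 3, 5) else 0 for r in rolls}  # indicator: roll is prime

def cond_mean(f, given, value):
    # E[f | given = value] on a uniform sample space:
    # average f over the outcomes where the condition holds.
    outcomes = [r for r in rolls if given[r] == value]
    return Fraction(sum(f[r] for r in outcomes), len(outcomes))

print(cond_mean(A, B, 1))  # E[A | B = 1] = 1/3
print(cond_mean(A, B, 0))  # E[A | B = 0] = 2/3
print(cond_mean(B, A, 1))  # E[B | A = 1] = 1/3
print(cond_mean(B, A, 0))  # E[B | A = 0] = 2/3
```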
LetX{\displaystyle X}andY{\displaystyle Y}becontinuous random variableswith joint densityfX,Y(x,y),{\displaystyle f_{X,Y}(x,y),}Y{\displaystyle Y}'s densityfY(y),{\displaystyle f_{Y}(y),}and conditional densityfX∣Y(x∣y)=fX,Y(x,y)fY(y){\displaystyle \textstyle f_{X\mid Y}(x\mid y)={\frac {f_{X,Y}(x,y)}{f_{Y}(y)}}}ofX{\displaystyle X}given the eventY=y.{\displaystyle Y=y.}The conditional expectation ofX{\displaystyle X}givenY=y{\displaystyle Y=y}is When the denominator is zero, the expression is undefined. Conditioning on a continuous random variable is not the same as conditioning on the event{Y=y}{\displaystyle \{Y=y\}}as it was in the discrete case. For a discussion, seeConditioning on an event of probability zero. Not respecting this distinction can lead to contradictory conclusions as illustrated by theBorel-Kolmogorov paradox. All random variables in this section are assumed to be inL2{\displaystyle L^{2}}, that issquare integrable. In its full generality, conditional expectation is developed without this assumption, see below underConditional expectation with respect to a sub-σ-algebra. TheL2{\displaystyle L^{2}}theory is, however, considered more intuitive[5]and admitsimportant generalizations. In the context ofL2{\displaystyle L^{2}}random variables, conditional expectation is also calledregression. In what follows let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be a probability space, andX:Ω→R{\displaystyle X:\Omega \to \mathbb {R} }inL2{\displaystyle L^{2}}with meanμX{\displaystyle \mu _{X}}andvarianceσX2{\displaystyle \sigma _{X}^{2}}. The expectationμX{\displaystyle \mu _{X}}minimizes themean squared error: The conditional expectation ofXis defined analogously, except instead of a single numberμX{\displaystyle \mu _{X}}, the result will be a functioneX(y){\displaystyle e_{X}(y)}. LetY:Ω→Rn{\displaystyle Y:\Omega \to \mathbb {R} ^{n}}be arandom vector. 
The conditional expectation eX:Rn→R{\displaystyle e_{X}:\mathbb {R} ^{n}\to \mathbb {R} } is a measurable function such that Note that unlike μX{\displaystyle \mu _{X}}, the conditional expectation eX{\displaystyle e_{X}} is not generally unique: there may be multiple minimizers of the mean squared error. Example 1: Consider the case where Y is the constant random variable that is always 1. Then the mean squared error is minimized by any function of the form Example 2: Consider the case where Y is the 2-dimensional random vector (X,2X){\displaystyle (X,2X)}. Then clearly but in terms of functions it can be expressed as eX(y1,y2)=3y1−y2{\displaystyle e_{X}(y_{1},y_{2})=3y_{1}-y_{2}} or eX′(y1,y2)=y2−y1{\displaystyle e'_{X}(y_{1},y_{2})=y_{2}-y_{1}} or infinitely many other ways. In the context of linear regression, this lack of uniqueness is called multicollinearity. Conditional expectation is unique up to a set of measure zero in Rn{\displaystyle \mathbb {R} ^{n}}. The measure used is the pushforward measure induced by Y. In the first example, the pushforward measure is a Dirac distribution at 1. In the second it is concentrated on the "diagonal" {y:y2=2y1}{\displaystyle \{y:y_{2}=2y_{1}\}}, so that any set not intersecting it has measure 0. The existence of a minimizer for ming⁡E⁡((X−g(Y))2){\displaystyle \min _{g}\operatorname {E} \left((X-g(Y))^{2}\right)} is non-trivial. It can be shown that is a closed subspace of the Hilbert space L2(Ω){\displaystyle L^{2}(\Omega )}.[6] By the Hilbert projection theorem, the necessary and sufficient condition for eX{\displaystyle e_{X}} to be a minimizer is that for all f(Y){\displaystyle f(Y)} in M we have In words, this equation says that the residual X−eX(Y){\displaystyle X-e_{X}(Y)} is orthogonal to the space M of all functions of Y. This orthogonality condition, applied to the indicator functions f(Y)=1Y∈H{\displaystyle f(Y)=1_{Y\in H}}, is used below to extend conditional expectation to the case that X and Y are not necessarily in L2{\displaystyle L^{2}}. 
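The orthogonality of the residual to functions of Y can be checked empirically; a simulation sketch (the toy model below, with Y taking two values, is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: Y uniform on {0, 1}, X = Y + standard normal noise.
n = 100_000
y = rng.integers(0, 2, n).astype(float)
x = y + rng.normal(0.0, 1.0, n)

# Empirical conditional expectation e_X(Y): the binned conditional mean.
e_x = np.where(y == 0, x[y == 0].mean(), x[y == 1].mean())
residual = x - e_x

# The residual is (empirically) orthogonal to any function of Y,
# e.g. f(Y) = Y and f(Y) = Y**2 + 1.
print(np.mean(residual * y))           # essentially 0
print(np.mean(residual * (y**2 + 1)))  # essentially 0
```

Because Y takes only two values here, any f(Y) is constant on each bin and the binned residuals sum to zero within each bin, so the inner products vanish up to floating-point rounding.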
The conditional expectation is often approximated inapplied mathematicsandstatisticsdue to the difficulties in analytically calculating it, and for interpolation.[7] The Hilbert subspace defined above is replaced with subsets thereof by restricting the functional form ofg, rather than allowing any measurable function. Examples of this aredecision tree regressionwhengis required to be asimple function,linear regressionwhengis required to beaffine, etc. These generalizations of conditional expectation come at the cost of many ofits propertiesno longer holding. For example, letMbe the space of all linear functions ofYand letEM{\displaystyle {\mathcal {E}}_{M}}denote this generalized conditional expectation/L2{\displaystyle L^{2}}projection. IfM{\displaystyle M}does not contain theconstant functions, thetower propertyE⁡(EM(X))=E⁡(X){\displaystyle \operatorname {E} ({\mathcal {E}}_{M}(X))=\operatorname {E} (X)}will not hold. An important special case is whenXandYare jointly normally distributed. In this case it can be shown that the conditional expectation is equivalent to linear regression: for coefficients{αi}i=0..n{\displaystyle \{\alpha _{i}\}_{i=0..n}}described inMultivariate normal distribution#Conditional distributions. Consider the following: SinceH{\displaystyle {\mathcal {H}}}is a subσ{\displaystyle \sigma }-algebra ofF{\displaystyle {\mathcal {F}}}, the functionX:Ω→Rn{\displaystyle X\colon \Omega \to \mathbb {R} ^{n}}is usually notH{\displaystyle {\mathcal {H}}}-measurable, thus the existence of the integrals of the form∫HXdP|H{\textstyle \int _{H}X\,dP|_{\mathcal {H}}}, whereH∈H{\displaystyle H\in {\mathcal {H}}}andP|H{\displaystyle P|_{\mathcal {H}}}is the restriction ofP{\displaystyle P}toH{\displaystyle {\mathcal {H}}}, cannot be stated in general. However, the local averages∫HXdP{\textstyle \int _{H}X\,dP}can be recovered in(Ω,H,P|H){\displaystyle (\Omega ,{\mathcal {H}},P|_{\mathcal {H}})}with the help of the conditional expectation. 
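The jointly normal special case above can be illustrated by simulation; the parameters below (means, variances, regression coefficient) are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Jointly normal (X, Y): Y ~ N(0, 1), X = 2 + 1.5 * Y + independent noise.
n = 200_000
y = rng.normal(0.0, 1.0, n)
x = 2.0 + 1.5 * y + rng.normal(0.0, 0.5, n)

# For this construction the conditional expectation is exactly the
# linear function E[X | Y = y] = 2 + 1.5 * y.

# Empirical conditional mean of X for Y near 1.0:
mask = np.abs(y - 1.0) < 0.05
print(x[mask].mean())  # close to 2 + 1.5 * 1.0 = 3.5
```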
Aconditional expectationofXgivenH{\displaystyle {\mathcal {H}}}, denoted asE⁡(X∣H){\displaystyle \operatorname {E} (X\mid {\mathcal {H}})}, is anyH{\displaystyle {\mathcal {H}}}-measurable functionΩ→Rn{\displaystyle \Omega \to \mathbb {R} ^{n}}which satisfies: for eachH∈H{\displaystyle H\in {\mathcal {H}}}.[8] As noted in theL2{\displaystyle L^{2}}discussion, this condition is equivalent to saying that theresidualX−E⁡(X∣H){\displaystyle X-\operatorname {E} (X\mid {\mathcal {H}})}is orthogonal to the indicator functions1H{\displaystyle 1_{H}}: The existence ofE⁡(X∣H){\displaystyle \operatorname {E} (X\mid {\mathcal {H}})}can be established by noting thatμX:F↦∫FXdP{\textstyle \mu ^{X}\colon F\mapsto \int _{F}X\,\mathrm {d} P}forF∈F{\displaystyle F\in {\mathcal {F}}}is a finite measure on(Ω,F){\displaystyle (\Omega ,{\mathcal {F}})}that isabsolutely continuouswith respect toP{\displaystyle P}. Ifh{\displaystyle h}is thenatural injectionfromH{\displaystyle {\mathcal {H}}}toF{\displaystyle {\mathcal {F}}}, thenμX∘h=μX|H{\displaystyle \mu ^{X}\circ h=\mu ^{X}|_{\mathcal {H}}}is the restriction ofμX{\displaystyle \mu ^{X}}toH{\displaystyle {\mathcal {H}}}andP∘h=P|H{\displaystyle P\circ h=P|_{\mathcal {H}}}is the restriction ofP{\displaystyle P}toH{\displaystyle {\mathcal {H}}}. Furthermore,μX∘h{\displaystyle \mu ^{X}\circ h}is absolutely continuous with respect toP∘h{\displaystyle P\circ h}, because the condition implies Thus, we have where the derivatives areRadon–Nikodym derivativesof measures. 
Consider, in addition to the above, The conditional expectation ofXgivenYis defined by applying the above construction on theσ-algebra generated byY: By theDoob–Dynkin lemma, there exists a functioneX:U→Rn{\displaystyle e_{X}\colon U\to \mathbb {R} ^{n}}such that For a Borel subsetBinB(Rn){\displaystyle {\mathcal {B}}(\mathbb {R} ^{n})}, one can consider the collection of random variables It can be shown that they form aMarkov kernel, that is, for almost allω{\displaystyle \omega },κH(ω,−){\displaystyle \kappa _{\mathcal {H}}(\omega ,-)}is a probability measure.[9] TheLaw of the unconscious statisticianis then This shows that conditional expectations are, like their unconditional counterparts, integrations, against a conditional measure. In full generality, consider: Theconditional expectationofX{\displaystyle X}givenH{\displaystyle {\mathcal {H}}}is the up to aP{\displaystyle P}-nullset unique and integrableE{\displaystyle E}-valuedH{\displaystyle {\mathcal {H}}}-measurable random variableE⁡(X∣H){\displaystyle \operatorname {E} (X\mid {\mathcal {H}})}satisfying for allH∈H{\displaystyle H\in {\mathcal {H}}}.[10][11] In this setting the conditional expectation is sometimes also denoted in operator notation asEH⁡X{\displaystyle \operatorname {E} ^{\mathcal {H}}X}. All the following formulas are to be understood in an almost sure sense. Theσ-algebraH{\displaystyle {\mathcal {H}}}could be replaced by a random variableZ{\displaystyle Z}, i.e.H=σ(Z){\displaystyle {\mathcal {H}}=\sigma (Z)}. LetB∈H{\displaystyle B\in {\mathcal {H}}}. 
ThenX{\displaystyle X}is independent of1B{\displaystyle 1_{B}}, so we get that Thus the definition of conditional expectation is satisfied by the constant random variableE(X){\displaystyle E(X)}, as desired.◻{\displaystyle \square } For eachH∈H{\displaystyle H\in {\mathcal {H}}}we have∫HE(X∣H)dP=∫HXdP{\displaystyle \int _{H}E(X\mid {\mathcal {H}})\,dP=\int _{H}X\,dP}, or equivalently Since this is true for eachH∈H{\displaystyle H\in {\mathcal {H}}}, and bothE(X∣H){\displaystyle E(X\mid {\mathcal {H}})}andX{\displaystyle X}areH{\displaystyle {\mathcal {H}}}-measurable (the former property holds by definition; the latter property is key here), from this one can show And this impliesE(X∣H)=X{\displaystyle E(X\mid {\mathcal {H}})=X}almost everywhere.◻{\displaystyle \square } All random variables here are assumed without loss of generality to be non-negative. The general case can be treated withX=X+−X−{\displaystyle X=X^{+}-X^{-}}. FixA∈H{\displaystyle A\in {\mathcal {H}}}and letX=1A{\displaystyle X=1_{A}}. Then for anyH∈H{\displaystyle H\in {\mathcal {H}}} HenceE(1AY∣H)=1AE(Y∣H){\displaystyle E(1_{A}Y\mid {\mathcal {H}})=1_{A}E(Y\mid {\mathcal {H}})}almost everywhere. Any simple function is a finite linear combination of indicator functions. By linearity the above property holds for simple functions: ifXn{\displaystyle X_{n}}is a simple function thenE(XnY∣H)=XnE(Y∣H){\displaystyle E(X_{n}Y\mid {\mathcal {H}})=X_{n}\,E(Y\mid {\mathcal {H}})}. Now letX{\displaystyle X}beH{\displaystyle {\mathcal {H}}}-measurable. Then there exists a sequence of simple functions{Xn}n≥1{\displaystyle \{X_{n}\}_{n\geq 1}}converging monotonically (here meaningXn≤Xn+1{\displaystyle X_{n}\leq X_{n+1}}) and pointwise toX{\displaystyle X}. Consequently, forY≥0{\displaystyle Y\geq 0}, the sequence{XnY}n≥1{\displaystyle \{X_{n}Y\}_{n\geq 1}}converges monotonically and pointwise toXY{\displaystyle XY}. 
Also, sinceE(Y∣H)≥0{\displaystyle E(Y\mid {\mathcal {H}})\geq 0}, the sequence{XnE(Y∣H)}n≥1{\displaystyle \{X_{n}E(Y\mid {\mathcal {H}})\}_{n\geq 1}}converges monotonically and pointwise toXE(Y∣H){\displaystyle X\,E(Y\mid {\mathcal {H}})} Combining the special case proved for simple functions, the definition of conditional expectation, and deploying the monotone convergence theorem: This holds for allH∈H{\displaystyle H\in {\mathcal {H}}}, whenceXE(Y∣H)=E(XY∣H){\displaystyle X\,E(Y\mid {\mathcal {H}})=E(XY\mid {\mathcal {H}})}almost everywhere.◻{\displaystyle \square }
https://en.wikipedia.org/wiki/Conditional_expectation
In the case ofuncertainty,expectationis what is considered the most likely to happen. An expectation, which is abeliefthat is centered on thefuture, may or may not be realistic. A less advantageous result gives rise to theemotionofdisappointment. If something happens that is not at all expected, it is asurprise. An expectation about the behavior or performance of another person, expressed to that person, may have the nature of a strong request, or an order; this kind of expectation is called asocial norm. The degree to which something is expected to be true can be expressed usingfuzzy logic.Anticipationis the emotion corresponding to expectation. Richard Lazarusasserts that people become accustomed to positive or negative life experiences which lead to favorable or unfavorable expectations of their present and near-future circumstances. Lazarus notes the widely accepted philosophical principle that "happiness depends on the background psychological status of the person...and cannot be well predicted without reference to one's expectations."[1] With regard to happiness or unhappiness, Lazarus notes that "people whose objective conditions of life are those of hardship and deprivation often make a positive assessment of their well-being," while "people who are objectively well off...often make a negative assessment of their well-being." Lazarus argues that "the most sensible explanation of this apparent paradox is that people...develop favorable or unfavorableexpectations" that guide such assessments.[1] Irving Kirsch, a renowned psychological researcher, writes about "response-expectancies" which are: expectations about non-volitionalresponses. 
For example, science commonly takes placebo effects into account when testing new drugs against subjects' expectations of those drugs: if you expect to receive a drug that may help with depression, and you feel better after taking it, but the drug is just a salt tablet (better known as a placebo), then the benefit of feeling better (i.e. your non-volitional response) would be based on your expectations rather than on any properties of the placebo (i.e. the salt tablet).[2] Sociologist Robert K. Merton wrote that a person's expectation is directly linked to self-fulfilling prophecy. Whether such an expectation is truthful has little or no effect on the outcome. If a person believes what they are told or convinces themselves of the fact, chances are this person will see the expectation to its inevitable conclusion. There is an inherent danger in this kind of labeling, especially for the educator. Since children are easily convinced of certain tenets, especially when told to them by an authority figure like a parent or teacher, they may believe whatever is taught to them even if what is taught has no factual basis. If the student or child were to act on false information, certain positive or negative unintended consequences could result. If overly positive or elevated expectations were used to describe or manipulate a person's self-image and execution falls short, the result could be a total reversal of that person's self-confidence. If thought of in terms of causality, or cause and effect, the higher a person's expectation and the lower the execution, the higher the frustration level may become. This in turn could cause a total cessation of effort and motivate the person to quit.[citation needed] Expectations are a central part of value calculations in economics. For example, calculating the subjective expected utility of an outcome requires knowing both the value of an outcome and the probability that it will occur. 
Researchers who elicit (or measure) the expectations of individuals can input these beliefs into the model in place of standard probabilities. The strategy of eliciting individual expectations is now incorporated into many international surveys, including theHealth and Retirement Studyin the United States. Expectations elicitation is used in many domains, including survival and educational outcomes, but may be most prominent in financial realms. Expectations are theoretically important for models such as theEfficient-market hypothesiswhich suggest that all information should be incorporated into the market, as well as forModern portfolio theorywhich suggests that investors must be compensated for higher levels of risk through higher (expected) returns. Following these models, empirical research has found that consumers with more optimistic stock market expectations are more likely to hold riskier assets,[3]and acquire stocks in the near future.[4]Given these promising findings, more recent research in psychology has begun to explore what factors drive consumers' expectations by exploring what factors come to mind when forming stock market expectations.[5]
https://en.wikipedia.org/wiki/Expectation_(epistemic)
In the mathematical theory ofprobability, theexpectilesof aprobability distributionare related to theexpected valueof the distribution in a way analogous to that in which thequantilesof the distribution are related to themedian. Forτ∈(0,1){\textstyle \tau \in (0,1)}, the expectile of the probability distribution with cumulative distribution functionF{\textstyle F}is characterized by any of the following equivalent conditions:[1][2][3] Quantile regressionminimizes an asymmetricL1{\displaystyle L_{1}}loss (seeleast absolute deviations). Analogously, expectile regression minimizes an asymmetricL2{\displaystyle L_{2}}loss (seeordinary least squares): whereH{\displaystyle H}is theHeaviside step function.
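A sketch of expectile computation by minimizing the asymmetric squared loss described above (grid search; the data set, grid, and the exact weighting convention are illustrative assumptions):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])

def asymmetric_l2(q, tau):
    # Weight squared deviations by tau above q and (1 - tau) below q.
    w = np.where(x >= q, tau, 1 - tau)
    return np.sum(w * (x - q) ** 2)

def expectile(tau, grid=np.linspace(0.0, 12.0, 12001)):
    return grid[np.argmin([asymmetric_l2(q, tau) for q in grid])]

print(expectile(0.5))  # the 0.5-expectile is the mean: 20 / 5 = 4.0
print(expectile(0.1), expectile(0.9))  # expectiles increase with tau
```

For τ = 1/2 the loss is symmetric, recovering the mean, just as the 1/2-quantile of the asymmetric L1 loss recovers the median.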
https://en.wikipedia.org/wiki/Expectile
The proposition in probability theory known as the law of total expectation,[1] the law of iterated expectations[2] (LIE), Adam's law,[3] the tower rule,[4] and the smoothing property of conditional expectation,[5] among other names, states that if X{\displaystyle X} is a random variable whose expected value E⁡(X){\displaystyle \operatorname {E} (X)} is defined, and Y{\displaystyle Y} is any random variable on the same probability space, then E⁡(E⁡(X∣Y))=E⁡(X),{\displaystyle \operatorname {E} (\operatorname {E} (X\mid Y))=\operatorname {E} (X),} i.e., the expected value of the conditional expected value of X{\displaystyle X} given Y{\displaystyle Y} is the same as the expected value of X{\displaystyle X}. The conditional expected value E⁡(X∣Y){\displaystyle \operatorname {E} (X\mid Y)}, with Y{\displaystyle Y} a random variable, is not a simple number; it is a random variable whose value depends on the value of Y{\displaystyle Y}. That is, the conditional expected value of X{\displaystyle X} given the event Y=y{\displaystyle Y=y} is a number and it is a function of y{\displaystyle y}. If we write g(y){\displaystyle g(y)} for the value of E⁡(X∣Y=y){\displaystyle \operatorname {E} (X\mid Y=y)} then the random variable E⁡(X∣Y){\displaystyle \operatorname {E} (X\mid Y)} is g(Y){\displaystyle g(Y)}. One special case states that if {Ai}{\displaystyle {\left\{A_{i}\right\}}} is a finite or countable partition of the sample space, then E⁡(X)=∑iE⁡(X∣Ai)P⁡(Ai).{\displaystyle \operatorname {E} (X)=\sum _{i}\operatorname {E} (X\mid A_{i})\operatorname {P} (A_{i}).} Suppose that only two factories supply light bulbs to the market. Factory X{\displaystyle X}'s bulbs work for an average of 5000 hours, whereas factory Y{\displaystyle Y}'s bulbs work for an average of 4000 hours. It is known that factory X{\displaystyle X} supplies 60% of the total bulbs available. What is the expected length of time that a purchased bulb will work for? Letting L{\displaystyle L} denote the lifetime of a purchased bulb and applying the law of total expectation, we have E⁡(L)=E⁡(L∣X)P⁡(X)+E⁡(L∣Y)P⁡(Y)=5000⋅0.6+4000⋅0.4=4600,{\displaystyle \operatorname {E} (L)=\operatorname {E} (L\mid X)\operatorname {P} (X)+\operatorname {E} (L\mid Y)\operatorname {P} (Y)=5000\cdot 0.6+4000\cdot 0.4=4600,} where E⁡(L∣X){\displaystyle \operatorname {E} (L\mid X)} and E⁡(L∣Y){\displaystyle \operatorname {E} (L\mid Y)} are the expected lifetimes given that the bulb came from factory X{\displaystyle X} or Y{\displaystyle Y}. Thus each purchased light bulb has an expected lifetime of 4600 hours. 
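The light-bulb computation written out directly:

```python
# Law of total expectation over the two-factory partition:
# E[L] = E[L | X] P(X) + E[L | Y] P(Y)
p_x, p_y = 0.6, 0.4          # market shares of factories X and Y
mean_x, mean_y = 5000, 4000  # conditional expected lifetimes in hours

expected_lifetime = mean_x * p_x + mean_y * p_y
print(expected_lifetime)  # 5000 * 0.6 + 4000 * 0.4 = 4600 hours
```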
When a jointprobability density functioniswell definedand the expectations areintegrable, we write for the general caseE⁡(X)=∫xPr[X=x]dxE⁡(X∣Y=y)=∫xPr[X=x∣Y=y]dxE⁡(E⁡(X∣Y))=∫(∫xPr[X=x∣Y=y]dx)Pr[Y=y]dy=∫∫xPr[X=x,Y=y]dxdy=∫x(∫Pr[X=x,Y=y]dy)dx=∫xPr[X=x]dx=E⁡(X).{\displaystyle {\begin{aligned}\operatorname {E} (X)&=\int x\Pr[X=x]~dx\\\operatorname {E} (X\mid Y=y)&=\int x\Pr[X=x\mid Y=y]~dx\\\operatorname {E} (\operatorname {E} (X\mid Y))&=\int \left(\int x\Pr[X=x\mid Y=y]~dx\right)\Pr[Y=y]~dy\\&=\int \int x\Pr[X=x,Y=y]~dx~dy\\&=\int x\left(\int \Pr[X=x,Y=y]~dy\right)~dx\\&=\int x\Pr[X=x]~dx\\&=\operatorname {E} (X)\,.\end{aligned}}}A similar derivation works for discrete distributions using summation instead of integration. For the specific case of a partition, give each cell of the partition a unique label and let the random variableYbe the function of the sample space that assigns a cell's label to each point in that cell. Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}be a probability space on which two subσ-algebrasG1⊆G2⊆F{\displaystyle {\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}\subseteq {\mathcal {F}}}are defined. For a random variableX{\displaystyle X}on such a space, the smoothing law states that ifE⁡[X]{\displaystyle \operatorname {E} [X]}is defined, i.e.min(E⁡[X+],E⁡[X−])<∞{\displaystyle \min(\operatorname {E} [X_{+}],\operatorname {E} [X_{-}])<\infty }, then Proof. Since a conditional expectation is aRadon–Nikodym derivative, verifying the following two properties establishes the smoothing law: The first of these properties holds by definition of the conditional expectation. To prove the second one, so the integral∫G1XdP{\displaystyle \textstyle \int _{G_{1}}X\,d\operatorname {P} }is defined (not equal∞−∞{\displaystyle \infty -\infty }). 
The second property thus holds sinceG1∈G1⊆G2{\displaystyle G_{1}\in {\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}}implies Corollary.In the special case whenG1={∅,Ω}{\displaystyle {\mathcal {G}}_{1}=\{\emptyset ,\Omega \}}andG2=σ(Y){\displaystyle {\mathcal {G}}_{2}=\sigma (Y)}, the smoothing law reduces to Alternative proof forE⁡[E⁡[X∣Y]]=E⁡[X].{\displaystyle \operatorname {E} [\operatorname {E} [X\mid Y]]=\operatorname {E} [X].} This is a simple consequence of the measure-theoretic definition ofconditional expectation. By definition,E⁡[X∣Y]:=E⁡[X∣σ(Y)]{\displaystyle \operatorname {E} [X\mid Y]:=\operatorname {E} [X\mid \sigma (Y)]}is aσ(Y){\displaystyle \sigma (Y)}-measurable random variable that satisfies for every measurable setA∈σ(Y){\displaystyle A\in \sigma (Y)}. TakingA=Ω{\displaystyle A=\Omega }proves the claim.
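The corollary E[E[X | Y]] = E[X] can be checked by simulation; the distributions below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Y uniform on {0, 1, 2}; given Y = y, X ~ Normal(mean = y, sd = 1).
n = 500_000
y = rng.integers(0, 3, n)
x = rng.normal(y, 1.0, n)

# Inner expectation: empirical E[X | Y = y] for each value of y.
cond_means = {v: x[y == v].mean() for v in (0, 1, 2)}
# Outer expectation: weight the conditional means by P(Y = y).
tower = sum(cond_means[v] * np.mean(y == v) for v in (0, 1, 2))

print(tower)     # close to E[X] = (0 + 1 + 2) / 3 = 1.0
print(x.mean())  # direct estimate of E[X], matching the tower value
```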
https://en.wikipedia.org/wiki/Law_of_total_expectation
The median of a set of numbers is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as the "middle" value. The basic feature of the median in describing data compared to the mean (often simply described as the "average") is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of the center. Median income, for example, may be a better way to describe the center of the income distribution because increases in the largest incomes alone have no effect on the median. For this reason, the median is of central importance in robust statistics. The median is a 2-quantile; it is the value that partitions a set into two equal parts. The median of a finite list of numbers is the "middle" number, when those numbers are listed in order from smallest to greatest. If the data set has an odd number of observations, the middle one is selected (after arranging in ascending order). For example, the list of seven numbers 1, 3, 3, 6, 7, 8, 9 has the median of 6, which is the fourth value. If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the arithmetic mean of the two middle values.[1][2] For example, the data set of eight numbers 1, 2, 3, 4, 5, 6, 8, 9 has a median value of 4.5, that is (4+5)/2{\displaystyle (4+5)/2}. (In more technical terms, this interprets the median as the fully trimmed mid-range). In general, with this convention, the median can be defined as follows: For a data set x{\displaystyle x} of n{\displaystyle n} elements, ordered from smallest to greatest, Formally, a median of a population is any value such that at least half of the population is less than or equal to the proposed median and at least half is greater than or equal to the proposed median. As seen above, medians may not be unique. 
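The odd/even rule above can be sketched in a few lines of Python (the two sample lists are illustrative):

```python
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]                 # odd count: the single middle element
    return (s[mid - 1] + s[mid]) / 2  # even count: mean of the two middle elements

print(median([1, 3, 3, 6, 7, 8, 9]))     # 6
print(median([1, 2, 3, 4, 5, 6, 8, 9]))  # (4 + 5) / 2 = 4.5
```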
If each set contains more than half the population, then some of the population is exactly equal to the unique median. The median is well-defined for anyordered(one-dimensional) data and is independent of anydistance metric. The median can thus be applied to school classes which are ranked but not numerical (e.g. working out a median grade when student test scores are graded from F to A), although the result might be halfway between classes if there is an even number of classes. (For odd number classes, one specific class is determined as the median.) Ageometric median, on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is themedoid. There is no widely accepted standard notation for the median, but some authors represent the median of a variablexas med(x),x͂,[3]asμ1/2,[1]or asM.[3][4]In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced. The median is a special case of otherways of summarizing the typical values associated with a statistical distribution: it is the 2ndquartile, 5thdecile, and 50thpercentile. The median can be used as a measure oflocationwhen one attaches reduced importance to extreme values, typically because a distribution isskewed, extreme values are not known, oroutliersare untrustworthy, i.e., may be measurement or transcription errors. For example, consider themultiset The median is 2 in this case, as is themode, and it might be seen as a better indication of thecenterthan thearithmetic meanof 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. 
At most, one can say that the two statistics cannot be "too far" apart; see § Inequality relating means and medians below.[5] As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated.[6] Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation. For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient if, and only if, the data are uncontaminated by data from heavy-tailed distributions or from mixtures of distributions.[citation needed] Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be about 57% greater than the variance of the mean.[7][8] For any real-valued probability distribution with cumulative distribution function F, a median is defined as any real number m that satisfies the inequalities {\displaystyle \lim _{x\to m-}F(x)\leq {\frac {1}{2}}\leq F(m)} (cf. the drawing in the definition of expected value for arbitrary real-valued random variables).
An equivalent phrasing uses a random variable X distributed according to F: {\displaystyle \operatorname {P} (X\leq m)\geq {\frac {1}{2}}{\text{ and }}\operatorname {P} (X\geq m)\geq {\frac {1}{2}}\,.} Note that this definition does not require X to have an absolutely continuous distribution (which has a probability density function f), nor does it require a discrete one. In the former case, the inequalities can be upgraded to equalities: a median satisfies {\displaystyle \operatorname {P} (X\leq m)=\int _{-\infty }^{m}{f(x)\,dx}={\frac {1}{2}}} and {\displaystyle \operatorname {P} (X\geq m)=\int _{m}^{\infty }{f(x)\,dx}={\frac {1}{2}}\,.} Any probability distribution on the real number set R has at least one median, but in pathological cases there may be more than one median: if F is constant 1/2 on an interval (so that f = 0 there), then any value of that interval is a median. The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution. The mean absolute error of a real variable c with respect to the random variable X is {\displaystyle \operatorname {E} \left[\left|X-c\right|\right].} Provided that the probability distribution of X is such that the above expectation exists, m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X.[11] In particular, if m is a sample median, then it minimizes the arithmetic mean of the absolute deviations.[12] Note, however, that in cases where the sample contains an even number of elements, this minimizer is not unique. More generally, a median is defined as a minimizer of {\displaystyle \operatorname {E} \left[\left|X-c\right|-\left|X\right|\right],} as discussed below in the section on multivariate medians (specifically, the spatial median).
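The minimization property can be checked numerically. The sketch below (illustrative, with data chosen by us) evaluates the mean absolute error over a grid of candidate centers and confirms that the minimizer sits at the sample median:

```python
data = [1, 2, 2, 5, 9, 11, 16]  # odd-length sample, median is 5

def mean_abs_error(c, xs):
    """E|X - c| for the empirical distribution of xs."""
    return sum(abs(x - c) for x in xs) / len(xs)

# Scan candidate centers c on a fine grid covering the data range.
grid = [i / 100 for i in range(0, 1700)]
best = min(grid, key=lambda c: mean_abs_error(c, data))
print(best)  # 5.0, the sample median
```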
This optimization-based definition of the median is useful in statistical data analysis, for example, in k-medians clustering. If the distribution has finite variance, then the distance between the median X̃ and the mean X̄ is bounded by one standard deviation. This bound was proved by Book and Sher in 1979 for discrete samples,[13] and more generally by Page and Murty in 1982.[14] In a comment on a subsequent proof by O'Cinneide,[15] Mallows in 1991 presented a compact proof that uses Jensen's inequality twice,[16] as follows. Using |·| for the absolute value, we have {\displaystyle {\begin{aligned}\left|\mu -m\right|=\left|\operatorname {E} (X-m)\right|&\leq \operatorname {E} \left(\left|X-m\right|\right)\\[2ex]&\leq \operatorname {E} \left(\left|X-\mu \right|\right)\\[1ex]&\leq {\sqrt {\operatorname {E} \left({\left(X-\mu \right)}^{2}\right)}}=\sigma .\end{aligned}}} The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the absolute deviation function {\displaystyle a\mapsto \operatorname {E} [|X-a|]}. Mallows's proof can be generalized to obtain a multivariate version of the inequality[17] simply by replacing the absolute value with a norm: {\displaystyle \left\|\mu -m\right\|\leq {\sqrt {\operatorname {E} \left({\left\|X-\mu \right\|}^{2}\right)}}={\sqrt {\operatorname {trace} \left(\operatorname {var} (X)\right)}}} where m is a spatial median, that is, a minimizer of the function {\displaystyle a\mapsto \operatorname {E} (\|X-a\|).} The spatial median is unique when the data set's dimension is two or more.[18][19] An alternative proof uses the one-sided Chebyshev inequality; it appears in an inequality on location and scale parameters.
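The one-standard-deviation bound can be illustrated on a concrete sample (an illustrative check with data of our choosing, using the population standard deviation):

```python
from statistics import mean, median, pstdev

data = [1, 2, 2, 3, 50]  # strongly skewed by one large value
mu, m, sigma = mean(data), median(data), pstdev(data)
print(mu, m, sigma)
# Mallows's inequality: |mean - median| is at most one standard deviation.
assert abs(mu - m) <= sigma
```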
This formula also follows directly from Cantelli's inequality.[20] For the case of unimodal distributions, one can achieve a sharper bound on the distance between the median and the mean:[21] {\displaystyle \left|{\tilde {X}}-{\bar {X}}\right|\leq \left({\frac {3}{5}}\right)^{1/2}\sigma \approx 0.7746\sigma .} A similar relation holds between the median and the mode: {\displaystyle \left|{\tilde {X}}-\mathrm {mode} \right|\leq 3^{1/2}\sigma \approx 1.732\sigma .} A typical heuristic is that positively skewed distributions have mean > median. This is true for all members of the Pearson distribution family, but it is not always true. For example, the Weibull distribution family has members with positive skew but mean < median. Violations of the rule are particularly common for discrete distributions. For example, any Poisson distribution has positive skew, but its mean < median whenever {\displaystyle \mu {\bmod {1}}>\ln 2}.[22] See [23] for a proof sketch. When the distribution has a monotonically decreasing probability density, the median is less than the mean, as shown in the figure. Jensen's inequality states that for any random variable X with a finite expectation E[X] and for any convex function f, {\displaystyle f(\operatorname {E} (X))\leq \operatorname {E} (f(X)).} This inequality generalizes to the median as well. We say a function f: R → R is a C function if, for any t, {\displaystyle f^{-1}\left(\,(-\infty ,t]\,\right)=\{x\in \mathbb {R} \mid f(x)\leq t\}} is a closed interval (allowing the degenerate cases of a single point or an empty set). Every convex function is a C function, but the reverse does not hold.
If f is a C function, then {\displaystyle f(\operatorname {med} [X])\leq \operatorname {med} [f(X)].} If the medians are not unique, the statement holds for the corresponding suprema.[24] Even though comparison-sorting n items requires Ω(n log n) operations, selection algorithms can compute the kth-smallest of n items with only Θ(n) operations. This includes the median, which is the n/2th order statistic (or, for an even number of samples, the arithmetic mean of the two middle order statistics).[25] Selection algorithms still have the downside of requiring Ω(n) memory; that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median-of-three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ninther, which is the median-of-three rule applied with limited recursion:[26] if A is the sample laid out as an array, then ninther(A) = median3(median3 of the first third of A, median3 of the middle third, median3 of the last third). The remedian is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.[27] The distributions of both the sample mean and the sample median were determined by Laplace.[28] The distribution of the sample median from a population with a density function f(x) is asymptotically normal with mean m and variance[29] {\displaystyle {\frac {1}{4nf(m)^{2}}}} where m is the median of f(x) and n is the sample size: {\displaystyle {\text{Sample median}}\sim {\mathcal {N}}{\left(\mu {=}m,\,\sigma ^{2}{=}{\frac {1}{4nf(m)^{2}}}\right)}} A modern proof follows below.
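Tukey's ninther can be sketched as follows (an illustrative implementation; the choice of sampling the first, middle, and last element of each third is one common convention, and the function names are ours):

```python
def median3(a, b, c):
    """Median of three values."""
    return sorted([a, b, c])[1]

def ninther(A):
    """Tukey's ninther: median-of-three applied to the medians of
    three equal thirds of the sample (sampling first/middle/last
    of each third rather than sorting it)."""
    t = len(A) // 3
    thirds = [A[:t], A[t:2 * t], A[2 * t:]]
    return median3(*(median3(part[0], part[len(part) // 2], part[-1])
                     for part in thirds))

# For this 9-element sample the ninther hits the true median, 5.
print(ninther([2, 9, 1, 7, 4, 8, 3, 6, 5]))  # 5
```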
Laplace's result is now understood as a special case of the asymptotic distribution of arbitrary quantiles. For normal samples, the density is {\displaystyle f(m)=1/{\sqrt {2\pi \sigma ^{2}}}}, thus for large samples the variance of the median equals {\displaystyle ({\pi }/{2})\cdot (\sigma ^{2}/n).}[7] (See also section #Efficiency below.) We take the sample size to be an odd number N = 2n + 1 and assume our variable continuous; the formula for the case of discrete variables is given below in § Empirical local density. The sample can be summarized as "below median", "at median", and "above median", which corresponds to a trinomial distribution with probabilities F(v), f(v) and 1 − F(v). For a continuous variable, the probability of multiple sample values being exactly equal to the median is 0, so one can calculate the density at the point v directly from the trinomial distribution: {\displaystyle \Pr[\operatorname {med} =v]\,dv={\frac {(2n+1)!}{n!n!}}F(v)^{n}(1-F(v))^{n}f(v)\,dv.} Now we introduce the beta function. For integer arguments α and β, it can be expressed as {\displaystyle \mathrm {B} (\alpha ,\beta )={\frac {(\alpha -1)!(\beta -1)!}{(\alpha +\beta -1)!}}}. Also, recall that f(v) dv = dF(v). Using these relationships and setting both α and β equal to n + 1 allows the last expression to be written as {\displaystyle {\frac {F(v)^{n}(1-F(v))^{n}}{\mathrm {B} (n+1,n+1)}}\,dF(v)} Hence the density function of the median is a symmetric beta distribution pushed forward by F. Its mean, as we would expect, is 0.5 and its variance is 1/(4(N + 2)).
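The asymptotic variance 1/(4nf(m)²) can be checked by simulation; for standard normal samples it predicts Var(median) ≈ (π/2)/n. The sketch below is a rough Monte-Carlo check (the sample size, replication count, and tolerance are arbitrary choices of ours):

```python
import math
import random
from statistics import median, pvariance

random.seed(0)
n, reps = 101, 2000
# Variance of the sample median across many replications...
medians = [median(random.gauss(0, 1) for _ in range(n)) for _ in range(reps)]
observed = pvariance(medians)
# ...versus the asymptotic prediction (pi/2) * sigma^2 / n with sigma = 1.
predicted = (math.pi / 2) / n
print(observed, predicted)  # the two should roughly agree
```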
By the chain rule, the corresponding variance of the sample median is {\displaystyle {\frac {1}{4(N+2)f(m)^{2}}}.} The additional 2 is negligible in the limit. In practice, the functions f and F above are often not known or assumed. However, they can be estimated from an observed frequency distribution. In this section, we give an example. Consider the following table, representing a sample of 3,800 (discrete-valued) observations. Because the observations are discrete-valued, constructing the exact distribution of the median is not an immediate translation of the above expression for Pr(med = v); one may (and typically does) have multiple instances of the median in one's sample. So we must sum over all these possibilities: {\displaystyle \Pr(\operatorname {med} =v)=\sum _{i=0}^{n}\sum _{k=0}^{n}{\frac {N!}{i!(N-i-k)!k!}}F(v-1)^{i}(1-F(v))^{k}f(v)^{N-i-k}} Here, i is the number of points strictly less than the median and k the number strictly greater. Using these preliminaries, it is possible to investigate the effect of sample size on the standard errors of the mean and median. The observed mean is 3.16, the observed raw median is 3 and the observed interpolated median is 3.174. The following table gives some comparison statistics. The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportional to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error. The value of {\displaystyle (2f(x))^{-2}}, the asymptotic value of {\displaystyle n^{-1/2}(\nu -m)} where ν is the population median, has been studied by several authors.
The standard "delete one"jackknifemethod producesinconsistentresults.[30]An alternative—the "delete k" method—wherek{\displaystyle k}grows with the sample size has been shown to be asymptotically consistent.[31]This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent,[32]but converges very slowly (orderofn−14{\displaystyle n^{-{\frac {1}{4}}}}).[33]Other methods have been proposed but their behavior may differ between large and small samples.[34] Theefficiencyof the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of sizeN=2n+1{\displaystyle N=2n+1}from thenormal distribution, the efficiency for large N is 2πN+2N{\displaystyle {\frac {2}{\pi }}{\frac {N+2}{N}}} The efficiency tends to2π{\displaystyle {\frac {2}{\pi }}}asN{\displaystyle N}tends to infinity. In other words, the relative variance of the median will beπ/2≈1.57{\displaystyle \pi /2\approx 1.57}, or 57% greater than the variance of the mean – the relativestandard errorof the median will be(π/2)12≈1.25{\displaystyle (\pi /2)^{\frac {1}{2}}\approx 1.25}, or 25% greater than thestandard error of the mean,σ/n{\displaystyle \sigma /{\sqrt {n}}}(see also section#Sampling distributionabove.).[35] For univariate distributions that aresymmetricabout one median, theHodges–Lehmann estimatoris arobustand highlyefficient estimatorof the population median.[36] If data is represented by astatistical modelspecifying a particular family ofprobability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution.Pareto interpolationis an application of this when the population is assumed to have aPareto distribution. 
Previously, this article discussed the univariate median, where the sample or population is one-dimensional. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.[36][37][38][39] The marginal median is defined for vectors with respect to a fixed set of coordinates: it is the vector whose components are univariate medians. The marginal median is easy to compute, and its properties were studied by Puri and Sen.[36][40] The geometric median of a discrete set of sample points x_1, …, x_N in a Euclidean space is the[a] point minimizing the sum of distances to the sample points: {\displaystyle {\hat {\mu }}={\underset {\mu \in \mathbb {R} ^{m}}{\operatorname {arg\,min} }}\sum _{n=1}^{N}\left\|\mu -x_{n}\right\|_{2}} In contrast to the marginal median, the geometric median is equivariant with respect to Euclidean similarity transformations such as translations and rotations. If the marginal medians for all coordinate systems coincide, then their common location may be termed the "median in all directions".[42] This concept is relevant to voting theory on account of the median voter theorem. When it exists, the median in all directions coincides with the geometric median (at least for discrete distributions). The conditional median occurs in the setting where we seek to estimate a random variable X from a random variable Y, which is a noisy version of X. The conditional median in this setting is given by {\displaystyle m(X|Y=y)=F_{X|Y=y}^{-1}\left({\frac {1}{2}}\right)} where {\displaystyle t\mapsto F_{X|Y=y}^{-1}(t)} is the inverse of the conditional cdf (i.e., the conditional quantile function) of {\displaystyle x\mapsto F_{X|Y}(x|y)}.
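A standard way to compute the geometric median is Weiszfeld's iteratively re-weighted averaging. The sketch below is a minimal version (our own illustration; it omits the safeguard needed when an iterate lands exactly on a data point):

```python
import math

def geometric_median(points, iters=200):
    """Weiszfeld's algorithm: repeatedly replace the estimate with the
    inverse-distance-weighted average of the sample points."""
    # Start from the centroid.
    x = [sum(c) / len(points) for c in zip(*points)]
    for _ in range(iters):
        weights = [1 / math.dist(x, p) for p in points]
        total = sum(weights)
        x = [sum(w * p[i] for w, p in zip(weights, points)) / total
             for i in range(len(x))]
    return x

pts = [(0, 0), (4, 0), (2, 3)]
m = geometric_median(pts)
print(m)  # minimizes the summed Euclidean distance to the three points
```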
For example, a popular model is Y = X + Z, where Z is standard normal and independent of X. The conditional median is the optimal Bayesian L1 estimator: {\displaystyle m(X|Y=y)=\arg \min _{f}\operatorname {E} \left[|X-f(Y)|\right]} It is known that for the model Y = X + Z, where Z is standard normal and independent of X, the estimator is linear if and only if X is Gaussian.[43] When dealing with a discrete variable, it is sometimes useful to regard the observed values as midpoints of underlying continuous intervals. An example of this is a Likert scale, on which opinions or preferences are expressed on a scale with a set number of possible responses. If the scale consists of the positive integers, an observation of 3 might be regarded as representing the interval from 2.50 to 3.50. It is possible to estimate the median of the underlying variable. If, say, 22% of the observations are of value 2 or below and 55.0% are of 3 or below (so 33% have the value 3), then the median m is 3, since the median is the smallest value of x for which F(x) is greater than a half. But the interpolated median is somewhere between 2.50 and 3.50. First we add half of the interval width w to the median to get the upper bound of the median interval. Then we subtract that proportion of the interval width which equals the proportion of the 33% which lies above the 50% mark. In other words, we split up the interval width pro rata to the numbers of observations. In this case, the 33% is split into 28% below the median and 5% above it, so we subtract 5/33 of the interval width from the upper bound of 3.50 to give an interpolated median of 3.35.
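The worked example above translates directly into code (a check of the 3.35 figure, using the percentages given in the text):

```python
# 22% of observations are <= 2 and 55% are <= 3, so 33% sit in category 3.
F_below, F_at_or_below, width, m = 0.22, 0.55, 1.0, 3
f_m = F_at_or_below - F_below              # proportion in the median category
upper = m + width / 2                      # upper bound of the median interval
above_share = (F_at_or_below - 0.5) / f_m  # fraction of the category above 50%
m_int = upper - width * above_share
print(round(m_int, 2))  # 3.35
```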
More formally, if the values f(x) are known, the interpolated median can be calculated from {\displaystyle m_{\text{int}}=m+w\left[{\frac {1}{2}}-{\frac {F(m)-{\frac {1}{2}}}{f(m)}}\right].} Alternatively, if in an observed sample there are k scores above the median category, j scores in it and i scores below it, then the interpolated median is given by {\displaystyle m_{\text{int}}=m+{\frac {w}{2}}\left[{\frac {k-i}{j}}\right].} For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, it is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median.[44] The Hodges–Lehmann estimator has been generalized to multivariate distributions.[45] The Theil–Sen estimator is a method for robust linear regression based on finding medians of slopes.[46] The median filter is an important tool of image processing that can effectively remove salt-and-pepper noise from grayscale images. In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters, in which the criterion of minimising the distance between cluster means that is used in k-means clustering is replaced by minimising the distance between cluster medians. This is a method of robust regression.
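A minimal Theil–Sen sketch follows (our own illustration; taking the intercept as the median of the residual offsets yi − β̂xi is one common convention):

```python
from statistics import median
from itertools import combinations

def theil_sen(xs, ys):
    """Slope = median of all pairwise slopes; intercept = median of y - slope*x."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)
              if xs[j] != xs[i]]
    b = median(slopes)
    a = median(y - b * x for x, y in zip(xs, ys))
    return a, b

xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 5, 7, 100, 11]  # y = 2x + 1 with one gross outlier
print(theil_sen(xs, ys))    # slope and intercept stay close to 2 and 1
```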
The idea dates back to Wald in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent variable x: a left half with values less than the median and a right half with values greater than the median.[47] He suggested taking the means of the dependent (y) and independent (x) variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set. Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples.[48] Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means.[49] Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.[50] Any mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function, as observed by Gauss. A median-unbiased estimator minimizes the risk with respect to the absolute-deviation loss function, as observed by Laplace. Other loss functions are used in statistical theory, particularly in robust statistics. The theory of median-unbiased estimators was revived by George W. Brown in 1947:[51] An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation.
Further properties of median-unbiased estimators have been reported.[52][53][54][55] There are methods of constructing median-unbiased estimators that are optimal (in a sense analogous to the minimum-variance property for mean-unbiased estimators). Such constructions exist for probability distributions having monotone likelihood functions.[56][57] One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: it holds for a smaller class of probability distributions than does the Rao–Blackwell procedure but for a larger class of loss functions.[58] Scientific researchers in the ancient Near East appear not to have used summary statistics at all, instead choosing values that offered maximal consistency with a broader theory that integrated a wide variety of phenomena.[59] Within the Mediterranean (and, later, European) scholarly community, statistics like the mean are fundamentally a medieval and early modern development. (The history of the median outside Europe and its predecessors remains relatively unstudied.) The idea of the median appeared in the 6th century in the Talmud, in order to fairly analyze divergent appraisals.[60][61] However, the concept did not spread to the broader scientific community. Instead, the closest ancestor of the modern median is the mid-range, invented by Al-Biruni.[62]: 31[63] Transmission of his work to later scholars is unclear. He applied his technique to assaying currency metals, but, after he published his work, most assayers still adopted the most unfavorable value from their results, lest they appear to cheat.[62]: 35–8[64] However, increased navigation at sea during the Age of Discovery meant that ships' navigators increasingly had to attempt to determine latitude in unfavorable weather against hostile shores, leading to renewed interest in summary statistics.
Whether rediscovered or independently invented, the mid-range is recommended to nautical navigators in Harriot's "Instructions for Raleigh's Voyage to Guiana, 1595".[62]: 45–8 The idea of the median may have first appeared in Edward Wright's 1599 book Certaine Errors in Navigation, in a section about compass navigation.[65] Wright was reluctant to discard measured values, and may have felt that the median, incorporating a greater proportion of the dataset than the mid-range, was more likely to be correct. However, Wright did not give examples of his technique's use, making it hard to verify that he described the modern notion of the median.[59][63][b] The median (in the context of probability) certainly appeared in the correspondence of Christiaan Huygens, but as an example of a statistic that was inappropriate for actuarial practice.[59] The earliest recommendation of the median dates to 1757, when Roger Joseph Boscovich developed a regression method based on the L1 norm and therefore implicitly on the median.[59][66] In 1774, Laplace made this desire explicit: he suggested the median be used as the standard estimator of the value of a posterior PDF. The specific criterion was to minimize the expected magnitude of the error, |α − α*|, where α* is the estimate and α is the true value.
To this end, Laplace determined the distributions of both the sample mean and the sample median in the early 1800s.[28][67] However, a decade later, Gauss and Legendre developed the least squares method, which minimizes (α − α*)² to obtain the mean; the strong justification of this estimator by reference to maximum likelihood estimation based on a normal distribution means it has mostly replaced Laplace's original suggestion.[68] Antoine Augustin Cournot in 1843 was the first[69] to use the term median (valeur médiane) for the value that divides a probability distribution into two equal halves. Gustav Theodor Fechner used the median (Centralwerth) in sociological and psychological phenomena;[70] it had earlier been used only in astronomy and related fields. Fechner popularized the median in the formal analysis of data, although it had been used previously by Laplace,[70] and the median appeared in a textbook by F. Y. Edgeworth.[71] Francis Galton used the term median in 1881,[72][73] having earlier used the terms middle-most value in 1869 and the medium in 1880.[74][75] This article incorporates material from Median of a distribution on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Median#Probability_distributions
In probability theory, a nonlinear expectation is a nonlinear generalization of the expectation. Nonlinear expectations are useful in utility theory, as they more closely match human behavior than traditional expectations.[1] The common use of nonlinear expectations is in assessing risks under uncertainty. Generally, nonlinear expectations are categorized into sub-linear and super-linear expectations, depending on the additive properties of the given sets. Much of the study of nonlinear expectation is attributed to the work of mathematicians within the past two decades. A functional {\displaystyle \mathbb {E} :{\mathcal {H}}\to \mathbb {R} } (where H is a vector lattice on a given set Ω) is a nonlinear expectation if it satisfies:[2][3][4] The combination of the given set, the linear space of functions on that set, and the nonlinear expectation is called a nonlinear expectation space. Often other properties are also desirable, for instance convexity, subadditivity, positive homogeneity, and translation by constants.[2] For a nonlinear expectation to be further classified as a sublinear expectation, the following two conditions must also be met: For a nonlinear expectation to instead be classified as a superlinear expectation, the subadditivity condition above is replaced by the condition:[5]
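One standard way to build a sublinear expectation (an illustrative construction of ours, not notation from the article) is to take the supremum of ordinary expectations over a set of probability measures. The sketch below uses two measures on a three-point Ω and checks constant preservation, subadditivity, and positive homogeneity on concrete random variables:

```python
# Two candidate probability vectors on Omega = {0, 1, 2}.
P = [(0.2, 0.3, 0.5), (0.5, 0.3, 0.2)]

def E(X):
    """Sublinear expectation: sup over the linear expectations E_p[X]."""
    return max(sum(p_i * x_i for p_i, x_i in zip(p, X)) for p in P)

X, Y = (1.0, 4.0, 9.0), (3.0, 0.0, -1.0)
# Constants are preserved (both measures give the same value).
assert abs(E((7.0, 7.0, 7.0)) - 7.0) < 1e-9
# Subadditivity: E[X + Y] <= E[X] + E[Y].
assert E(tuple(x + y for x, y in zip(X, Y))) <= E(X) + E(Y) + 1e-9
# Positive homogeneity: E[2X] = 2 E[X].
assert abs(E(tuple(2 * x for x in X)) - 2 * E(X)) < 1e-9
```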
https://en.wikipedia.org/wiki/Nonlinear_expectation
In statistics, a population is a set of similar items or events which is of interest for some question or experiment.[1][2] A statistical population can be a group of existing objects (e.g. the set of all stars within the Milky Way galaxy) or a hypothetical and potentially infinite group of objects conceived as a generalization from experience (e.g. the set of all possible hands in a game of poker).[3] A population with finitely many values N in the support[4] of the population distribution is a finite population with population size N. A population with infinitely many values in the support is called an infinite population. A common aim of statistical analysis is to produce information about some chosen population.[5] In statistical inference, a subset of the population (a statistical sample) is chosen to represent the population in a statistical analysis.[6] Moreover, the statistical sample must be unbiased and accurately model the population. The ratio of the size of this statistical sample to the size of the population is called a sampling fraction. It is then possible to estimate the population parameters using the appropriate sample statistics.[7] For finite populations, sampling typically removes each sampled value from the population, because samples are drawn without replacement. This violates the usual independent and identically distributed assumption, so sampling from finite populations requires "finite population corrections" (which can be derived from the hypergeometric distribution). As a rough rule of thumb,[8] if the sampling fraction is below 10% of the population size, then finite population corrections can approximately be neglected.
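The rule of thumb can be made concrete with the usual finite population correction factor for the standard error of a sample mean under sampling without replacement (the formula is standard; the example numbers are ours):

```python
import math

def fpc(N, n):
    """Finite population correction factor sqrt((N - n) / (N - 1)),
    which multiplies the standard error of the sample mean."""
    return math.sqrt((N - n) / (N - 1))

# 5% sampling fraction: the correction is close to 1 and often neglected.
print(fpc(1000, 50))   # ~0.975
# 50% sampling fraction: the correction shrinks the standard error noticeably.
print(fpc(1000, 500))  # ~0.707
```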
The population mean, or population expected value, is a measure of the central tendency either of a probability distribution or of a random variable characterized by that distribution.[9] In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving {\displaystyle \mu =\sum x\cdot p(x)}.[10][11] An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions. For a finite population, the population mean of a property is equal to the arithmetic mean of the given property over every member of the population. For example, the population mean height is equal to the sum of the heights of every individual divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples. The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.[12]
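The discrete formula μ = Σ x·p(x) in a minimal sketch, using a fair die as the example distribution (exact rational arithmetic avoids floating-point noise):

```python
from fractions import Fraction

# Fair six-sided die: each face has probability 1/6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# Population mean: sum of each value times its probability.
mu = sum(x * p for x, p in pmf.items())
print(mu)  # 7/2, i.e. 3.5
```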
https://en.wikipedia.org/wiki/Population_mean
In statistics, simple linear regression (SLR) is a linear regression model with a single explanatory variable.[1][2][3][4][5] That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor. It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. In this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass ({\displaystyle {\bar {x}},{\bar {y}}}) of the data points. Consider the model function {\displaystyle y=\alpha +\beta x,} which describes a line with slope β and y-intercept α. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors. Suppose we observe n data pairs and call them {(xi, yi), i = 1, ..., n}. We can describe the underlying relationship between yi and xi involving this error term εi by {\displaystyle y_{i}=\alpha +\beta x_{i}+\varepsilon _{i}.} This relationship between the true (but unobserved) underlying parameters α and β and the data points is called a linear regression model. The goal is to find estimated values {\displaystyle {\widehat {\alpha }}} and {\displaystyle {\widehat {\beta }}} for the parameters α and β which would provide the "best" fit in some sense for the data points.
As mentioned in the introduction, in this article the "best" fit will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals (see also Errors and residuals) ε̂i (differences between actual and predicted values of the dependent variable y), each of which is given by, for any candidate parameter values α and β, ε̂i = yi − α − βxi. In other words, α̂ and β̂ solve the following minimization problem: (α̂, β̂) = argmin(α, β) Q(α, β), where the objective function Q is: Q(α, β) = ∑i=1..n ε̂i² = ∑i=1..n (yi − α − βxi)². By expanding to get a quadratic expression in α and β, we can derive minimizing values of the function arguments, denoted α̂ and β̂:[6] {\displaystyle {\begin{aligned}{\widehat {\alpha }}&={\bar {y}}-({\widehat {\beta }}\,{\bar {x}}),\\[5pt]{\widehat {\beta }}&={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}={\frac {\sum _{i=1}^{n}\Delta x_{i}\Delta y_{i}}{\sum _{i=1}^{n}\Delta x_{i}^{2}}}\end{aligned}}} Here we have introduced x̄ and ȳ as the averages of the xi and yi, respectively, and Δxi = xi − x̄ and Δyi = yi − ȳ as the deviations from those averages. The above equations are efficient to use if the means of the x and y variables (x̄ and ȳ) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the α̂ and β̂ equations. These expanded equations may be derived from the more general polynomial regression equations[7][8] by defining the regression polynomial to be of order 1, as follows.
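The centered-sum formulas for α̂ and β̂ can be applied directly. A minimal Python sketch with made-up data points chosen to lie exactly on y = 1 + 2x, so the fit recovers the true parameters:

```python
# Minimal OLS fit for y = alpha + beta*x using the centered-sum formulas:
# beta_hat = sum(dx*dy)/sum(dx^2), alpha_hat = ybar - beta_hat*xbar.
def fit_slr(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    beta = sxy / sxx
    alpha = ybar - beta * xbar
    return alpha, beta

# Illustrative points lying exactly on y = 1 + 2x recover alpha = 1, beta = 2.
alpha, beta = fit_slr([0, 1, 2, 3], [1, 3, 5, 7])
print(alpha, beta)  # 1.0 2.0
```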
{\displaystyle {\begin{bmatrix}n&\sum _{i=1}^{n}x_{i}\\\sum _{i=1}^{n}x_{i}&\sum _{i=1}^{n}x_{i}^{2}\end{bmatrix}}{\begin{bmatrix}{\widehat {\alpha }}\\{\widehat {\beta }}\end{bmatrix}}={\begin{bmatrix}\sum _{i=1}^{n}y_{i}\\\sum _{i=1}^{n}y_{i}x_{i}\end{bmatrix}}} The above system of linear equations may be solved directly, or stand-alone equations for α̂ and β̂ may be derived by expanding the matrix equations above. The resultant equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof.[9][7] {\displaystyle {\begin{aligned}&\qquad {\widehat {\alpha }}={\frac {\sum _{i=1}^{n}y_{i}\sum _{i=1}^{n}x_{i}^{2}-\sum _{i=1}^{n}x_{i}\sum _{i=1}^{n}x_{i}y_{i}}{n\sum _{i=1}^{n}x_{i}^{2}-(\sum _{i=1}^{n}x_{i})^{2}}}\\\\&\qquad {\widehat {\beta }}={\frac {n\sum _{i=1}^{n}x_{i}y_{i}-\sum _{i=1}^{n}x_{i}\sum _{i=1}^{n}y_{i}}{n\sum _{i=1}^{n}x_{i}^{2}-(\sum _{i=1}^{n}x_{i})^{2}}}\\&\qquad \end{aligned}}} The solution can be reformulated using elements of the covariance matrix: {\displaystyle {\widehat {\beta }}={\frac {s_{x,y}}{s_{x}^{2}}}=r_{xy}{\frac {s_{y}}{s_{x}}}} where sx,y is the sample covariance of x and y, sx and sy are the sample standard deviations of x and y, and rxy is the sample correlation coefficient between x and y. Substituting the above expressions for α̂ and β̂ into the original solution yields (ŷ − ȳ)/sy = rxy (x − x̄)/sx. This shows that rxy is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since −1 ≤ rxy ≤ 1, we get that if x is some measurement and y is a follow-up measurement from the same item, then we expect that y (on average) will be closer to the mean measurement than it was to the original value of x. This phenomenon is known as regression toward the mean.
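The expanded (raw-sum) equations and the covariance form β̂ = sx,y/sx² are algebraically equivalent, which can be spot-checked numerically. The data below are illustrative, not from the article:

```python
# Cross-check: the expanded closed-form estimates (raw sums) agree with
# the covariance form beta_hat = s_xy / s_x^2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
den = n * sxx - sx ** 2
alpha = (sy * sxx - sx * sxy) / den   # expanded formula for the intercept
beta = (n * sxy - sx * sy) / den      # expanded formula for the slope
xbar, ybar = sx / n, sy / n
cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n
var = sum((x - xbar) ** 2 for x in xs) / n
assert abs(beta - cov / var) < 1e-12           # same slope both ways
assert abs(alpha - (ybar - beta * xbar)) < 1e-12
```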
Generalizing the x̄ notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example: {\displaystyle {\overline {xy}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}y_{i}.} This notation gives us a concise formula for rxy: {\displaystyle r_{xy}={\frac {{\overline {xy}}-{\bar {x}}{\bar {y}}}{\sqrt {\left({\overline {x^{2}}}-{\bar {x}}^{2}\right)\left({\overline {y^{2}}}-{\bar {y}}^{2}\right)}}}.} The coefficient of determination ("R squared") is equal to rxy² when the model is linear with a single independent variable. See sample correlation coefficient for additional details. By multiplying all members of the summation in the numerator by (xi − x̄)/(xi − x̄) = 1 (thereby not changing it), we obtain {\displaystyle {\widehat {\beta }}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}{\frac {(y_{i}-{\bar {y}})}{(x_{i}-{\bar {x}})}}}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}.} We can see that the slope (tangent of angle) of the regression line is the weighted average of (yi − ȳ)/(xi − x̄), the slope (tangent of angle) of the line that connects the i-th point to the average of all points, weighted by (xi − x̄)², because the further away the point is, the more "important" it is, since small errors in its position will affect the slope connecting it to the center point less. Given β̂ = tan(θ) = dy/dx → dy = dx × β̂, with θ the angle the line makes with the positive x axis, we have y_intersection = ȳ − dx × β̂ = ȳ − dy. In the above formulation, notice that each xi is a constant ("known upfront") value, while the yi are random variables that depend on the linear function of xi and the random term εi. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.
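The weighted-average reading of the slope can be verified numerically: weighting each point's slope toward the center of mass by (xi − x̄)² reproduces the OLS slope. The data below are illustrative:

```python
# The OLS slope as a weighted average of per-point slopes
# (y_i - ybar)/(x_i - xbar), with weights (x_i - xbar)^2.
xs = [0.0, 1.0, 2.0, 4.0]
ys = [1.0, 2.5, 2.0, 5.0]
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
w = [(x - xbar) ** 2 for x in xs]                       # weights
slopes = [(y - ybar) / (x - xbar) for x, y in zip(xs, ys)]  # per-point slopes
beta_weighted = sum(wi * si for wi, si in zip(w, slopes)) / sum(w)
beta_direct = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(w)
assert abs(beta_weighted - beta_direct) < 1e-12
```

(The weighting identity requires xi ≠ x̄ for every point so that the per-point slopes are defined; the illustrative data satisfy that.)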
In this framing, when xi is not actually a random variable, what type of parameter does the empirical correlation rxy estimate? The issue is that for each value i we'll have E(xi) = xi and Var(xi) = 0. A possible interpretation of rxy is to imagine that xi defines a random variable drawn from the empirical distribution of the x values in our sample. For example, if x had 10 values from the natural numbers [1, 2, 3, ..., 10], then we can imagine x to be a discrete uniform distribution. Under this interpretation all xi have the same expectation and some positive variance. With this interpretation we can think of rxy as the estimator of the Pearson's correlation between the random variable y and the random variable x (as we just defined it). Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere. The estimators α̂ and β̂ are unbiased. To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals εi as random variables drawn independently from some distribution with mean zero. In other words, for each value of x, the corresponding value of y is generated as a mean response α + βx plus an additional random variable ε called the error term, equal to zero on average.
Under such an interpretation, the least-squares estimators α̂ and β̂ will themselves be random variables whose means will equal the "true values" α and β. This is the definition of an unbiased estimator. Since the data in this context is defined to be (x, y) pairs for every observation, the mean response at a given value of x, say xd, is an estimate of the mean of the y values in the population at the x value of xd, that is Ê(y ∣ xd) ≡ ŷd. The variance of the mean response is given by:[11] Var(α̂ + β̂xd) = Var(α̂) + Var(β̂)·xd² + 2xd·Cov(α̂, β̂). This expression can be simplified to Var(α̂ + β̂xd) = σ²(1/m + (xd − x̄)²/∑(xi − x̄)²), where m is the number of data points. To demonstrate this simplification, one can make use of the identity ∑(xi − x̄)² = ∑xi² − (1/m)(∑xi)². The predicted response distribution is the predicted distribution of the residuals at the given point xd. So the variance is given by Var(yd − [α̂ + β̂xd]) = Var(yd) + Var(α̂ + β̂xd) − 2·Cov(yd, [α̂ + β̂xd]) = Var(yd) + Var(α̂ + β̂xd). The second line follows from the fact that Cov(yd, [α̂ + β̂xd]) is zero because the new prediction point is independent of the data used to fit the model. Additionally, the term Var(α̂ + β̂xd) was calculated earlier for the mean response. Since Var(yd) = σ² (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by Var(yd − [α̂ + β̂xd]) = σ²(1 + 1/m + (xd − x̄)²/∑(xi − x̄)²). The formulas given in the previous section allow one to calculate the point estimates of α and β, that is, the coefficients of the regression line for the given set of data.
However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators α̂ and β̂ vary from sample to sample for the specified sample size. Confidence intervals were devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times. The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either the errors in the regression are normally distributed, or the number of observations n is sufficiently large, in which case the estimator is approximately normally distributed. The latter case is justified by the central limit theorem. Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean β and variance σ²/∑(xi − x̄)², where σ² is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals Q is distributed proportionally to χ² with n − 2 degrees of freedom, and independently from β̂. This allows us to construct a t-value t = (β̂ − β)/s_β̂, where s_β̂ = √( (1/(n−2)) ∑ ε̂i² / ∑ (xi − x̄)² ) is the unbiased standard error estimator of the estimator β̂. This t-value has a Student's t-distribution with n − 2 degrees of freedom. Using it we can construct a confidence interval for β: β ∈ [β̂ − s_β̂·t*n−2, β̂ + s_β̂·t*n−2] at confidence level (1 − γ), where t*n−2 is the (1 − γ/2)-th quantile of the tn−2 distribution. For example, if γ = 0.05 then the confidence level is 95%. Similarly, the confidence interval for the intercept coefficient α is given by α ∈ [α̂ − s_α̂·t*n−2, α̂ + s_α̂·t*n−2] at confidence level (1 − γ), where s_α̂ = s_β̂·√( (1/n) ∑ xi² ). The confidence intervals for α and β give us the general idea where these regression coefficients are most likely to be.
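A sketch of the slope's confidence interval under the normality assumption. The data are made up, and the 0.975 quantile of t with 6 degrees of freedom (2.447) is taken from standard tables rather than computed:

```python
# 95% confidence interval for the slope: beta_hat +/- t* * s_beta,
# with s_beta^2 = (Q/(n-2)) / sum((x - xbar)^2). Illustrative data.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.3, 2.9, 4.1, 4.8, 6.2, 6.8, 8.1, 8.9]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
alpha = ybar - beta * xbar
Q = sum((y - alpha - beta * x) ** 2 for x, y in zip(xs, ys))  # residual sum of squares
s_beta = (Q / (n - 2) / sxx) ** 0.5   # unbiased standard error of the slope
tstar = 2.447                          # 0.975 quantile of t with n-2 = 6 d.o.f. (tabulated)
ci = (beta - tstar * s_beta, beta + tstar * s_beta)
print(ci)
```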
For example, in the Okun's law regression shown here, the point estimates and their 95% confidence intervals can be computed from the formulas above. In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown[12] that at confidence level (1 − γ) the confidence band has hyperbolic form given by the equation ŷ|x=ξ ∈ [ α̂ + β̂ξ ± t*n−2 √( ((1/(n−2)) ∑ ε̂i²) · (1/n + (ξ − x̄)²/∑(xi − x̄)²) ) ]. When the model assumes the intercept is fixed and equal to 0 (α = 0), the standard error of the slope turns into s_β̂ = √( (1/(n−1)) ∑ ε̂i² / ∑ xi² ), with ε̂i = yi − ŷi. The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile t*n−2 of Student's t distribution is replaced with the quantile q* of the standard normal distribution. Occasionally the fraction 1/(n−2) is replaced with 1/n. When n is large such a change does not alter the results appreciably. This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead. There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums: Sx = ∑xi, Sy = ∑yi, Sxx = ∑xi², Sxy = ∑xiyi, and Syy = ∑yi². These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.
The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is t*13 = 2.1604, and this can be used to obtain the 95% confidence intervals for α and β. The product-moment correlation coefficient might also be calculated. In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship because it will be biased due to regression dilution. Other estimation methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points). Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. Because squaring gives large residuals disproportionate weight, least squares can lead to a model that attempts to fit the outliers more than the data. Line fitting is the process of constructing a straight line that has the best fit to a series of data points. Several methods exist, differing in the error criterion used and in which variables are treated as subject to error. Sometimes it is appropriate to force the regression line to pass through the origin, because x and y are assumed to be proportional. For the model without the intercept term, y = βx, the OLS estimator for β simplifies to β̂ = ∑ xiyi / ∑ xi². Substituting (x − h, y − k) in place of (x, y) gives the regression through (h, k): β̂ = ∑ (xi − h)(yi − k) / ∑ (xi − h)² = [Cov(x, y) + (x̄ − h)(ȳ − k)] / [Var(x) + (x̄ − h)²], where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
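A short sketch of regression through the origin, β̂ = ∑xy/∑x², plus a hypothetical helper (not from the article) for the line through an arbitrary point (h, k):

```python
# Regression forced through the origin (y = beta*x):
# OLS gives beta_hat = sum(x*y) / sum(x^2).
def slope_through_origin(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical helper: line through (h, k), obtained by shifting the data.
def slope_through_point(xs, ys, h, k):
    return slope_through_origin([x - h for x in xs], [y - k for y in ys])

# Illustrative points on y = 2x exactly:
b = slope_through_origin([1, 2, 3], [2, 4, 6])
print(b)  # 2.0
```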
https://en.wikipedia.org/wiki/Predicted_value
In probability theory, Wald's equation, Wald's identity[1] or Wald's lemma[2] is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands. The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation.[3] Let (Xn)n∈ℕ be a sequence of real-valued, independent and identically distributed random variables and let N ≥ 0 be an integer-valued random variable that is independent of the sequence (Xn)n∈ℕ. Suppose that N and the Xn have finite expectations. Then E[X1 + ⋯ + XN] = E[N]·E[X1]. Roll a six-sided die. Take the number on the die (call it N) and roll that number of six-sided dice to get the numbers X1, . . . , XN, and add up their values. By Wald's equation, the resulting value on average is E[N]·E[X1] = 3.5 × 3.5 = 12.25. Let (Xn)n∈ℕ be an infinite sequence of real-valued random variables and let N be a nonnegative integer-valued random variable. Assume that: (1) the Xn are all integrable (finite-mean) random variables, (2) E[Xn 1{N ≥ n}] = E[Xn]·P(N ≥ n) for every natural number n, and (3) the infinite series ∑n=1..∞ E[|Xn| 1{N ≥ n}] converges. Then the random sums SN := ∑n=1..N Xn and TN := ∑n=1..N E[Xn] are integrable and E[SN] = E[TN]. If, in addition, (4) the Xn all have the same expectation, and (5) N has finite expectation, then E[SN] = E[N]·E[X1]. Remark: Usually, the name Wald's equation refers to this last equality. Clearly, assumption (1) is needed to formulate assumption (2) and Wald's equation. Assumption (2) controls the amount of dependence allowed between the sequence (Xn)n∈ℕ and the number N of terms; see the counterexample below for the necessity.
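The dice example above can be checked exactly by conditioning on N, using rational arithmetic to avoid rounding:

```python
# Wald's equation for the dice example: roll one die to get N, then roll
# N dice and sum. E[S_N] = E[N] * E[X_1] = 3.5 * 3.5 = 12.25.
from fractions import Fraction

EX = Fraction(21, 6)   # mean of one die roll
EN = Fraction(21, 6)   # N is itself one die roll
# Conditioning on N = n: E[S_N | N = n] = n * E[X]; average over n = 1..6.
ESN = sum(Fraction(1, 6) * n * EX for n in range(1, 7))
assert ESN == EN * EX == Fraction(49, 4)
print(float(ESN))  # 12.25
```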
Note that assumption (2) is satisfied when N is a stopping time for a sequence of independent random variables (Xn)n∈ℕ.[citation needed] Assumption (3) is of a more technical nature, implying absolute convergence and therefore allowing arbitrary rearrangement of an infinite series in the proof. If assumption (5) is satisfied, then assumption (3) can be strengthened to the simpler condition (6): there exists a real constant C such that E[|Xn| 1{N ≥ n}] ≤ C·P(N ≥ n) for all natural numbers n. Indeed, using assumption (6), ∑n=1..∞ E[|Xn| 1{N ≥ n}] ≤ C ∑n=1..∞ P(N ≥ n), and the last series equals the expectation of N [Proof], which is finite by assumption (5). Therefore, (5) and (6) imply assumption (3). Assume in addition to (1) and (5) that (7) N is independent of the sequence (Xn)n∈ℕ, and (8) there exists a constant C such that E[|Xn|] ≤ C for all natural numbers n. Then all the assumptions (1), (2), (5) and (6), hence also (3), are satisfied. In particular, the conditions (4) and (8) are satisfied if (9) the random variables (Xn)n∈ℕ all have the same distribution. Note that the random variables of the sequence (Xn)n∈ℕ don't need to be independent. The interesting point is to admit some dependence between the random number N of terms and the sequence (Xn)n∈ℕ. A standard version is to assume (1), (5), (8) and the existence of a filtration (Fn)n∈ℕ0 such that (10) N is a stopping time with respect to the filtration, and (11) Xn and Fn−1 are independent for every n ∈ ℕ. Then (10) implies that the event {N ≥ n} = {N ≤ n − 1}ᶜ is in Fn−1, hence by (11) independent of Xn. This implies (2), and together with (8) it implies (6). For convenience (see the proof below using the optional stopping theorem) and to specify the relation of the sequence (Xn)n∈ℕ and the filtration (Fn)n∈ℕ0, the following additional assumption is often imposed: (12) Xn is Fn-measurable for every n ∈ ℕ. Note that (11) and (12) together imply that the random variables (Xn)n∈ℕ are independent. An application is in actuarial science, when the total claim amount within a certain time period, say one year, follows a compound Poisson process, arising from a random number N of individual insurance claims, whose sizes are described by the random variables (Xn)n∈ℕ.
Under the above assumptions, Wald's equation can be used to calculate the expected total claim amount when information about the average claim number per year and the average claim size is available. Under stronger assumptions and with more information about the underlying distributions, Panjer's recursion can be used to calculate the distribution of SN. Let N be an integrable, ℕ0-valued random variable, which is independent of the integrable, real-valued random variable Z with E[Z] = 0. Define Xn = (−1)^n Z for all n ∈ ℕ. Then assumptions (1), (5), (7), and (8) with C := E[|Z|] are satisfied, hence also (2) and (6), and Wald's equation applies. If the distribution of Z is not symmetric, then (9) does not hold. Note that, when Z is not almost surely equal to the zero random variable, then (11) and (12) cannot hold simultaneously for any filtration (Fn)n∈ℕ, because Z cannot be independent of itself as E[Z²] = (E[Z])² = 0 is impossible. Let (Xn)n∈ℕ be a sequence of independent, symmetric, and {−1, +1}-valued random variables. For every n ∈ ℕ let Fn be the σ-algebra generated by X1, . . . , Xn and define N = n when Xn is the first random variable taking the value +1. Note that P(N = n) = 1/2^n, hence E[N] < ∞ by the ratio test. The assumptions (1), (5) and (9), hence (4) and (8) with C = 1, (10), (11), and (12) hold, hence also (2) and (6), and Wald's equation applies. However, (7) does not hold, because N is defined in terms of the sequence (Xn)n∈ℕ. Intuitively, one might expect to have E[SN] > 0 in this example, because the summation stops right after a one, thereby apparently creating a positive bias. However, Wald's equation shows that this intuition is misleading.
Consider a sequence (Xn)n∈ℕ of i.i.d. (independent and identically distributed) random variables, taking each of the two values 0 and 1 with probability 1/2 (actually, only X1 is needed in the following). Define N = 1 − X1. Then SN is identically equal to zero, hence E[SN] = 0, but E[X1] = 1/2 and E[N] = 1/2, and therefore Wald's equation does not hold. Indeed, the assumptions (1), (3), (4) and (5) are satisfied; however, the equation in assumption (2) holds for all n ∈ ℕ except for n = 1.[citation needed] Very similar to the second example above, let (Xn)n∈ℕ be a sequence of independent, symmetric random variables, where Xn takes each of the values 2^n and −2^n with probability 1/2. Let N be the first n ∈ ℕ such that Xn = 2^n. Then, as above, N has finite expectation, hence assumption (5) holds. Since E[Xn] = 0 for all n ∈ ℕ, assumptions (1) and (4) hold. However, since SN = 2 almost surely, Wald's equation cannot hold. Since N is a stopping time with respect to the filtration generated by (Xn)n∈ℕ, assumption (2) holds, see above. Therefore, only assumption (3) can fail, and indeed, since {N ≥ n} = {X1 = −2, X2 = −4, . . . , Xn−1 = −2^(n−1)} and therefore P(N ≥ n) = 1/2^(n−1) for every n ∈ ℕ, it follows that ∑n=1..∞ E[|Xn| 1{N ≥ n}] = ∑n=1..∞ 2^n P(N ≥ n) = ∑n=1..∞ 2 = ∞. Assume (1), (5), (8), (10), (11) and (12). Using assumption (1), define the sequence of random variables Mn := Sn − Tn for n ∈ ℕ0. Assumption (11) implies that the conditional expectation of Xn given Fn−1 equals E[Xn] almost surely for every n ∈ ℕ, hence (Mn)n∈ℕ0 is a martingale with respect to the filtration (Fn)n∈ℕ0 by assumption (12). Assumptions (5), (8) and (10) make sure that we can apply the optional stopping theorem, hence MN = SN − TN is integrable and (13) E[SN − TN] = E[M0] = 0. Due to assumption (8), |TN| = |∑n=1..N E[Xn]| ≤ ∑n=1..N E[|Xn|] ≤ CN, and due to assumption (5) this upper bound is integrable.
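The first counterexample (N = 1 − X1) is small enough to enumerate exhaustively, confirming E[SN] = 0 while E[N]·E[X1] = 1/4:

```python
# Counterexample from the text: X_i in {0, 1} with probability 1/2 each,
# N = 1 - X_1. Then S_N is identically 0, so E[S_N] = 0, while
# E[N] * E[X_1] = 1/4, and Wald's equation fails. Enumerate over X_1.
ESN = 0.0
for x1 in (0, 1):              # the two equally likely values of X_1
    N = 1 - x1                 # the sum is empty exactly when X_1 = 1
    SN = x1 if N == 1 else 0   # S_N = X_1 when N = 1, else the empty sum 0
    ESN += 0.5 * SN
EN = 0.5 * 1 + 0.5 * 0         # E[N] = 1/2
EX = 0.5                       # E[X_1] = 1/2
assert ESN == 0.0
assert EN * EX == 0.25
```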
Hence we can add the expectation of TN to both sides of Equation (13) and obtain by linearity E[SN] = E[TN]. Remark: Note that this proof does not cover the above example with dependent terms. This proof uses only Lebesgue's monotone and dominated convergence theorems. We prove the statement as given above in three steps. We first show that the random sum SN is integrable. Define the partial sums (14) Si := ∑n=1..i Xn for i ∈ ℕ0. Since N takes its values in ℕ0 and since S0 = 0, it follows that |SN| = ∑i=1..∞ |Si| 1{N = i}. The Lebesgue monotone convergence theorem implies that E[|SN|] = ∑i=1..∞ E[|Si| 1{N = i}]. By the triangle inequality, |Si| ≤ ∑n=1..i |Xn| for i ∈ ℕ. Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain (15) E[|SN|] ≤ ∑n=1..∞ E[|Xn| 1{N ≥ n}], where the second inequality follows using the monotone convergence theorem. By assumption (3), the infinite sequence on the right-hand side of (15) converges, hence SN is integrable. We now show that the random sum TN is integrable. Define the partial sums (16) Ti := ∑n=1..i E[Xn] for i ∈ ℕ0, of real numbers. Since N takes its values in ℕ0 and since T0 = 0, it follows that |TN| = ∑i=1..∞ |Ti| 1{N = i}. As in step 1, the Lebesgue monotone convergence theorem implies that E[|TN|] = ∑i=1..∞ |Ti| P(N = i). By the triangle inequality, |Ti| ≤ ∑n=1..i |E[Xn]| for i ∈ ℕ. Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain (17) E[|TN|] ≤ ∑n=1..∞ |E[Xn]| P(N ≥ n). By assumption (2), |E[Xn]| P(N ≥ n) = |E[Xn 1{N ≥ n}]| ≤ E[|Xn| 1{N ≥ n}] for n ∈ ℕ. Substituting this into (17) yields E[|TN|] ≤ ∑n=1..∞ E[|Xn| 1{N ≥ n}], which is finite by assumption (3), hence TN is integrable. To prove Wald's equation, we essentially go through the same steps again without the absolute value, making use of the integrability of the random sums SN and TN in order to show that they have the same expectation. Using the dominated convergence theorem with dominating random variable |SN| and the definition of the partial sum Si given in (14), it follows that E[SN] = ∑i=1..∞ E[Si 1{N = i}]. Due to the absolute convergence proved in (15) above using assumption (3), we may rearrange the summation and obtain that E[SN] = ∑n=1..∞ E[Xn 1{N ≥ n}], where we used assumption (1) and the dominated convergence theorem with dominating random variable |Xn| for the second equality.
Due to assumption (2) and the σ-additivity of the probability measure, E[Xn 1{N ≥ n}] = E[Xn] P(N ≥ n) = ∑i=n..∞ E[Xn] P(N = i). Substituting this result into the previous equation, rearranging the summation (which is permitted due to absolute convergence, see (15) above), using linearity of expectation and the definition of the partial sum Ti of expectations given in (16), we obtain E[SN] = ∑i=1..∞ Ti P(N = i). By using dominated convergence again with dominating random variable |TN|, E[SN] = E[TN]. If assumptions (4) and (5) are satisfied, then by linearity of expectation, E[TN] = E[∑n=1..N E[X1]] = E[N] E[X1]. This completes the proof.
https://en.wikipedia.org/wiki/Wald%27s_equation
Bayesian programming is a formalism and a methodology for specifying probabilistic models and solving problems when less than the necessary information is available. Edwin T. Jaynes proposed that probability could be considered as an alternative and an extension of logic for rational reasoning with incomplete and uncertain information. In his founding book Probability Theory: The Logic of Science[1] he developed this theory and proposed what he called "the robot," which was not a physical device, but an inference engine to automate probabilistic reasoning, a kind of Prolog for probability instead of logic. Bayesian programming[2] is a formal and concrete implementation of this "robot". Bayesian programming may also be seen as an algebraic formalism to specify graphical models such as, for instance, Bayesian networks, dynamic Bayesian networks, Kalman filters or hidden Markov models. Indeed, Bayesian programming is more general than Bayesian networks and has a power of expression equivalent to probabilistic factor graphs.[3] A Bayesian program is a means of specifying a family of probability distributions. The constituent elements of a Bayesian program are presented below:[4] The purpose of a description is to specify an effective method of computing a joint probability distribution on a set of variables {X1, X2, ⋯, XN} given a set of experimental data δ and some specification π. This joint distribution is denoted as P(X1 ∧ X2 ∧ ⋯ ∧ XN ∣ δ ∧ π).[5] To specify preliminary knowledge π, the programmer must undertake the following: Given a partition of {X1, X2, …, XN} containing K subsets, K variables are defined L1, ⋯, LK, each corresponding to one of these subsets.
Each variable Lk is obtained as the conjunction of the variables {Xk1, Xk2, ⋯} belonging to the k-th subset. Recursive application of Bayes' theorem leads to: P(X1 ∧ X2 ∧ ⋯ ∧ XN ∣ δ ∧ π) = P(L1 ∣ δ ∧ π) × P(L2 ∣ L1 ∧ δ ∧ π) × ⋯ × P(LK ∣ LK−1 ∧ ⋯ ∧ L1 ∧ δ ∧ π). Conditional independence hypotheses then allow further simplifications. A conditional independence hypothesis for variable Lk is defined by choosing some variables Xn among the variables appearing in the conjunction Lk−1 ∧ ⋯ ∧ L2 ∧ L1, labelling Rk as the conjunction of these chosen variables and setting: P(Lk ∣ Lk−1 ∧ ⋯ ∧ L1 ∧ δ ∧ π) = P(Lk ∣ Rk ∧ δ ∧ π). We then obtain: P(X1 ∧ X2 ∧ ⋯ ∧ XN ∣ δ ∧ π) = ∏k=1..K P(Lk ∣ Rk ∧ δ ∧ π). Such a simplification of the joint distribution as a product of simpler distributions is called a decomposition, derived using the chain rule. This ensures that each variable appears at most once on the left of a conditioning bar, which is the necessary and sufficient condition to write mathematically valid decompositions.[citation needed] Each distribution P(Lk ∣ Rk ∧ δ ∧ π) appearing in the product is then associated with either a parametric form (i.e., a function fμ(Lk)) or a question to another Bayesian program P(Lk ∣ Rk ∧ δ ∧ π) = P(L ∣ R ∧ δ̂ ∧ π̂). When it is a form fμ(Lk), in general, μ is a vector of parameters that may depend on Rk or δ or both. Learning takes place when some of these parameters are computed using the data set δ.
An important feature of Bayesian programming is this capacity to use questions to other Bayesian programs as components of the definition of a new Bayesian program. P(Lk ∣ Rk ∧ δ ∧ π) is obtained by some inferences done by another Bayesian program defined by the specifications π̂ and the data δ̂. This is similar to calling a subroutine in classical programming and provides an easy way to build hierarchical models. Given a description (i.e., P(X1 ∧ X2 ∧ ⋯ ∧ XN ∣ δ ∧ π)), a question is obtained by partitioning {X1, X2, ⋯, XN} into three sets: the searched variables, the known variables and the free variables. The three variables Searched, Known and Free are defined as the conjunction of the variables belonging to these sets. A question is defined as the set of distributions P(Searched ∣ Known ∧ δ ∧ π), made of as many "instantiated questions" as the cardinality of Known, each instantiated question being the distribution P(Searched ∣ Known = known ∧ δ ∧ π). Given the joint distribution P(X1 ∧ X2 ∧ ⋯ ∧ XN ∣ δ ∧ π), it is always possible to compute any possible question using the following general inference: P(Searched ∣ Known ∧ δ ∧ π) = ∑Free P(Searched ∧ Free ∣ Known ∧ δ ∧ π) = ∑Free P(Searched ∧ Free ∧ Known ∣ δ ∧ π) / P(Known ∣ δ ∧ π) = ∑Free P(Searched ∧ Free ∧ Known ∣ δ ∧ π) / ∑Searched, Free P(Searched ∧ Free ∧ Known ∣ δ ∧ π), where the first equality results from the marginalization rule, the second results from Bayes' theorem and the third corresponds to a second application of marginalization. The denominator is a normalization term and can be replaced by a constant Z. Theoretically, this allows one to solve any Bayesian inference problem.
In practice, however, the cost of computing exhaustively and exactly P(Searched ∣ Known ∧ δ ∧ π) is too great in almost all cases. Replacing the joint distribution by its decomposition, we get P(Searched ∣ Known ∧ δ ∧ π) = (1/Z) ∑Free ∏k=1..K P(Lk ∣ Rk ∧ δ ∧ π), which is usually a much simpler expression to compute, as the dimensionality of the problem is considerably reduced by the decomposition into a product of lower-dimension distributions. The purpose of Bayesian spam filtering is to eliminate junk e-mails. The problem is very easy to formulate. E-mails should be classified into one of two categories: non-spam or spam. The only available information to classify the e-mails is their content: a set of words. Using these words without taking the order into account is commonly called a bag of words model. The classifier should furthermore be able to adapt to its user and to learn from experience. Starting from an initial standard setting, the classifier should modify its internal parameters when the user disagrees with its own decision. It will hence adapt to the user's criteria to differentiate between non-spam and spam. It will improve its results as it encounters more and more classified e-mails. The variables necessary to write this program are a binary variable Spam and N binary variables Wn, each indicating whether the n-th word of the dictionary appears in the e-mail. These N + 1 binary variables sum up all the information about an e-mail. Starting from the joint distribution and applying Bayes' theorem recursively, we obtain: P(Spam ∧ W0 ∧ ⋯ ∧ WN−1) = P(Spam) × P(W0 ∣ Spam) × P(W1 ∣ Spam ∧ W0) × ⋯ × P(WN−1 ∣ Spam ∧ W0 ∧ ⋯ ∧ WN−2). This is an exact mathematical expression. It can be drastically simplified by assuming that the probability of appearance of a word knowing the nature of the text (spam or not) is independent of the appearance of the other words. For instance, the programmer can assume that P(Wn ∣ Spam ∧ Wn−1 ∧ ⋯ ∧ W0) = P(Wn ∣ Spam), to finally obtain: P(Spam ∧ W0 ∧ ⋯ ∧ WN−1) = P(Spam) ∏n=0..N−1 P(Wn ∣ Spam). This kind of assumption is known as the naive Bayes assumption, and it makes this spam filter a naive Bayes model.
It is "naive" in the sense that the independence between words is clearly not completely true. For instance, it completely neglects that the appearance of pairs of words may be more significant than isolated appearances. However, the programmer may assume this hypothesis and may develop the model and the associated inferences to test how reliable and efficient it is. To be able to compute the joint distribution, the programmer must now specify theN+1{\displaystyle N+1}distributions appearing in the decomposition: whereafn{\displaystyle a_{f}^{n}}stands for the number of appearances of thenth{\displaystyle n^{th}}word in non-spam e-mails andaf{\displaystyle a_{f}}stands for the total number of non-spam e-mails. Similarly,atn{\displaystyle a_{t}^{n}}stands for the number of appearances of thenth{\displaystyle n^{th}}word in spam e-mails andat{\displaystyle a_{t}}stands for the total number of spam e-mails. TheN{\displaystyle N}formsP(Wn∣Spam){\displaystyle P(W_{n}\mid {\text{Spam}})}are not yet completely specified because the2N+2{\displaystyle 2N+2}parametersafn=0,…,N−1{\displaystyle a_{f}^{n=0,\ldots ,N-1}},atn=0,…,N−1{\displaystyle a_{t}^{n=0,\ldots ,N-1}},af{\displaystyle a_{f}}andat{\displaystyle a_{t}}do not yet have values. These parameters could be identified either by batch processing a series of classified e-mails or by incrementally updating them using the user's classifications of e-mails as they arrive. Both methods could be combined: the system could start with initial standard values of these parameters taken from a generic database, and then incremental learning customizes the classifier to each individual user. The question asked of the program is: "what is the probability that a given text is spam, knowing which words appear and do not appear in this text?" It can be formalized as: which can be computed as follows: The denominator is a normalization constant.
It is not necessary to compute it to decide whether we are dealing with spam. For instance, an easy trick is to compute the ratio: This computation is faster and easier because it requires only2N{\displaystyle 2N}products. The Bayesian spam filter program is completely defined by: Bayesian filters (often called recursive Bayesian estimation) are generic probabilistic models for time-evolving processes. Numerous models are particular instances of this generic approach, for instance the Kalman filter or the hidden Markov model (HMM). The decomposition is based on: The parametrical forms are not constrained, and different choices lead to different well-known models: see Kalman filters and hidden Markov models just below. The typical question for such models isP(St+k∣O0∧⋯∧Ot){\displaystyle P\left(S^{t+k}\mid O^{0}\wedge \cdots \wedge O^{t}\right)}: what is the probability distribution for the state at timet+k{\displaystyle t+k}, knowing the observations from instant0{\displaystyle 0}tot{\displaystyle t}? The most common case is Bayesian filtering, wherek=0{\displaystyle k=0}: the present state is searched for, knowing past observations. It is also possible to do prediction(k>0){\displaystyle (k>0)}, extrapolating a future state from past observations, or smoothing(k<0){\displaystyle (k<0)}, recovering a past state from observations made either before or after that instant. More complicated questions may also be asked, as shown below in the HMM section.
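Returning to the spam filter: the Laplace-smoothed parameter estimates and the spam/non-spam ratio trick can be sketched as below. The class and variable names are illustrative, and the smoothing (1 + count over 2 + total) is one conventional choice for the parametric forms described earlier:

```python
# Sketch of the naive Bayes spam filter, assuming Laplace-smoothed estimates
# such as P(w_n | Spam) = (1 + a_t^n) / (2 + a_t), with a_t^n the count of
# word n in spam e-mails and a_t the number of spam e-mails.
class SpamFilter:
    def __init__(self, vocabulary):
        self.vocab = list(vocabulary)
        self.count_spam = {w: 0 for w in self.vocab}   # a_t^n
        self.count_ham = {w: 0 for w in self.vocab}    # a_f^n
        self.n_spam = 0                                # a_t
        self.n_ham = 0                                 # a_f

    def learn(self, words, is_spam):
        """Incremental identification of the parameters from one classified e-mail."""
        counts = self.count_spam if is_spam else self.count_ham
        for w in set(words) & set(self.vocab):
            counts[w] += 1
        if is_spam:
            self.n_spam += 1
        else:
            self.n_ham += 1

    def spam_ratio(self, words):
        """P(Spam | words) / P(non-spam | words): 2N products, no normalization needed."""
        present = set(words)
        ratio = (1 + self.n_spam) / (1 + self.n_ham)   # smoothed prior ratio
        for w in self.vocab:
            p_spam = (1 + self.count_spam[w]) / (2 + self.n_spam)
            p_ham = (1 + self.count_ham[w]) / (2 + self.n_ham)
            if w in present:
                ratio *= p_spam / p_ham
            else:
                ratio *= (1 - p_spam) / (1 - p_ham)
        return ratio

f = SpamFilter(["winner", "meeting", "free"])
f.learn(["winner", "free"], is_spam=True)
f.learn(["meeting"], is_spam=False)
print(f.spam_ratio(["free", "winner"]))  # ratio > 1 suggests spam
```

A ratio above 1 favors the spam hypothesis; in a real filter the decision threshold would be tuned to trade off false positives against false negatives.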
Bayesian filters(k=0){\displaystyle (k=0)}have a very interesting recursive property, which contributes greatly to their attractiveness:P(St|O0∧⋯∧Ot){\displaystyle P\left(S^{t}|O^{0}\wedge \cdots \wedge O^{t}\right)}may be computed simply fromP(St−1∣O0∧⋯∧Ot−1){\displaystyle P\left(S^{t-1}\mid O^{0}\wedge \cdots \wedge O^{t-1}\right)}with the following formula: Another interesting way to read this equation is as two phases: a prediction phase and an estimation phase: The well-known Kalman filters[6]are a special case of Bayesian filters. They are defined by the following Bayesian program: With these hypotheses, and by using the recursive formula, the inference problem can be solved analytically to answer the usualP(ST∣O0∧⋯∧OT∧π){\displaystyle P(S^{T}\mid O^{0}\wedge \cdots \wedge O^{T}\wedge \pi )}question. This leads to an extremely efficient algorithm, which explains the popularity of Kalman filters and the number of their everyday applications. When there are no obvious linear transition and observation models, it is still often possible, using a first-order Taylor expansion, to treat these models as locally linear. This generalization is commonly called the extended Kalman filter. Hidden Markov models (HMMs) are another very popular specialization of Bayesian filters. They are defined by the following Bayesian program: both specified using probability matrices. A typical question is: what is the most probable series of states that leads to the present state, knowing the past observations? This particular question may be answered with a specific and very efficient algorithm called the Viterbi algorithm. For learning the parameters of HMMs, the Baum–Welch algorithm has been developed.
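The two-phase recursion (prediction, then estimation) can be sketched for the scalar linear-Gaussian case, i.e. a one-dimensional Kalman filter. All the numeric parameters below are illustrative, not taken from the text:

```python
# One-dimensional Kalman filter as a special case of the recursive Bayesian
# filter: P(S^t | O^0..O^t) stays Gaussian, so only (mean, variance) is carried.
def kalman_step(mean, var, obs, a=1.0, q=0.04, h=1.0, r=0.25):
    """a: transition gain, q: transition noise, h: observation gain, r: observation noise."""
    # Prediction phase: push the previous posterior through the transition model.
    pred_mean = a * mean
    pred_var = a * a * var + q
    # Estimation phase: condition on the new observation (Bayes' theorem).
    k = pred_var * h / (h * h * pred_var + r)        # Kalman gain
    new_mean = pred_mean + k * (obs - h * pred_mean)
    new_var = (1 - k * h) * pred_var
    return new_mean, new_var

mean, var = 0.0, 10.0          # vague prior on the initial state
for obs in [1.2, 0.9, 1.1, 1.0, 1.05]:
    mean, var = kalman_step(mean, var, obs)
print(mean, var)
```

Each call implements exactly one application of the recursive formula: the posterior at time t-1 enters, the posterior at time t leaves, so memory use is constant regardless of how many observations have been processed.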
Since 2000, Bayesian programming has been used to develop both robotics applications and life-sciences models.[7] In robotics, Bayesian programming was applied to autonomous robotics,[8][9][10][11][12]robotic CAD systems,[13]advanced driver-assistance systems,[14]robotic arm control, mobile robotics,[15][16]human-robot interaction,[17]human-vehicle interaction (Bayesian autonomous driver models),[18][19][20][21][22]video game avatar programming and training,[23]and real-time strategy games (AI).[24] In life sciences, Bayesian programming was used in vision to reconstruct shape from motion,[25]to model visuo-vestibular interaction[26]and to study saccadic eye movements;[27]in speech perception and control to study early speech acquisition[28]and the emergence of articulatory-acoustic systems;[29]and to model handwriting perception and control.[30] Bayesian program learning has potential applications in voice recognition and synthesis, image recognition and natural language processing. It employs the principles of compositionality (building abstract representations from parts), causality (building complexity from parts) and learning to learn (using previously recognized concepts to ease the creation of new concepts).[31] The comparison between probabilistic approaches (not only Bayesian programming) and possibility theories continues to be debated. Possibility theories such as fuzzy sets,[32]fuzzy logic[33]and possibility theory[34]are alternatives to probability for modeling uncertainty. Their proponents argue that probability is insufficient or inconvenient for modeling certain aspects of incomplete or uncertain knowledge. The defense of probability is mainly based on Cox's theorem, which starts from four postulates concerning rational reasoning in the presence of uncertainty and demonstrates that the only mathematical framework satisfying these postulates is probability theory.
The argument is then that any approach other than probability necessarily violates one of these postulates, and that such a violation comes at a cost. The purpose of probabilistic programming is to unify the scope of classical programming languages with probabilistic modeling (especially Bayesian networks) in order to deal with uncertainty while profiting from the expressiveness of programming languages to encode complexity. Extended classical programming languages include logical languages as proposed in Probabilistic Horn Abduction,[35]Independent Choice Logic,[36]PRISM,[37]and ProbLog, which proposes an extension of Prolog. There are also extensions of functional programming languages (essentially Lisp and Scheme) such as IBAL or CHURCH. The underlying programming language can be object-oriented, as in BLOG and FACTORIE, or more standard, as in CES and FIGARO.[38] The purpose of Bayesian programming is different. Jaynes' precept of "probability as logic" argues that probability is an extension of and an alternative to logic, above which a complete theory of rationality, computation and programming can be rebuilt.[1]Bayesian programming attempts to replace classical languages with a programming approach based on probability that takes incompleteness and uncertainty into account. The precise comparison between the semantics and power of expression of Bayesian and probabilistic programming is an open question.
https://en.wikipedia.org/wiki/Bayesian_programming
In probability theory and statistics, the Chow–Liu tree is an efficient method for constructing a second-order product approximation of a joint probability distribution, first described in a paper by Chow & Liu (1968). The goals of such a decomposition, as with such Bayesian networks in general, may be either data compression or inference. The Chow–Liu method describes a joint probability distributionP(X1,X2,…,Xn){\displaystyle P(X_{1},X_{2},\ldots ,X_{n})}as a product of second-order conditional and marginal distributions. For example, the six-dimensional distributionP(X1,X2,X3,X4,X5,X6){\displaystyle P(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})}might be approximated as where each new term in the product introduces just one new variable, and the product can be represented as a first-order dependency tree, as shown in the figure. The Chow–Liu algorithm (below) determines which conditional probabilities are to be used in the product approximation.[1]In general, unless there are no third-order or higher-order interactions, the Chow–Liu approximation is indeed an approximation, and cannot capture the complete structure of the original distribution. Pearl (1988) provides a modern analysis of the Chow–Liu tree as a Bayesian network. Chow and Liu show how to select second-order terms for the product approximation so that, among all such second-order approximations (first-order dependency trees), the constructed approximationP′{\displaystyle P^{\prime }}has the minimum Kullback–Leibler divergence to the actual distributionP{\displaystyle P}, and is thus the closest approximation in the classical information-theoretic sense.
The Kullback–Leibler divergence between a second-order product approximation and the actual distribution is shown to be whereI(Xi;Xj(i)){\displaystyle I(X_{i};X_{j(i)})}is the mutual information between variableXi{\displaystyle X_{i}}and its parentXj(i){\displaystyle X_{j(i)}}andH(X1,X2,…,Xn){\displaystyle H(X_{1},X_{2},\ldots ,X_{n})}is the joint entropy of the variable set{X1,X2,…,Xn}{\displaystyle \{X_{1},X_{2},\ldots ,X_{n}\}}. Since the terms∑H(Xi){\displaystyle \sum H(X_{i})}andH(X1,X2,…,Xn){\displaystyle H(X_{1},X_{2},\ldots ,X_{n})}are independent of the dependency ordering in the tree, only the sum of the pairwise mutual informations,∑I(Xi;Xj(i)){\displaystyle \sum I(X_{i};X_{j(i)})}, determines the quality of the approximation. Thus, if every branch (edge) of the tree is given a weight corresponding to the mutual information between the variables at its vertices, then the tree which provides the optimal second-order approximation to the target distribution is simply the maximum-weight tree. The equation above also highlights the role of the dependencies in the approximation: when no dependencies exist, and the first term in the equation is absent, we have only an approximation based on first-order marginals, and the distance between the approximation and the true distribution is due to the redundancies that are not accounted for when the variables are treated as independent. As we specify second-order dependencies, we begin to capture some of that structure and reduce the distance between the two distributions. Chow and Liu provide a simple algorithm for constructing the optimal tree: at each stage of the procedure the algorithm simply adds the maximum mutual information pair to the tree. See the original paper, Chow & Liu (1968), for full details. A more efficient tree construction algorithm for the common case of sparse data was outlined in Meilă (1999).
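This construction can be sketched from samples: estimate the pairwise mutual informations empirically, then take a maximum-weight spanning tree (here via Kruskal's algorithm with a small union-find). The data below are a synthetic chain, chosen only to illustrate the idea:

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical I(X;Y) in nats from two aligned lists of discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chow_liu_edges(columns):
    """Maximum-weight spanning tree over variables, weighted by I(X_i; X_j)."""
    weights = {(i, j): mutual_information(columns[i], columns[j])
               for i, j in combinations(range(len(columns)), 2)}
    parent = list(range(len(columns)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = []
    for (i, j) in sorted(weights, key=weights.get, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:            # Kruskal: keep the edge if it joins two components
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Synthetic chain X0 -> X1 -> X2: X1 mostly copies X0, X2 mostly copies X1,
# so I(X0;X1) and I(X1;X2) exceed I(X0;X2).
x0 = [0, 0, 0, 0, 1, 1, 1, 1]
x1 = [0, 0, 0, 1, 1, 1, 1, 0]
x2 = [0, 0, 1, 1, 1, 1, 0, 0]
edges = chow_liu_edges([x0, x1, x2])
print(edges)
```

On this data the tree recovers the chain edges (0,1) and (1,2) and rejects the weaker (0,2) link, matching the maximum-weight-tree characterization above.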
Chow and Wagner proved in a later paper, Chow & Wagner (1973), that the learning of the Chow–Liu tree is consistent given samples (or observations) drawn i.i.d. from a tree-structured distribution. In other words, the probability of learning an incorrect tree decays to zero as the number of samples tends to infinity. The main idea in the proof is the continuity of the mutual information in the pairwise marginal distributions. More recently, the exponential rate of convergence of the error probability was provided.[2] The obvious problem which occurs when the actual distribution is not in fact a second-order dependency tree can still in some cases be addressed by fusing or aggregating densely connected subsets of variables to obtain a "large-node" Chow–Liu tree (Huang, King & Lyu 2002), or by extending the idea of greedy maximum branch-weight selection to non-tree (multiple-parent) structures (Williamson 2000). (Similar techniques of variable substitution and construction are common in the Bayes network literature, e.g., for dealing with loops; see Pearl (1988).) Generalizations of the Chow–Liu tree are the so-called t-cherry junction trees. It has been proved that t-cherry junction trees provide an approximation of a discrete multivariate probability distribution that is at least as good as the one given by the Chow–Liu tree. For the third-order t-cherry junction tree see (Kovács & Szántai 2010); for thekth-order t-cherry junction tree see (Szántai & Kovács 2010). The second-order t-cherry junction tree is in fact the Chow–Liu tree.
https://en.wikipedia.org/wiki/Chow%E2%80%93Liu_tree
In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) is already known to have occurred.[1]This method relies on event A standing in some relationship with another event B; in this situation, the event A can be analyzed by a conditional probability with respect to B. If the event of interest isAand the eventBis known or assumed to have occurred, "the conditional probability ofAgivenB", or "the probability ofAunder the conditionB", is usually written asP(A|B)[2]or occasionallyPB(A). This can also be understood as the fraction of the probability of B that intersects with A, or the ratio of the probability of both events happening to that of the "given" event happening:P(A∣B)=P(A∩B)P(B){\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}}.[3] For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. The conditional probability that someone sick is coughing might be 75%, in which case we would haveP(Cough)= 5% andP(Cough|Sick)= 75%. Although there is a relationship betweenAandBin this example, such a relationship or dependence betweenAandBis not necessary, nor do they have to occur simultaneously. P(A|B)may or may not be equal toP(A), the unconditional probability or absolute probability ofA. IfP(A|B) = P(A), then eventsAandBare said to be independent: in such a case, knowledge about either event does not alter the likelihood of the other.P(A|B)(the conditional probability ofAgivenB) typically differs fromP(B|A). For example, if a person has dengue fever, the person might have a 90% chance of testing positive for the disease.
In this case, what is being measured is that if eventB(having dengue) has occurred, the probability ofA(testing positive) given thatBoccurred is 90%: that is,P(A|B)= 90%. Alternatively, if a person tests positive for dengue fever, they may have only a 15% chance of actually having this rare disease, due to high false positive rates. In this case, the probability of the eventB(having dengue) given that the eventA(testing positive) has occurred is 15%:P(B|A)= 15%. Falsely equating the two probabilities can lead to various errors of reasoning, commonly seen through base rate fallacies. While conditional probabilities can provide extremely useful information, limited information is often supplied or at hand. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem:P(A∣B)=P(B∣A)P(A)P(B){\displaystyle P(A\mid B)={{P(B\mid A)P(A)} \over {P(B)}}}.[4]Another option is to display conditional probabilities in a conditional probability table to illuminate the relationship between events. Given two eventsAandBfrom the sigma-field of a probability space, with the unconditional probability ofBbeing greater than zero (i.e.,P(B) > 0), the conditional probability ofAgivenB(P(A∣B){\displaystyle P(A\mid B)}) is the probability ofAoccurring ifBhas or is assumed to have happened.[5]Ais assumed to be the set of all possible outcomes of an experiment or random trial that has a restricted or reduced sample space. The conditional probability can be found as the quotient of the probability of the joint intersection of eventsAandB, that is,P(A∩B){\displaystyle P(A\cap B)}(the probability at whichAandBoccur together), and the probability ofB:[2][6][7] For a sample space consisting of equally likely outcomes, the probability of the eventAis understood as the fraction of the number of outcomes inAto the number of all outcomes in the sample space.
Then, this equation is understood as the fraction of the setA∩B{\displaystyle A\cap B}to the setB. Note that the above equation is a definition, not just a theoretical result. We denote the quantityP(A∩B)P(B){\displaystyle {\frac {P(A\cap B)}{P(B)}}}asP(A∣B){\displaystyle P(A\mid B)}and call it the "conditional probability ofAgivenB". Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability: This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as "the probability ofBoccurring, multiplied by the probability ofAoccurring provided thatBhas occurred, is equal to the probability ofAandBoccurring together, although not necessarily at the same time". Additionally, this may be preferred philosophically: under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability ofA∩B{\displaystyle A\cap B}and introduces a symmetry with the summation axiom for the Poincaré formula: Conditional probability can be defined as the probability of a conditional eventAB{\displaystyle A_{B}}. The Goodman–Nguyen–Van Fraassen conditional event can be defined as: It can be shown that which meets the Kolmogorov definition of conditional probability.[9] IfP(B)=0{\displaystyle P(B)=0}, then according to the definition,P(A∣B){\displaystyle P(A\mid B)}is undefined. The case of greatest interest is that of a random variableY, conditioned on a continuous random variableXresulting in a particular outcomex. The eventB={X=x}{\displaystyle B=\{X=x\}}has probability zero and, as such, cannot be conditioned on. Instead of conditioning onXbeing exactlyx, we could condition on it being closer than distanceϵ{\displaystyle \epsilon }away fromx.
The eventB={x−ϵ<X<x+ϵ}{\displaystyle B=\{x-\epsilon <X<x+\epsilon \}}will generally have nonzero probability and hence can be conditioned on. We can then take the limit For example, if two continuous random variablesXandYhave a joint densityfX,Y(x,y){\displaystyle f_{X,Y}(x,y)}, then by L'Hôpital's rule and the Leibniz integral rule, upon differentiation with respect toϵ{\displaystyle \epsilon }: The resulting limit is the conditional probability distribution ofYgivenX, and exists when the denominator, the probability densityfX(x0){\displaystyle f_{X}(x_{0})}, is strictly positive. It is tempting to define the undefined probabilityP(A∣X=x){\displaystyle P(A\mid X=x)}using limit (1), but this cannot be done in a consistent manner. In particular, it is possible to find random variablesXandWand valuesx,wsuch that the events{X=x}{\displaystyle \{X=x\}}and{W=w}{\displaystyle \{W=w\}}are identical but the resulting limits are not: The Borel–Kolmogorov paradox demonstrates this with a geometrical argument. LetXbe a discrete random variable and its possible outcomes denotedV. For example, ifXrepresents the value of a rolled die, thenVis the set{1,2,3,4,5,6}{\displaystyle \{1,2,3,4,5,6\}}. Let us assume for the sake of presentation thatXis a discrete random variable, so that each value inVhas a nonzero probability. For a valuexinVand an eventA, the conditional probability is given byP(A∣X=x){\displaystyle P(A\mid X=x)}. Writing for short, we see that it is a function of two variables,xandA. For a fixedA, we can form the random variableY=c(X,A){\displaystyle Y=c(X,A)}. It represents an outcome ofP(A∣X=x){\displaystyle P(A\mid X=x)}whenever a valuexofXis observed. The conditional probability ofAgivenXcan thus be treated as a random variableYwith outcomes in the interval[0,1]{\displaystyle [0,1]}. From the law of total probability, its expected value is equal to the unconditional probability ofA.
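The closing identity, that the expected value of P(A∣X) equals the unconditional P(A), can be checked by enumeration. The event A below (a die and an independent coin) is an illustrative choice, not from the text:

```python
from fractions import Fraction
from itertools import product

# X: a fair die; C: an independent fair coin.
# A: the event "X is even and the coin shows heads".
space = list(product(range(1, 7), ["H", "T"]))   # 12 equally likely outcomes

def p(event):
    return Fraction(sum(1 for o in space if event(o)), len(space))

a = lambda o: o[0] % 2 == 0 and o[1] == "H"

# P(A | X = x) as a function of the observed x: the random variable Y = c(X, A).
cond = {x: p(lambda o, x=x: a(o) and o[0] == x) / p(lambda o, x=x: o[0] == x)
        for x in range(1, 7)}

# Law of total probability: E[P(A | X)] = sum_x P(A | X = x) P(X = x) = P(A).
expected = sum(cond[x] * Fraction(1, 6) for x in range(1, 7))
print(expected, p(a))   # both equal 1/4
```

Here P(A∣X=x) is 1/2 for even x and 0 for odd x, and averaging those values over the six equally likely faces recovers P(A) = 1/4 exactly.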
The partial conditional probabilityP(A∣B1≡b1,…,Bm≡bm){\displaystyle P(A\mid B_{1}\equiv b_{1},\ldots ,B_{m}\equiv b_{m})}is the probability of eventA{\displaystyle A}given that each of the condition eventsBi{\displaystyle B_{i}}has occurred to a degreebi{\displaystyle b_{i}}(degree of belief, degree of experience) that might be different from 100%. Frequentistically, partial conditional probability makes sense if the conditions are tested in experiment repetitions of appropriate lengthn{\displaystyle n}.[10]Suchn{\displaystyle n}-bounded partial conditional probability can be defined as the conditionally expected average occurrence of eventA{\displaystyle A}in testbeds of lengthn{\displaystyle n}that adhere to all of the probability specificationsBi≡bi{\displaystyle B_{i}\equiv b_{i}}, i.e.: Based on that, partial conditional probability can be defined as wherebin∈N{\displaystyle b_{i}n\in \mathbb {N} }[10] Jeffrey conditionalization[11][12]is a special case of partial conditional probability, in which the condition events must form a partition: Suppose that somebody secretly rolls two fair six-sided dice, and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5. Probability thatD1= 2: Table 1 shows the sample space of 36 combinations of rolled values of the two dice, each of which occurs with probability 1/36, with the numbers displayed in the red and dark gray cells beingD1+D2. D1= 2 in exactly 6 of the 36 outcomes; thusP(D1= 2) =6⁄36=1⁄6. Probability thatD1+D2≤ 5: Table 2 shows thatD1+D2≤ 5 for exactly 10 of the 36 outcomes; thusP(D1+D2≤ 5) =10⁄36. Probability thatD1= 2 given thatD1+D2≤ 5: Table 3 shows that for 3 of these 10 outcomes,D1= 2. Thus, the conditional probability is P(D1= 2 |D1+D2≤ 5) =3⁄10= 0.3. Here, in the earlier notation for the definition of conditional probability, the conditioning eventBis thatD1+D2≤ 5, and the eventAisD1= 2.
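The counts behind Tables 1 to 3 can be reproduced by direct enumeration of the 36-outcome sample space:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))
b = [o for o in outcomes if o[0] + o[1] <= 5]      # conditioning event B: D1 + D2 <= 5
a_and_b = [o for o in b if o[0] == 2]              # A ∩ B: additionally D1 = 2

p_b = Fraction(len(b), len(outcomes))              # 10/36
p_a_given_b = Fraction(len(a_and_b), len(b))       # 3/10
print(p_b, p_a_given_b)
```

The enumeration confirms the table counts: 10 of the 36 outcomes satisfy the condition, and 3 of those 10 also have D1 = 2.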
We haveP(A∣B)=P(A∩B)P(B)=3/3610/36=310,{\displaystyle P(A\mid B)={\tfrac {P(A\cap B)}{P(B)}}={\tfrac {3/36}{10/36}}={\tfrac {3}{10}},}as seen in the table. In statistical inference, the conditional probability is an update of the probability of an event based on new information.[13]The new information can be incorporated as follows:[1] This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability ofAwith respect toXwill be preserved with respect toB(cf. the Formal Derivation below). The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is,P(A) is the probability ofAbefore accounting for evidenceE, andP(A|E) is the probability ofAafter having accounted for evidenceEor after having updatedP(A). This is consistent with the frequentist interpretation, which is the first definition given above. When Morse code is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous. This is often caused by interference in the transmission of a message. Therefore, it is important to consider, when sending a "dot" for example, the probability that a "dot" was received. This is represented by:P(dot sent|dot received)=P(dot received|dot sent)P(dot sent)P(dot received).{\displaystyle P({\text{dot sent }}|{\text{ dot received}})=P({\text{dot received }}|{\text{ dot sent}}){\frac {P({\text{dot sent}})}{P({\text{dot received}})}}.}In Morse code, the ratio of dots to dashes is 3:4 at the point of sending, so the probabilities of a "dot" and a "dash" areP(dot sent)=37andP(dash sent)=47{\displaystyle P({\text{dot sent}})={\frac {3}{7}}\ and\ P({\text{dash sent}})={\frac {4}{7}}}.
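Taking these priors together with the 1/10 transmission error rates assumed in the worked calculation that follows, the whole example can be checked with exact fractions:

```python
from fractions import Fraction

# Priors from the 3:4 dot-to-dash ratio, and the assumed channel error rates.
p_dot_sent = Fraction(3, 7)
p_dash_sent = Fraction(4, 7)
p_dot_rx_given_dot = Fraction(9, 10)    # a dot becomes a dash with probability 1/10
p_dot_rx_given_dash = Fraction(1, 10)   # a dash becomes a dot with probability 1/10

# Law of total probability for the received symbol, then Bayes' theorem.
p_dot_rx = p_dot_rx_given_dot * p_dot_sent + p_dot_rx_given_dash * p_dash_sent
p_dot_sent_given_rx = p_dot_rx_given_dot * p_dot_sent / p_dot_rx
print(p_dot_rx, p_dot_sent_given_rx)   # 31/70 and 27/31
```

Using exact fractions avoids any floating-point rounding, so the results can be compared directly with the hand calculation.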
If it is assumed that the probability that a dot is transmitted as a dash is 1/10, and that the probability that a dash is transmitted as a dot is likewise 1/10, then Bayes' rule can be used to calculateP(dot received){\displaystyle P({\text{dot received}})}. P(dot received)=P(dot received∩dot sent)+P(dot received∩dash sent){\displaystyle P({\text{dot received}})=P({\text{dot received }}\cap {\text{ dot sent}})+P({\text{dot received }}\cap {\text{ dash sent}})} P(dot received)=P(dot received∣dot sent)P(dot sent)+P(dot received∣dash sent)P(dash sent){\displaystyle P({\text{dot received}})=P({\text{dot received }}\mid {\text{ dot sent}})P({\text{dot sent}})+P({\text{dot received }}\mid {\text{ dash sent}})P({\text{dash sent}})} P(dot received)=910×37+110×47=3170{\displaystyle P({\text{dot received}})={\frac {9}{10}}\times {\frac {3}{7}}+{\frac {1}{10}}\times {\frac {4}{7}}={\frac {31}{70}}} Now,P(dot sent∣dot received){\displaystyle P({\text{dot sent }}\mid {\text{ dot received}})}can be calculated: P(dot sent∣dot received)=P(dot received∣dot sent)P(dot sent)P(dot received)=910×373170=2731{\displaystyle P({\text{dot sent }}\mid {\text{ dot received}})=P({\text{dot received }}\mid {\text{ dot sent}}){\frac {P({\text{dot sent}})}{P({\text{dot received}})}}={\frac {9}{10}}\times {\frac {\frac {3}{7}}{\frac {31}{70}}}={\frac {27}{31}}}[14] EventsAandBare defined to be statistically independent if the probability of the intersection of A and B is equal to the product of the probabilities of A and B: IfP(B) is not zero, then this is equivalent to the statement that Similarly, ifP(A) is not zero, then is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined, and the preferred definition is symmetrical inAandB.
Independence does not refer to disjoint events.[15] Also, given the independent event pair [A B] and an event C, the pair is defined to be conditionally independent if the following product holds true:[16] P(AB∣C)=P(A∣C)P(B∣C){\displaystyle P(AB\mid C)=P(A\mid C)P(B\mid C)} This theorem can be useful in applications where multiple independent events are being observed. Independent events vs. mutually exclusive events: the concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero). In fact, mutually exclusive events cannot be statistically independent (unless both of them are impossible), since knowing that one occurs gives information about the other (in particular, that the latter will certainly not occur). In general, it cannot be assumed thatP(A|B) ≈P(B|A). This can be an insidious error, even for those who are highly conversant with statistics.[17]The relationship betweenP(A|B) andP(B|A) is given by Bayes' theorem: That is,P(A|B) ≈P(B|A) only ifP(B)/P(A) ≈ 1, or equivalently,P(A) ≈P(B). In general, it cannot be assumed thatP(A) ≈P(A|B). These probabilities are linked through the law of total probability: where the events(Bn){\displaystyle (B_{n})}form a countable partition ofΩ{\displaystyle \Omega }. This fallacy may arise through selection bias.[18]For example, in the context of a medical claim, letSCbe the event that a sequela (chronic disease)Soccurs as a consequence of circumstance (acute condition)C. LetHbe the event that an individual seeks medical help. Suppose that in most cases,Cdoes not causeS(so thatP(SC) is low). Suppose also that medical attention is sought only ifShas occurred due toC. From experience of patients, a doctor may therefore erroneously conclude thatP(SC) is high. The actual probability observed by the doctor isP(SC|H).
Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is conservatism. Formally,P(A|B) is defined as the probability ofAaccording to a new probability function on the sample space, such that outcomes not inBhave probability 0 and that it is consistent with all original probability measures.[19][20] Let Ω be a discrete sample space with elementary events{ω}, and letPbe the probability measure with respect to theσ-algebra of Ω. Suppose we are told that the eventB⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. All events that are not inBwill have null probability in the new distribution. For events inB, two conditions must be met: the probability ofBis one, and the relative magnitudes of the probabilities must be preserved. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog ofPin which the probability ofBis one; every event that is not inBtherefore has a null probability. Hence, for some scale factorα, the new distribution must satisfy: Substituting 1 and 2 into 3 to selectα: So the new probability distribution is Now for a general eventA,
https://en.wikipedia.org/wiki/Conditional_probability