In mathematics, the Markov brothers' inequality is an inequality proved in the 1890s by the brothers Andrey Markov and Vladimir Markov, two Russian mathematicians. This inequality bounds the maximum of the derivatives of a polynomial on an interval in terms of the maximum of the polynomial.[1] For k = 1 it was proved by Andrey Markov,[2] and for k = 2, 3, ... by his brother Vladimir Markov.[3] Let P be a polynomial of degree ≤ n. Then for all nonnegative integers k,

max_{−1 ≤ x ≤ 1} |P^(k)(x)| ≤ [n²(n² − 1²)(n² − 2²) ⋯ (n² − (k − 1)²)] / [1 · 3 · 5 ⋯ (2k − 1)] · max_{−1 ≤ x ≤ 1} |P(x)|,

where the constant on the right equals T_n^(k)(1), the k-th derivative of the degree-n Chebyshev polynomial of the first kind evaluated at 1. This inequality is tight, as equality is attained for Chebyshev polynomials of the first kind. Markov's inequality is used to obtain lower bounds in computational complexity theory via the so-called "polynomial method".[4]
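The tightness claim is easy to check numerically. The following sketch (using NumPy's Chebyshev module; the helper name markov_constant is an invention for this example) verifies that for T_n the maximum of the k-th derivative on [−1, 1] meets the Markov bound exactly:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev
from math import prod

def markov_constant(n: int, k: int) -> float:
    # T_n^{(k)}(1) = prod_{j=0}^{k-1} (n^2 - j^2) / (2j + 1)
    return prod((n**2 - j**2) / (2 * j + 1) for j in range(k))

n = 5
T = Chebyshev.basis(n)                 # max |T_n| on [-1, 1] is 1
xs = np.linspace(-1.0, 1.0, 100_001)   # grid includes the endpoints
for k in (1, 2, 3):
    max_deriv = np.abs(T.deriv(k)(xs)).max()
    # equality: the maximum of |T_n^{(k)}| is attained at x = +/-1
    assert np.isclose(max_deriv, markov_constant(n, k))
```

For k = 1 and n = 5 the bound is n² = 25, attained by T₅′ at the endpoints.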
https://en.wikipedia.org/wiki/Markov_brothers%27_inequality
In functional programming, an applicative functor, or an applicative for short, is an intermediate structure between functors and monads. In category theory they are called closed monoidal functors. Applicative functors allow for functorial computations to be sequenced (unlike plain functors), but don't allow using results from prior computations in the definition of subsequent ones (unlike monads). Applicative functors are the programming equivalent of lax monoidal functors with tensorial strength in category theory. Applicative functors were introduced in 2008 by Conor McBride and Ross Paterson in their paper Applicative programming with effects.[1] Applicative functors first appeared as a library feature in Haskell, but have since spread to other languages such as Idris, Agda, OCaml, Scala, and F#. Glasgow Haskell, Idris, and F# offer language features designed to ease programming with applicative functors. In Haskell, applicative functors are implemented in the Applicative type class. While in languages like Haskell monads are applicative functors, this is not always the case in general settings of category theory: examples of monads which are not strong can be found on MathOverflow. In Haskell, an applicative is a parameterized type that can be thought of as being a container for data of the parameter type with two additional methods: pure and <*>. The pure method for an applicative of parameterized type f has type pure :: a -> f a and can be thought of as bringing values into the applicative. The <*> method for an applicative of type f has type (<*>) :: f (a -> b) -> f a -> f b and can be thought of as the equivalent of function application inside the applicative.[2] Alternatively, instead of providing <*>, one may provide a function called liftA2, of type (a -> b -> c) -> f a -> f b -> f c. These two functions may be defined in terms of each other; therefore only one is needed for a minimally complete definition.[3] Applicatives are also required to satisfy four equational laws: identity, composition, homomorphism, and interchange.[3] Every applicative is a functor.
To be explicit, given the methods pure and <*>, fmap can be implemented as fmap g x = pure g <*> x.[3] The commonly used notation g <$> x is equivalent to pure g <*> x. In Haskell, the Maybe type can be made an instance of the type class Applicative using the following definition:[2] pure = Just, while Nothing <*> _ = Nothing and Just g <*> x = fmap g x. As stated in the Definition section, pure turns an a into a Maybe a, and <*> applies a Maybe function to a Maybe value. Using the Maybe applicative for type a allows one to operate on values of type a with the error being handled automatically by the applicative machinery. For example, to add m :: Maybe Int and n :: Maybe Int, one needs only write (+) <$> m <*> n. For the non-error case, adding m = Just i and n = Just j gives Just (i + j). If either of m or n is Nothing, then the result will be Nothing also. This example also demonstrates how applicatives allow a sort of generalized function application.
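The Maybe applicative described above can also be sketched in Python. This is an illustrative analogue only (the Maybe class and the names pure, fmap, and ap are inventions for this sketch, with ap playing the role of <*>), showing how failure propagates automatically through applicative-style application:

```python
from typing import Callable, Generic, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class Maybe(Generic[A]):
    """Either a present value (just=True) or Nothing (just=False)."""
    def __init__(self, value: Optional[A], just: bool):
        self.value, self.just = value, just

def pure(x: A) -> Maybe[A]:          # analogue of pure :: a -> f a
    return Maybe(x, True)

NOTHING: Maybe = Maybe(None, False)

def fmap(g: Callable[[A], B], m: Maybe[A]) -> Maybe[B]:
    return pure(g(m.value)) if m.just else NOTHING

def ap(mg: Maybe[Callable[[A], B]], mx: Maybe[A]) -> Maybe[B]:
    """Analogue of <*>: apply a wrapped function to a wrapped value."""
    return fmap(mg.value, mx) if mg.just else NOTHING

add = lambda i: lambda j: i + j      # curried addition, as in (+)
# the non-error case: Just 3 "plus" Just 4 gives Just 7
assert ap(ap(pure(add), pure(3)), pure(4)).value == 7
# a Nothing on either side makes the whole result Nothing
assert not ap(ap(pure(add), NOTHING), pure(4)).just
```

The nested ap calls mirror the Haskell expression pure (+) <*> m <*> n.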
https://en.wikipedia.org/wiki/Applicative_functor
The following outline is provided as an overview of and topical guide to human–computer interaction: Human–computer interaction (HCI) is the intersection of computer science and behavioral sciences; the field involves the study, planning, and design of the interaction between people (users) and computers. Attention to human–machine interaction is important, because poorly designed human–machine interfaces can lead to many unexpected problems. A classic example of this is the Three Mile Island accident, where investigations concluded that the design of the human–machine interface was at least partially responsible for the disaster. Human–computer interaction can be described as all of the following: Human–computer interaction draws from the following fields: History of human–computer interaction Hardware input/output devices and peripherals: Motion pictures featuring interesting user interfaces: Industrial labs and companies known for innovation and research in HCI:
https://en.wikipedia.org/wiki/Outline_of_human%E2%80%93computer_interaction
In logic, mathematics, computer science, and linguistics, a formal language is a set of strings whose symbols are taken from a set called an "alphabet". The alphabet of a formal language consists of symbols that concatenate into strings (also called "words").[1] Words that belong to a particular formal language are sometimes called well-formed words. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar. In computer science, formal languages are used, among other things, as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way. The field of formal language theory studies primarily the purely syntactic aspects of such languages, that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages. In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs.
Later, Carl Friedrich Gauss investigated the problem of Gauss codes.[2] Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in Begriffsschrift (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903).[3] This described a "formal language of pure language."[4] In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed "Thue systems", and gave an early example of an undecidable problem.[5] Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble",[6] and later devised the canonical system for the creation of formal languages. In 1907, Leonardo Torres Quevedo introduced a formal language for the description of mechanical drawings (mechanical devices), in Vienna. He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines").[7] Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools.[8] Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy.[9] In 1959 John Backus developed the Backus–Naur form to describe the syntax of a high-level programming language, following his work in the creation of FORTRAN.[10] Peter Naur was the secretary/editor for the ALGOL 60 Report, in which he used Backus–Naur form to describe the formal part of ALGOL 60. An alphabet, in the context of formal languages, can be any set; its elements are called letters. An alphabet may contain an infinite number of elements;[note 1] however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them.
It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode. A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word. In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor. Given a non-empty set Σ, a formal language L over Σ is a subset of Σ*, the set of all possible finite-length words over Σ. We call the set Σ the alphabet of L. On the other hand, given a formal language L over Σ, a word w ∈ Σ* is well-formed if w ∈ L. Similarly, an expression E ⊆ Σ* is well-formed if E ⊆ L. Sometimes, a formal language L over Σ has a set of clear rules and constraints for the creation of all possible well-formed words from Σ*. In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant. On the other hand, one can just say "a formal language L" when its alphabet Σ is clear from the context.
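The definitions above are easy to make concrete. A small sketch (illustrative only) that enumerates the words of each length over Σ = {a, b} and checks the stated properties of the empty word and concatenation:

```python
from itertools import product

sigma = ("a", "b")

def words_of_length(n: int) -> list[str]:
    """All words of length n over sigma; n = 0 yields only the empty word."""
    return ["".join(p) for p in product(sigma, repeat=n)]

# exactly one word of length 0: the empty word
assert words_of_length(0) == [""]
# over a k-letter alphabet there are k^n words of length n
assert len(words_of_length(3)) == 2 ** 3

u, v = "ab", "bba"
w = u + v                        # concatenation of words
assert len(w) == len(u) + len(v) # lengths add
assert u + "" == u               # the empty word is the identity
```

Iterating words_of_length over n = 0, 1, 2, ... enumerates all of Σ*, which is infinite even though Σ is finite.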
While formal language theory usually concerns itself with formal languages that are described by some syntactic rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being accompanied with a formal grammar that describes it. The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}: Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, that "+" means addition, or that "23+4=555" is false. For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅). However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {a, b, ab, cba}. Here are some examples of formal languages: Formal languages are used as tools in multiple disciplines.
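Membership in the example language L can be decided mechanically. The sketch below is one possible formalization of the rules (which are elided above), assuming a numeral is a nonempty digit string, a sum is one or more numerals joined by "+", and a word of L is either a sum or an equality between two sums:

```python
import re

NUMERAL = r"[0-9]+"
SUM = rf"{NUMERAL}(\+{NUMERAL})*"          # e.g. "23" or "23+4"
L = re.compile(rf"^{SUM}(={SUM})?$")       # optionally "sum = sum"

# well-formed, even though it is arithmetically false:
# L captures syntax only, not semantics
assert L.match("23+4=555")
# not well-formed under these rules
assert not L.match("=234=+")
```

The recognizer checks only the shape of the string; nothing in it knows that "+" means addition or that 23 + 4 ≠ 555.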
However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as Typical questions asked about such formalisms include: Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications. Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations. Examples: suppose L₁ and L₂ are languages over some common alphabet Σ. Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right.[11] A compiler usually has two distinct components.
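For finite languages the set operations and the element-wise string operations described above can be computed directly. A small sketch, treating languages as Python sets of strings:

```python
L1 = {"a", "ab"}
L2 = {"", "b"}

# standard set operation: union
union = L1 | L2

# element-wise string operation: language concatenation,
# every word of L1 followed by every word of L2
concat = {u + v for u in L1 for v in L2}

assert union == {"", "a", "b", "ab"}
assert concat == {"a", "ab", "abb"}   # "a"+"", "a"+"b"/"ab"+"", "ab"+"b"
```

Because "a"+"b" and "ab"+"" both yield "ab", the concatenation of two languages can have fewer words than the product of their sizes; this kind of observation is the starting point for studying closure properties.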
A lexical analyzer, sometimes generated by a tool like lex, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like yacc, attempts to decide if the source program is syntactically valid, that is, whether it is well formed with respect to the programming language grammar for which the compiler was built. Of course, compilers do more than just parse the source code; they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute. In mathematical logic, a formal theory is a set of sentences expressed in a formal language. A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems FS and FS′ may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not the other, for instance).
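The lexical-analysis stage described above can be illustrated in a few lines: each token class is itself a regular language, so a lexer is little more than a union of regular expressions tried in order. This sketch is illustrative only (the token names and the tokenize helper are inventions for this example, not lex output):

```python
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),            # numeric literals
    ("IDENT",  r"[A-Za-z_]\w*"),   # identifiers and keywords
    ("OP",     r"[+\-*/=]"),       # operator symbols
    ("SKIP",   r"\s+"),            # whitespace, discarded
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source: str):
    """Yield (kind, text) pairs; each kind is specified by a regular expression."""
    for m in TOKEN_RE.finditer(source):
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()

assert list(tokenize("x = 42 + y")) == [
    ("IDENT", "x"), ("OP", "="), ("NUMBER", "42"), ("OP", "+"), ("IDENT", "y"),
]
```

A parser would then consume this token stream and check it against the (typically context-free) grammar of the language, producing an abstract syntax tree.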
A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions), each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions. Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas, usually a truth value. The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes true.
https://en.wikipedia.org/wiki/Formal_language
Engineering notation or engineering form (also technical notation) is a version of scientific notation in which the exponent of ten is always selected to be divisible by three to match the common metric prefixes, i.e. scientific notation that aligns with powers of a thousand, for example, 531×10³ instead of 5.31×10⁵ (but on calculator displays written without the ×10 to save space). As an alternative to writing powers of 10, SI prefixes can be used,[1] which also usually provide steps of a factor of a thousand.[nb 1] On most calculators, engineering notation is called "ENG" mode, as scientific notation is denoted "SCI". An early implementation of engineering notation, in the form of range selection and number display with SI prefixes, was introduced in the computerized HP 5360A frequency counter by Hewlett-Packard in 1969.[1] Based on an idea by Peter D. Dickinson,[2][1] the first calculator to support engineering notation displaying the power-of-ten exponent values was the HP-25 in 1975.[3] It was implemented as a dedicated display mode in addition to scientific notation. In 1975, Commodore introduced a number of scientific calculators (like the SR4148/SR4148R[4] and SR4190R[5]) providing a variable scientific notation, where pressing the EE↓ and EE↑ keys shifted the exponent and decimal point by ±1[nb 2] in scientific notation. Between 1976 and 1980 the same exponent-shift facility was also available on some Texas Instruments calculators of the pre-LCD era, such as early SR-40,[6][7] TI-30[8][9][10][11][12][13][14][15] and TI-45[16][17] model variants, utilizing (INV) EE↓ instead. This can be seen as a precursor to a feature implemented on many Casio calculators since 1978/1979 (e.g.
in the FX-501P/FX-502P), where number display in engineering notation is available on demand by a single press of the (INV) ENG button (instead of having to activate a dedicated display mode as on most other calculators), and subsequent button presses shift the exponent and decimal point of the number displayed by ±3[nb 2] in order to easily let results match a desired prefix. Some graphical calculators (for example the fx-9860G) in the 2000s also support the display of some SI prefixes (f, p, n, μ, m, k, M, G, T, P, E) as suffixes in engineering mode. Compared to normalized scientific notation, one disadvantage of using SI prefixes and engineering notation is that significant figures are not always readily apparent when the smallest significant digit or digits are 0. For example, 500 μm and 500×10⁻⁶ m cannot express the uncertainty distinctions between 5×10⁻⁴ m, 5.0×10⁻⁴ m, and 5.00×10⁻⁴ m. This can be solved by changing the range of the coefficient in front of the power from the common 1–1000 to 0.001–1.0. In some cases this may be suitable; in others it may be impractical. In the previous example, 0.5 mm, 0.50 mm, or 0.500 mm would have been used to show uncertainty and significant figures. It is also common to state the precision explicitly, such as "47 kΩ ±5%". Another example: when the speed of light (exactly 299792458 m/s[18] by the definition of the meter) is expressed as 3.00×10⁸ m/s or 3.00×10⁵ km/s, it is clear that it is between 299500 km/s and 300500 km/s, but when using 300×10⁶ m/s, or 300×10³ km/s, 300000 km/s, or the unusual but short 300 Mm/s, this is not clear. A possibility is using 0.300×10⁹ m/s or 0.300 Gm/s. On the other hand, engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication.
For example, 12.5×10⁻⁹ m can be read as "twelve-point-five nanometers" (10⁻⁹ being nano) and written as 12.5 nm, while its scientific notation equivalent 1.25×10⁻⁸ m would likely be read out as "one-point-two-five times ten-to-the-negative-eight meters". Engineering notation, like scientific notation generally, can use the E notation, such that 3.0×10⁻⁹ can be written as 3.0E−9 or 3.0e−9. The E (or e) should not be confused with Euler's number e or the symbol for the exa- prefix. Just as decimal engineering notation can be viewed as a base-1000 scientific notation (10³ = 1000), binary engineering notation relates to a base-1024 scientific notation (2¹⁰ = 1024), where the exponent of two must be divisible by ten. This is closely related to the base-2 floating-point representation (B notation) commonly used in computer arithmetic, and the usage of IEC binary prefixes, e.g. 1B10 for 1×2¹⁰, 1B20 for 1×2²⁰, 1B30 for 1×2³⁰, 1B40 for 1×2⁴⁰, etc.[19]
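The conversion rule, "round the power-of-ten exponent down to a multiple of three so the coefficient lands in [1, 1000)", can be sketched in a few lines of Python. The function names and prefix table below are inventions for this sketch:

```python
import math

SI_PREFIXES = {-12: "p", -9: "n", -6: "µ", -3: "m",
               0: "", 3: "k", 6: "M", 9: "G", 12: "T"}

def to_engineering(x: float) -> tuple[float, int]:
    """Split x into (coefficient, exponent), with the exponent a multiple
    of 3 and the coefficient in [1, 1000)."""
    if x == 0:
        return 0.0, 0
    exp = math.floor(math.log10(abs(x)))
    eng_exp = 3 * (exp // 3)      # round the exponent down to a multiple of 3
    return x / 10 ** eng_exp, eng_exp

def format_eng(x: float, unit: str = "") -> str:
    """Render x with an SI prefix where one exists, else in e-notation."""
    coeff, eng_exp = to_engineering(x)
    prefix = SI_PREFIXES.get(eng_exp)
    if prefix is not None:
        return f"{coeff:g} {prefix}{unit}".rstrip()
    return f"{coeff:g}e{eng_exp} {unit}".rstrip()

assert to_engineering(5.31e5) == (531.0, 3)   # 5.31x10^5 -> 531x10^3
assert format_eng(1.25e-8, "m") == "12.5 nm"  # the article's nanometer example
```

As the article notes, this formatting discards information about significant figures: format_eng cannot distinguish 5×10⁻⁴ m from 5.00×10⁻⁴ m.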
https://en.wikipedia.org/wiki/Engineering_notation
The Society for Mathematical Biology (SMB) is an international association co-founded in 1972 in the United States by George Karreman, Herbert Daniel Landahl and (as initial chair) Anthony Bartholomay for the furtherance of joint scientific activities between the mathematics and biology research communities.[1][2] The society publishes the Bulletin of Mathematical Biology,[3][4] as well as the quarterly SMB newsletter.[5] The Society for Mathematical Biology emerged and grew from the earlier school of mathematical biophysics, initiated and supported by the founder of mathematical biology, Nicolas Rashevsky.[6][7] Thus, the roots of SMB go back to the publication in 1939 of the first international journal of mathematical biology, previously entitled The Bulletin of Mathematical Biophysics, which was founded by Nicolas Rashevsky and is currently published by SMB under the name Bulletin of Mathematical Biology.[8] Rashevsky also founded in 1969 the non-profit organization "Mathematical Biology, Incorporated", the precursor of SMB. Another notable member of the University of Chicago school of mathematical biology was Anatol Rapoport, whose major interests were in developing basic concepts in the related area of mathematical sociology; he co-founded the Society for General Systems Research and became its president in 1965. Herbert D. Landahl was initially also a member of Rashevsky's school of mathematical biology, and became the second president of SMB in the 1980s; both Herbert Landahl and Robert Rosen from Rashevsky's research group were focused on dynamical-systems approaches to complex systems biology, with the latter becoming in 1980 the president of the Society for General Systems Research. The Society for Mathematical Biology is governed by its Officers and Board of Directors, elected by the membership.
The current SMB President is Jane Heffernan (York University), and the Past-President serving as vice president is Heiko Enderling (Moffitt Cancer Center). The SMB secretary is Jon Forde (Hobart and William Smith Colleges), and the treasurer is Stanca Ciupe (Virginia Tech). The current Board of Directors is composed of Ruth Baker (University of Oxford), Padmini Rangamani (University of California San Diego), Amina Eladdadi (The College of St Rose), Peter Kim (The University of Sydney), Robyn Araujo (Queensland University of Technology), and Amber Smith (University of Tennessee Health Science Center). In addition to its research and news publications, the society supports education in mathematical biology, mathematical biophysics, complex systems biology and theoretical biology through sponsorship of several topic-focused graduate and postdoctoral courses. To encourage and stimulate young researchers in this relatively new and rapidly developing field of mathematical biology, the society awards several prizes and regularly lists new international opportunities for researchers and students in the field.[9] The society publishes the Bulletin of Mathematical Biology. The Bulletin was founded by Nicolas Rashevsky, who is generally recognized as the founder of the first organized group in mathematical biology in the world. The journal was originally published as the Bulletin of Mathematical Biophysics, and quickly became the classical journal in general mathematical biology, serving as the principal publication outlet for the majority of mathematical biologists. Many classical papers have appeared in the Bulletin, and several of these are familiar to biologists. It has become an important avenue for the exchange and transmission of new ideas and approaches to biological problems, and incorporates both the quantitative and qualitative aspects of mathematical models and characterizations of biological processes and systems.[10]
Rashevsky remained the editor of the Bulletin until his death on January 16, 1972. Later editors of the Bulletin have included Reinhard Laubenbacher (University of Connecticut Health).
https://en.wikipedia.org/wiki/Society_for_Mathematical_Biology
In Unix operating systems, the term wheel refers to a user account with a wheel bit, a system setting that provides additional special system privileges that empower a user to execute restricted commands that ordinary user accounts cannot access.[1][2] The term wheel was first applied to computer user privilege levels after the introduction of the TENEX operating system, later distributed under the name TOPS-20, in the 1960s and early 1970s.[2][3] The term was derived from the slang phrase big wheel, referring to a person with great power or influence.[1] In the 1980s, the term was imported into Unix culture due to the migration of operating system developers and users from TENEX/TOPS-20 to Unix.[2] Modern Unix systems generally use user groups as a security protocol to control access privileges. The wheel group is a special user group used on some Unix systems, mostly BSD systems,[citation needed] to control access to the su[4][5] or sudo command, which allows a user to masquerade as another user (usually the superuser).[1][2][6] Debian and its derivatives create a group called sudo with a purpose similar to that of a wheel group.[7] The phrase wheel war, which originated at Stanford University,[8] is a term used in computer culture, first documented in the 1983 version of The Jargon File. A "wheel war" was a user conflict in a multi-user (see also: multiseat) computer system, in which students with administrative privileges would attempt to lock each other out of a university's computer system, sometimes causing unintentional harm to other users.[9]
https://en.wikipedia.org/wiki/Wheel_(computing)
In computing, a storage violation is a hardware or software fault that occurs when a task attempts to access an area of computer storage which it is not permitted to access. A storage violation can, for instance, consist of reading from, writing to, or freeing storage not owned by the task. A common type of storage violation is known as a stack buffer overflow, where a program attempts to exceed the limits set for its call stack. It can also refer to attempted modification of memory "owned" by another thread where there is incomplete (or no) memory protection. Storage violations can occur in transaction systems such as CICS in circumstances where it is possible to write to storage not owned by the transaction; such violations can be reduced by enabling features such as storage protection and transaction isolation. Storage violations can be difficult to detect, as a program can often run for a period of time after the violation before it crashes. For example, a pointer to a freed area of memory can be retained and later reused, causing an error. As a result, efforts focus on detecting violations as they occur, rather than later when the problem is observed. In systems such as CICS, storage violations are sometimes detected (by the CICS kernel) by the use of "signatures", which can be tested to see if they have been overlaid. An alternative runtime library may be used to better detect storage violations, at the cost of additional overhead.[1] Some programming languages use software bounds checking to prevent these occurrences. Some program debugging software will also detect violations during testing.
https://en.wikipedia.org/wiki/Storage_violation
Steganographic file systems are a kind of file system first proposed by Ross Anderson, Roger Needham, and Adi Shamir. Their paper proposed two main methods of hiding data: in a series of fixed-size files originally consisting of random bits, on top of which "vectors" could be superimposed in such a way as to allow levels of security to decrypt all lower levels but not even know of the existence of any higher levels; or in an entire partition filled with random bits, with files hidden in it. In a steganographic file system using the second scheme, files are not merely stored, nor stored encrypted, but the entire partition is randomized: encrypted files strongly resemble randomized sections of the partition, so when files are stored on the partition, there is no easy way to discern between meaningless gibberish and the actual encrypted files. Furthermore, the locations of files are derived from the key for the files, and the locations are hidden and available only to programs with the passphrase. This leads to the problem that files can very quickly overwrite each other (because of the birthday paradox); this is compensated for by writing all files in multiple places to lessen the chance of data loss. While there may seem to be no point to a file system which is guaranteed either to be grossly inefficient in storage space or to cause data loss and corruption, whether from data collisions or loss of the key (in addition to being a complex system with poor read/write performance), performance was not the goal of StegFS. Rather, StegFS is intended to thwart "rubber-hose attacks", which usually work because encrypted files are distinguishable from regular files, and authorities can coerce the user until the user gives up the keys and all the files are distinguishable as regular files. However, since in a steganographic file system the number of files is unknown and every byte looks like an encrypted byte, the authorities cannot know how many files (and hence keys) are stored.
The user has plausible deniability: he can say there are only a few innocuous files, or none at all, and anybody without the keys cannot gainsay the user. Poul-Henning Kamp has criticized the threat model for steganographic file systems in his paper on GBDE,[1] observing that in certain coercive situations, especially where the searched-for information is in fact not stored in the steganographic file system, it is not possible for a subject to "get off the hook" by proving that all keys have been surrendered. Other methods exist; the method laid out before is the one implemented by StegFS, but it is also possible to steganographically hide data within image files (e.g. PNGDrive) or audio files; ScramDisk or the Linux loop device can do this.[citation needed] Generally, a steganographic file system is implemented over a steganographic layer, which supplies just the storage mechanism. For example, the steganographic file system layer can be some existing MP3 files, with each file containing a chunk of data (or a part of the file system). The final product is a file system that is hardly detected (depending on the steganographic layer) and that can store any kind of file in a regular file system hierarchy. TrueCrypt allows for "hidden volumes": two or more passwords open different volumes in the same file, but only one of the volumes contains secret data.
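The birthday-paradox collision problem mentioned above is easy to quantify. The sketch below is illustrative only (the hash-derived addressing scheme is a simplified stand-in for what such a file system might do, not the actual StegFS design): block locations are derived deterministically from a key, and the collision probability grows quickly with the number of writes:

```python
import hashlib

def block_address(key: bytes, i: int, num_blocks: int) -> int:
    """Derive the i-th block location for a file from its key
    (a toy scheme: hash the key and block index, reduce mod partition size)."""
    digest = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % num_blocks

def collision_probability(n_writes: int, num_blocks: int) -> float:
    """Birthday-paradox probability that at least two of n_writes
    uniformly placed blocks land on the same location."""
    p_distinct = 1.0
    for k in range(n_writes):
        p_distinct *= (num_blocks - k) / num_blocks
    return 1.0 - p_distinct

# only the key holder can recompute where a file's blocks live
addr = block_address(b"passphrase-derived-key", 0, 1_000_000)
assert 0 <= addr < 1_000_000

# about 1,200 writes on a million-block partition already collide
# with probability around one half -- hence the redundant copies
p = collision_probability(1200, 1_000_000)
assert 0.45 < p < 0.58
```

This is why the proposal writes every file in multiple places: any single copy has a substantial chance of being overwritten by a later, unrelated write.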
https://en.wikipedia.org/wiki/Steganographic_file_system
DjVu[a] is a computer file format designed primarily to store scanned documents, especially those containing a combination of text, line drawings, indexed color images, and photographs. It uses technologies such as image layer separation of text and background/images, progressive loading, arithmetic coding, and lossy compression for bitonal (monochrome) images. This allows high-quality, readable images to be stored in a minimum of space, so that they can be made available on the web. DjVu has been promoted as providing smaller files than PDF for most scanned documents.[3] The DjVu developers report that color magazine pages compress to 40–70 kB, black-and-white technical papers compress to 15–40 kB, and ancient manuscripts compress to around 100 kB; a satisfactory JPEG image typically requires 500 kB.[4] Like PDF, DjVu can contain an OCR text layer, making it easy to perform copy-and-paste and text-search operations. The DjVu technology was originally developed by Yann LeCun, Léon Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, and Yoshua Bengio at AT&T Labs from 1996 to 2001.[4] Prior to the standardization of PDF in 2008,[5][6] DjVu was considered superior because it is an open file format,[citation needed] in contrast to the proprietary nature of PDF at the time. The declared higher compression ratio (and thus smaller file size) and the claimed ease of converting large volumes of text into DjVu format were other arguments for DjVu's superiority over PDF in 2004. Independent technologist Brewster Kahle, in a 2004 talk on IT Conversations, discussed the benefits of allowing easier access to DjVu files.[7][8] The DjVu library distributed as part of the open-source package DjVuLibre has become the reference implementation for the DjVu format. DjVuLibre has been maintained and updated by the original developers of DjVu since 2002.[9] The DjVu file format specification has gone through a number of revisions, the most recent being from 2005.
The primary usage of the DjVu format has been the electronic distribution of documents with a quality comparable to that of printed documents. As that niche is also the primary usage for PDF, it was inevitable that the two formats would become competitors. It should, however, be observed that the two formats approach the problem of delivering high-resolution documents in very different ways: PDF primarily encodes graphics and text as vectorised data, whereas DjVu primarily encodes them as pixmap images. This means PDF places the burden of rendering the document on the reader, whereas DjVu places that burden on the creator. For a number of years, significantly overlapping with the period when DjVu was being developed, there were no PDF viewers for free operating systems; a particular stumbling block was the rendering of vectorised fonts, which are essential for combining small file size with high resolution in PDF. Since displaying DjVu was a simpler problem for which free software was available, there were suggestions that the free software movement should employ DjVu instead of PDF for distributing documentation; rendering for creating DjVu is in principle not much different from rendering for a device-specific printer driver, and DjVu can as a last resort be generated from scans of paper media. However, when FreeType 2.0 in 2000 began to provide rendering of all major vectorised font formats, that specific advantage of DjVu began to erode.
In the 2000s, with the growth of the World Wide Web and before widespread adoption of broadband, DjVu was often adopted by digital libraries as their format of choice, thanks to its integration with software like Greenstone[10] and the Internet Archive,[11] browser plugins which allowed advanced online browsing, smaller file size for comparable quality of book scans and other image-heavy documents,[12] and support for embedding and searching full text from OCR.[13][14] Some features such as thumbnail previews were later integrated into the Internet Archive's BookReader,[15] and DjVu browsing was deprecated in its favour as, around 2015, some major browsers stopped supporting NPAPI and DjVu plugins with them.[16] DjVu.js Viewer attempts to replace the missing browser plugins.[citation needed] The DjVu file format is based on the Interchange File Format and is composed of hierarchically organized chunks. The IFF structure is preceded by a 4-byte AT&T magic number. Following it is a single FORM chunk with a secondary identifier of either DJVU or DJVM for a single-page or a multi-page document, respectively. All the chunks can be contained in a single file in the case of so-called bundled documents, or can be spread over several files: one file for every page plus some files with shared chunks. DjVu divides a single image into many different images, then compresses them separately. To create a DjVu file, the initial image is first separated into three images: a background image, a foreground image, and a mask image. The background and foreground images are typically lower-resolution color images (e.g., 100 dpi); the mask image is a high-resolution bilevel image (e.g., 300 dpi) and is typically where the text is stored. The background and foreground images are then compressed using a wavelet-based compression algorithm named IW44.[4] The mask image is compressed using a method called JB2 (similar to JBIG2).
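The container layout just described (a 4-byte AT&T magic number, then an IFF FORM chunk tagged DJVU or DJVM) is simple enough to check by hand. A minimal sketch, assuming only the byte layout stated above:

```python
import struct

def djvu_document_type(data: bytes) -> str:
    """Return 'DJVU' (single-page) or 'DJVM' (multi-page) for a DjVu header.

    Layout: bytes 0-3 are the AT&T magic number, bytes 4-7 the IFF 'FORM'
    chunk identifier, bytes 8-11 a big-endian chunk length, and bytes
    12-15 the secondary identifier."""
    if data[:4] != b"AT&T":
        raise ValueError("missing AT&T magic number; not a DjVu file")
    chunk_id, _length = struct.unpack(">4sI", data[4:12])
    if chunk_id != b"FORM":
        raise ValueError("expected an IFF FORM chunk")
    secondary = data[12:16]
    if secondary not in (b"DJVU", b"DJVM"):
        raise ValueError("unrecognized document type %r" % secondary)
    return secondary.decode("ascii")
```

A real parser would go on to walk the nested chunks (pages, shared shape dictionaries, and so on); this sketch stops at the outermost identifier.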
The JB2 encoding method identifies nearly identical shapes on the page, such as multiple occurrences of a particular character in a given font, style, and size. It compresses the bitmap of each unique shape separately, and then encodes the locations where each shape appears on the page. Thus, instead of compressing a letter "e" in a given font multiple times, it compresses the letter "e" once (as a compressed bit image) and then records every place on the page it occurs. Optionally, these shapes may be mapped to UTF-8 codes (either by hand or potentially by a text recognition system) and stored in the DjVu file. If this mapping exists, it is possible to select and copy text. Since JB2 (also called DjVuBitonal) is a variation on JBIG2, working on the same principles,[17] both compression methods have the same problems when performing lossy compression. In 2013 it emerged that Xerox photocopiers and scanners had been substituting digits for similar-looking ones, for example replacing a 6 with an 8.[18] A DjVu document has been spotted in the wild with character substitutions, such as an n with bleeding serifs turning into a u and an o with a spot inside turning into an e.[19] Whether lossy compression has occurred is not stored in the file.[1] Thus the DjView viewing application cannot warn the user that glyph substitutions might have occurred, neither when opening a lossily compressed file nor in the Information or Metadata dialogue boxes.[20] DjVu is an open file format with patents.[3] The file format specification is published, as well as source code for the reference library.[3] The original authors distribute an open-source implementation named "DjVuLibre" under the GNU General Public License and a patent grant.[21] The rights to the commercial development of the encoding software have been transferred to different companies over the years, including AT&T Corporation, LizardTech,[22] Celartem[23] and ePapyrus Solutions K.K. (formerly Cuminas[24] before joining ePapyrus Solutions, Inc.[25]).[26] Patents typically have an expiry term of about 20 years. Celartem acquired LizardTech and Extensis.[27][28][23][29][30] The selection of downloadable DjVu viewers is wider on Linux distributions than it is on Windows or macOS. Additionally, the format is rarely supported by proprietary scanning software. Free creators, manipulators, converters, web browser plug-ins, and desktop viewers are available.[2] DjVu is supported by a number of multi-format document viewers and e-book reader software on Linux (Okular, Evince, Zathura), Windows (Okular, SumatraPDF), and Android (Document Viewer,[31] FBReader, EBookDroid, PocketBook). In 2002, the DjVu file format was chosen by the Internet Archive as a format in which its Million Book Project provides scanned public-domain books online (along with TIFF and PDF).[32] In February 2016, the Internet Archive announced that DjVu would no longer be used for new uploads, among other reasons citing the format's declining use and the difficulty of maintaining their Java-applet-based viewer for the format.[16] Wikimedia Commons, a media repository used by Wikipedia among others, conditionally permits PDF and DjVu media files.[33]
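The shape-dictionary idea behind JB2, described earlier, can be illustrated with a toy sketch: store each distinct glyph bitmap once and record only (shape index, position) records for every occurrence. The real codec additionally matches nearly identical shapes, which is where the glyph-substitution errors discussed above can creep in; this sketch only deduplicates exact matches.

```python
def jb2_style_dictionary(glyphs):
    """Toy JB2-style symbol coding. `glyphs` is a list of (bitmap, x, y)
    tuples, where a bitmap is a hashable tuple of pixel rows. Each distinct
    bitmap is stored once; every occurrence becomes a (shape_index, x, y)
    placement record."""
    shapes, index, placements = [], {}, []
    for bitmap, x, y in glyphs:
        if bitmap not in index:
            index[bitmap] = len(shapes)
            shapes.append(bitmap)
        placements.append((index[bitmap], x, y))
    return shapes, placements
```

A page containing the letter "e" a thousand times thus pays for the "e" bitmap once, plus a thousand small placement records.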
https://en.wikipedia.org/wiki/DjVu
In linguistics, binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents.[citation needed] For instance, in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding has been a major area of research in syntax and semantics since the 1970s and, as the name implies, is a core component of government and binding theory.[1] The following sentences illustrate some basic facts of binding. The words that bear the index i should be construed as referring to the same person or thing.[2] These sentences illustrate some aspects of the distribution of reflexive and personal pronouns. In the first pair of sentences, the reflexive pronoun must appear for the indicated reading to be possible. In the second pair, the personal pronoun must appear for the indicated reading to be possible. The third pair shows that at times a personal pronoun must follow its antecedent, and the fourth pair further illustrates the same point, although the acceptability judgement is not as robust. Based on such data, one sees that reflexive and personal pronouns differ in their distribution and that linear order (of a pronoun in relation to its antecedent or postcedent) is a factor influencing where at least some pronouns can appear. A theory of binding should be capable of predicting and explaining the differences in distribution seen in sentences like these. It should be able to answer questions like: What explains where a reflexive pronoun must appear as opposed to a personal pronoun? When does linear order play a role in determining where pronouns can appear?
What other factor (or factors) beyond linear order help predict where pronouns can appear? The following three subsections consider the binding domains that are relevant for the distribution of pronouns and nouns in English. The discussion follows the outline provided by the traditional binding theory (see below), which divides nominals into three basic categories: reflexive and reciprocal pronouns, personal pronouns, and nouns (common and proper).[3] When one examines the distribution of reflexive pronouns and reciprocal pronouns (which are often subsumed under the general category of "anaphor"), one sees that there are certain domains that are relevant, a "domain" being a syntactic unit that is clause-like. Reflexive and reciprocal pronouns often seek their antecedent close by, in a binding domain that is local, e.g. These examples illustrate that there is a domain within which a reflexive or reciprocal pronoun should find its antecedent. The a-sentences are fine because the reflexive or reciprocal pronoun has its antecedent within the clause. The b-sentences, in contrast, do not allow the indicated reading, a fact illustrating that personal pronouns have a distribution that is different from that of reflexive and reciprocal pronouns. A related observation is that a reflexive or reciprocal pronoun often cannot seek its antecedent in a superordinate clause, e.g. When the reflexive or reciprocal pronoun attempts to find an antecedent outside of the immediate clause containing it, it fails. In other words, it can hardly seek its antecedent in the superordinate clause. The binding domain that is relevant is the immediate clause containing it. Personal pronouns have a distribution that is different from reflexive and reciprocal pronouns, a point that is evident with the first two b-sentences in the previous section.
The local binding domain that is decisive for the distribution of reflexive and reciprocal pronouns is also decisive for personal pronouns, but in a different way. Personal pronouns seek their antecedent outside of the local binding domain containing them, e.g. In these cases, the pronoun has to look outside of the embedded clause containing it to the matrix clause to find its antecedent. Hence, based on such data, the relevant binding domain appears to be the clause. Further data illustrate, however, that the clause is actually not the relevant domain: Since the pronouns appear within the same minimal clause containing their antecedents in these cases, one cannot argue that the relevant binding domain is the clause. The most one can say based on such data is that the domain is "clause-like". The distribution of common and proper nouns is unlike that of reflexive, reciprocal, and personal pronouns. The relevant observation in this regard is that a noun is often reluctantly coreferential with another nominal that is within its binding domain or in a superordinate binding domain, e.g. The readings indicated in the a-sentences are natural, whereas the b-sentences are very unusual. Indeed, sentences like these b-sentences were judged to be impossible in the traditional binding theory according to Condition C (see below). Given a contrastive context, however, the b-sentences can work, e.g. Susan does not admire Jane, but rather Susanᵢ admires Susanᵢ. One can therefore conclude that nouns are not sensitive to binding domains in the same way that reflexive, reciprocal, and personal pronouns are. The following subsections illustrate the extent to which pure linear order impacts the distribution of pronouns. While linear order is clearly important, it is not the only factor influencing where pronouns can appear. A simple hypothesis concerning the distribution of many anaphoric elements, of personal pronouns in particular, is that linear order plays a role.
In most cases, a pronoun follows its antecedent, and in many cases, the coreferential reading is impossible if the pronoun precedes its antecedent. The following sentences suggest that pure linear order can indeed be important for the distribution of pronouns: While the coreferential readings indicated in these b-sentences are possible, they are unlikely. The order presented in the a-sentences is strongly preferred. The following, more extensive data sets further illustrate that linear order is important: While the acceptability judgements here are nuanced, one can make a strong case that pure linear order is at least in part predictive of when the indicated reading is available. The a- and c-sentences allow the coreferential reading more easily than their b- and d-counterparts. While linear order is an important factor influencing the distribution of pronouns, it is not the only factor. The following sentences are similar to the c- and d-sentences in the previous section insofar as an embedded clause is present. While there may be a mild preference for the order in the a-sentences here, the indicated reading in the b-sentences is also available. Hence linear order is hardly playing a role in such cases. The relevant difference between these sentences and the c- and d-sentences in the previous section is that the embedded clauses here are adjunct clauses, whereas they are argument clauses above. The following examples involve adjunct phrases:[4] The fact that the c-sentences marginally allow the indicated reading whereas the b-sentences do not at all allow this reading further demonstrates that linear order is important. But in this regard, the d-sentences are telling, since if linear order were the entire story, one would expect the d-sentences to be less acceptable than they are. The conclusion that one can draw from such data is that there are one or more other factors beyond linear order that are impacting the distribution of pronouns.
Given that linear order is not the only factor influencing the distribution of pronouns, the question is what other factor or factors might also be playing a role. The traditional binding theory (see below) took c-command to be the all-important factor, but the importance of c-command for syntactic theorizing has been extensively criticized in recent years.[5] The primary alternative to c-command is functional rank. These two competing concepts (c-command vs. rank) have been debated extensively, and they continue to be debated. C-command is a configurational notion; it is defined over concrete syntactic configurations. Syntactic rank, in contrast, is a functional notion that resides in the lexicon; it is defined over the ranking of the arguments of predicates. Subjects are ranked higher than objects, first objects are ranked higher than second objects, and prepositional objects are ranked lowest. The following two subsections briefly consider these competing notions. C-command is a configurational notion that acknowledges the syntactic configuration as primitive. Basic subject-object asymmetries, which are numerous in many languages, are explained by the fact that the subject appears outside of the finite verb phrase (VP) constituent, whereas the object appears inside it. Subjects therefore c-command objects, but not vice versa. C-command is defined as follows: Given the binary division of the clause (S → NP + VP) associated with most phrase structure grammars, this definition sees a typical subject c-commanding everything inside the verb phrase (VP), whereas everything inside the VP is incapable of c-commanding anything outside of the VP. Some basic binding facts are explained in this manner, e.g. Sentence a is fine because the subject Larry c-commands the object himself, whereas sentence b does not work because the object Larry does not c-command the subject himself.
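The configurational definition can be made concrete with a small amount of code. The sketch below uses an invented tree encoding (nested tuples whose first element is a category label) and the standard branching-node formulation: A c-commands B if and only if neither dominates the other and the first branching node dominating A also dominates B.

```python
def path_to(tree, target, path=()):
    """Path of child indices from the root to the leaf `target`, or None."""
    if tree == target:
        return path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):  # tree[0] is the label
            found = path_to(child, target, path + (i,))
            if found is not None:
                return found
    return None

def subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def c_commands(tree, a, b):
    """True iff leaf `a` c-commands leaf `b` in `tree`."""
    pa, pb = path_to(tree, a), path_to(tree, b)
    if pa is None or pb is None:
        return False
    if pb[:len(pa)] == pa or pa[:len(pb)] == pb:
        return False  # identity, or one dominates the other
    k = len(pa) - 1
    while k > 0 and len(subtree(tree, pa[:k])) <= 2:  # skip non-branching nodes
        k -= 1
    return pb[:k] == pa[:k]  # the branching node above `a` dominates `b`

# "Larry admires himself", with the binary S -> NP + VP division:
tree = ("S", ("NP", "Larry"), ("VP", ("V", "admires"), ("NP", "himself")))
```

Here `c_commands(tree, "Larry", "himself")` is true while `c_commands(tree, "himself", "Larry")` is false, mirroring the contrast between sentences a and b above.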
The assumption has been that, within its binding domain, a reflexive pronoun must be c-commanded by its antecedent. While this approach based on c-command makes a correct prediction much of the time, there are other cases where it fails to make the correct prediction, e.g. The reading indicated is acceptable in this case, but if c-command were the key notion helping to explain where the reflexive can and must appear, then the reading should be impossible, since himself is not c-commanded by Larry.[7] As reflexive and personal pronouns occur in complementary distribution, the notion of c-command can also be used to explain where personal pronouns can appear. The assumption is that personal pronouns cannot c-command their antecedent, e.g. In both examples, the personal pronoun she does not c-command its antecedent Alice, resulting in the grammaticality of both sentences despite reversed linear order. The alternative to a c-command approach posits a ranking of syntactic functions (SUBJECT > FIRST OBJECT > SECOND OBJECT > PREPOSITIONAL OBJECT).[8] Subject-object asymmetries are addressed in terms of this ranking. Since subjects are ranked higher than objects, an object can have the subject as its antecedent, but not vice versa. With basic cases, this approach makes the same prediction as the c-command approach. The first two sentences from the previous section are repeated here: Since the subject outranks the object, sentence a is predictably acceptable, the subject Larry outranking the object himself. Sentence b, in contrast, is bad because the subject reflexive pronoun himself outranks its postcedent Larry. In other words, this approach in terms of rank assumes that, within its binding domain, a reflexive pronoun may not outrank its antecedent (or postcedent). Consider the third example sentence from the previous section in this regard: The approach based on rank does not require a particular configurational relationship to hold between a reflexive pronoun and its antecedent.
In other words, it makes no prediction in this case, and hence does not make an incorrect prediction. The reflexive pronoun himself is embedded within the subject noun phrase, which means that it is not the subject and hence does not outrank the object Larry. A theory of binding that acknowledges both linear order and rank can at least begin to predict many of the marginal readings.[9] When both linear order and rank combine, acceptability judgments are robust, e.g. This ability to address marginal readings is something that an approach combining linear order and rank can accomplish, whereas an approach that acknowledges only c-command cannot do the same. The exploration of binding phenomena got started in the 1970s, and interest peaked in the 1980s with Government and Binding Theory, a grammar framework in the tradition of generative syntax that is still prominent today.[10] The theory of binding that became widespread at that time now serves merely as a reference point (since it is no longer believed to be correct[why?]). This theory distinguishes between three different binding conditions: A, B, and C. The theory classifies nominals according to two features, [±anaphor] and [±pronominal], which are binary. The binding characteristics of a nominal are determined by the values of these features, either plus or minus. Thus, a nominal that is [-anaphor, -pronominal] is an R-expression (referring expression), such as a common noun or a proper name. A nominal that is [-anaphor, +pronominal] is a pronoun, such as he or they, and a nominal that is [+anaphor, -pronominal] is a reflexive pronoun, such as himself or themselves.[clarification needed] Note that the term anaphor here is being used in a specialized sense; it essentially means "reflexive".
This meaning is specific to the Government and Binding framework and has not spread beyond this framework.[11] Based on the classifications according to these two features, three conditions are formulated: While the theory of binding that these three conditions represent is no longer held to be valid[why?], as mentioned above, the associations with the three conditions are so firmly anchored in the study of binding that one often refers to, for example, "Condition A effects" or "Condition B effects" when describing binding phenomena.
https://en.wikipedia.org/wiki/Binding_(linguistics)
ANT (short for Adaptive Network Topology) is a proprietary (but open-access) multicast wireless sensor network technology designed and marketed by ANT Wireless (a division of Garmin Canada).[1] It provides personal area networks (PANs), primarily for activity trackers. ANT was introduced by Dynastream Innovations in 2003, followed by the low-power standard ANT+ in 2004, before Dynastream was bought by Garmin in 2006.[2] ANT defines a wireless communications protocol stack that enables hardware operating in the 2.4 GHz ISM band to communicate by establishing standard rules for co-existence, data representation, signalling, authentication, and error detection.[3] It is conceptually similar to Bluetooth Low Energy, but is oriented towards use with sensors. As of November 2020,[update] the ANT website lists almost 200 brands using ANT technology.[4] Samsung and, to a lesser extent, Fujitsu, HTC, Kyocera, Nokia and Sharp added native support (without the use of a USB adapter) to their smartphones, with Samsung starting support with the Galaxy S4 and ending support with the Galaxy S20 line.[5][6][7] ANT-powered nodes are capable of acting as sources or sinks within a wireless sensor network concurrently. This means the nodes can act as transmitters, receivers, or transceivers to route traffic to other nodes. In addition, every node is capable of determining when to transmit based on the activity of its neighbors.[3] ANT can be configured to spend long periods in a low-power sleep mode (drawing current on the order of microamperes), wake up briefly to communicate (when current rises to a peak of 22 milliamperes (at −5 dB) during reception and 13.5 milliamperes (at −5 dB) during transmission)[8] and return to sleep mode. Average current draw for low message rates is less than 60 microamperes on the nRF24AP1 chip.[8] The newer nRF24AP2 has improved on these figures.[9] ANT is considered a network/transport layer protocol.
The underlying link-layer protocol is ShockBurst,[10] which is used in many other Nordic Semiconductor "nRF" chips, such as those used with Arduino.[11] ANT uses ShockBurst at 1 Mbps with GFSK modulation, translating to a 1 MHz bandwidth and resulting in 126 available radio channels over the ISM band.[10] ANT channels are separate from the underlying ShockBurst RF channels. They are identified simply by a channel number built into the packet,[12] and on the nRF24AP2, 78 channels can be used.[9] Each ANT channel consists of one or more transmitting nodes and one or more receiving nodes, depending on the network topology. Any node can transmit or receive, so the channels are bi-directional.[13] Newer versions of ANT can back one ANT channel with several RF channels through frequency agility.[12] The underlying RF channel is only half-duplex, meaning only one node can transmit at a time. The underlying radio chip can also only choose to transmit or receive at any given moment.[11] As a result, the ANT channel is controlled by a time-division multiple access (TDMA) scheme. A "master" node controls the timing, while the "slave" nodes use the master node's transmission to determine when they can transmit.[9] ANT accommodates three types of messaging: broadcast, acknowledged, and burst. ANT was designed for low-bit-rate and low-power sensor networks, in a manner conceptually similar to (but not compatible with) Bluetooth Low Energy.[3] This is in contrast with normal Bluetooth, which was designed for relatively high-bit-rate applications such as streaming sound for low-power headsets. ANT uses adaptive isochronous transmission[14] to allow many ANT devices to communicate concurrently without interference from one another, unlike Bluetooth LE, which supports an unlimited number of nodes through scatternets and broadcasting between devices.
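The master/slave timing scheme can be illustrated with a toy calculation. This is not the actual ANT slot-assignment algorithm; the round-robin offsets below are invented for the example, and only the 150 μs transmission time and the notion of a messaging period come from the description in this article.

```python
def slot_count(period_s: float, tx_time_s: float = 150e-6) -> int:
    """Non-overlapping transmission slots that fit in one messaging period."""
    return int(period_s // tx_time_s)

def slot_offset(node_id: int, period_s: float, tx_time_s: float = 150e-6) -> float:
    """Invented round-robin assignment: node i transmits i slots after the
    master's transmission, which the slaves use as their timing reference."""
    return (node_id % slot_count(period_s, tx_time_s)) * tx_time_s
```

At a 4 Hz message rate (a 0.25 s period), `slot_count(0.25)` is 1666, which is why many independent ANT channels and devices can interleave on a single RF channel without colliding.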
Burst – 20 kbit/s.[16] Advanced burst – 60 kbit/s.[16] Bluetooth, Wi-Fi, and Zigbee employ frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS) schemes to maintain the integrity of the wireless link.[18] ANT uses an adaptive isochronous network technology to ensure coexistence with other ANT devices. This scheme provides the ability for each transmission to occur in an interference-free time slot within the defined frequency band. The radio transmits for less than 150 μs per message, allowing a single channel to be divided into hundreds of time slots. The ANT messaging period (the time between each node transmitting its data) determines how many time slots are available.[citation needed] ANT+, introduced in 2004 as "the first ultra low power wireless standard",[2] is an interoperability function that can be added to the base ANT protocol. This standardization allows the networking of nearby ANT+ devices to facilitate the open collection and interpretation of sensor data. For example, ANT+-enabled fitness monitoring devices such as heart-rate monitors, pedometers, speed monitors, and weight scales can all work together to assemble and track performance metrics.[19] ANT+ is designed and maintained by the ANT+ Alliance, which is managed by ANT Wireless, a division of Dynastream Innovations, owned by Garmin.[20] ANT+ is used in Garmin's line of fitness monitoring equipment. It is also used by Garmin's Chirp, a geocaching device, for logging and alerting nearby participants.[21] ANT+ devices require certification from the ANT+ Alliance to ensure compliance with standard device profiles. Each device profile has an icon which may be used to visually match interoperable devices sharing the same device profiles.[4] The ANT+ specification is publicly available. At DEF CON 2019, hacker Brad Dixon demonstrated a tool to modify ANT+ data transmitted through USB for cheating in virtual cycling.[22]
https://en.wikipedia.org/wiki/ANT_(network)
A stock market, equity market, or share market is the aggregation of buyers and sellers of stocks (also called shares), which represent ownership claims on businesses; these may include securities listed on a public stock exchange as well as stock that is only traded privately, such as shares of private companies that are sold to investors through equity crowdfunding platforms. Investments are usually made with an investment strategy in mind. The total market capitalization of all publicly traded stocks worldwide rose from US$2.5 trillion in 1980 to US$111 trillion by the end of 2023.[1] As of 2016,[update] there are 60 stock exchanges in the world. Of these, there are 16 exchanges with a market capitalization of $1 trillion or more, and they account for 87% of global market capitalization. Apart from the Australian Securities Exchange, these 16 exchanges are all in North America, Europe, or Asia.[2] By country, the largest stock markets as of January 2022 are in the United States of America (about 59.9%), followed by Japan (about 6.2%) and the United Kingdom (about 3.9%).[3] A stock exchange is an exchange (or bourse) where stockbrokers and traders can buy and sell shares (equity stock), bonds, and other securities. Many large companies have their stocks listed on a stock exchange. This makes the stock more liquid and thus more attractive to many investors. The exchange may also act as a guarantor of settlement. These and other stocks may also be traded "over the counter" (OTC), that is, through a dealer. Some large companies will have their stock listed on more than one exchange in different countries, so as to attract international investors.[4] Stock exchanges may also cover other types of securities, such as fixed-interest securities (bonds) or (less frequently) derivatives, which are more likely to be traded OTC. Trade in stock markets means the transfer (in exchange for money) of a stock or security from a seller to a buyer.
This requires these two parties to agree on a price. Equities (stocks or shares) confer an ownership interest in a particular company. Participants in the stock market range from small individual stock investors to larger investors, who can be based anywhere in the world, and may include banks, insurance companies, pension funds and hedge funds. Their buy or sell orders may be executed on their behalf by a stock exchange trader. Some exchanges are physical locations where transactions are carried out on a trading floor, by a method known as open outcry. This method is used in some stock exchanges and commodities exchanges, and involves traders shouting bid and offer prices. The other type of stock exchange has a network of computers where trades are made electronically. An example of such an exchange is the NASDAQ. A potential buyer bids a specific price for a stock, and a potential seller asks a specific price for the same stock. Buying or selling at the market means accepting any ask price or bid price for the stock. When the bid and ask prices match, a sale takes place, on a first-come, first-served basis if there are multiple bidders at a given price. The purpose of a stock exchange is to facilitate the exchange of securities between buyers and sellers, thus providing a marketplace. The exchanges provide real-time trading information on the listed securities, facilitating price discovery. The New York Stock Exchange (NYSE) is a physical exchange, with a hybrid market for placing orders electronically from any location as well as on the trading floor. Orders executed on the trading floor enter by way of exchange members and flow down to a floor broker, who submits the order electronically to the floor trading post for the designated market maker ("DMM") for that stock to trade the order. The DMM's job is to maintain a two-sided market, making orders to buy and sell the security when there are no other buyers or sellers.
If a bid–ask spread exists, no trade immediately takes place – in this case, the DMM may use their own resources (money or stock) to close the difference. Once a trade has been made, the details are reported on the "tape" and sent back to the brokerage firm, which then notifies the investor who placed the order. Computers play an important role, especially for program trading. The NASDAQ is an electronic exchange, where all of the trading is done over a computer network. The process is similar to the New York Stock Exchange. One or more NASDAQ market makers will always provide a bid and ask price at which they will purchase or sell 'their' stock. The Paris Bourse, now part of Euronext, is an order-driven, electronic stock exchange. It was automated in the late 1980s. Prior to the 1980s, it consisted of an open outcry exchange. Stockbrokers met on the trading floor of the Palais Brongniart. In 1986, the CATS trading system was introduced, and the order matching system was fully automated. People trading stock will prefer to trade on the most popular exchange since this gives the largest number of potential counterparties (buyers for a seller, sellers for a buyer) and probably the best price. However, there have always been alternatives, such as brokers trying to bring parties together to trade outside the exchange. Some third markets that were popular are Instinet, and later Island and Archipelago (the latter two have since been acquired by Nasdaq and NYSE, respectively). One advantage is that this avoids the commissions of the exchange.
However, it also has problems such as adverse selection.[5] Financial regulators have probed dark pools.[6][7] Market participants include individual retail investors, institutional investors (e.g., pension funds, insurance companies, mutual funds, index funds, exchange-traded funds, hedge funds, investor groups, banks and various other financial institutions), and also publicly traded corporations trading in their own shares. Robo-advisors, which automate investment for individuals, are also major participants. In 2021, the value of world stock markets experienced an increase of 26.5%, amounting to US$22.3 trillion. Developing economies contributed US$9.9 trillion and developed economies US$12.4 trillion. Asia and Oceania accounted for 45%, Europe had 37%, and America had 16%, while Africa had 2% of the global market.[8] Factors such as high trading prices, market ratings, information about stock exchange dynamics, and financial institutions can influence individual and corporate participation in stock markets. Additionally, the appeal of stock ownership, driven by the potential for higher returns compared to other financial instruments, plays a crucial role in attracting individuals to invest in the stock market. Regional and country-specific factors can also impact stock market participation rates. For example, in the United States, stock market participation rates vary widely across states, with regional factors potentially influencing these disparities.
It is noted that individual participation costs alone cannot explain such large differences in participation rates from state to state, indicating the presence of other regional factors at play.[9] Behavioral factors are recognized as significant influences on stock market participation, as evidenced by the low participation rates observed in the Ghanaian stock market.[10] Factors such as factor endowments, geography, political stability,liberal trade policies, foreign direct investment inflows, and domestic industrial capacity are also identified as important in determining participation.[11] Indirect investment involves owning shares indirectly, such as via a mutual fund or an exchange traded fund. Direct investment involves direct ownership of shares.[12] Direct ownership of stock by individuals rose slightly from 17.8% in 1992 to 17.9% in 2007, with the median value of these holdings rising from $14,778 to $17,000.[13][14]Indirect participation in the form of retirement accounts rose from 39.3% in 1992 to 52.6% in 2007, with the median value of these accounts more than doubling from $22,000 to $45,000 in that time.[13][14]Rydqvist, Spizman, andStrebulaevattribute the differential growth in direct and indirect holdings to differences in the way each are taxed in the United States. Investments in pension funds and 401ks, the two most common vehicles of indirect participation, are taxed only when funds are withdrawn from the accounts. Conversely, the money used to directly purchase stock is subject to taxation as are any dividends or capital gains they generate for the holder. In this way, the current tax code incentivizes individuals to invest indirectly.[15] Rates of participation and the value of holdings differ significantly across strata of income. 
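The tax incentive described above can be made concrete with a toy calculation comparing a directly held position (gains taxed every year) against a tax-deferred retirement account (taxed once at withdrawal). The rates below (7% return, 25% tax) are hypothetical, and the deferred case is simplified to taxing only the gain; this is a sketch of the incentive, not a tax model.

```python
def taxed_annually(principal, rate, tax, years):
    # Direct ownership, simplified: each year's gain is taxed at `tax`,
    # so the effective compounding rate is rate * (1 - tax).
    for _ in range(years):
        principal *= 1 + rate * (1 - tax)
    return principal

def tax_deferred(principal, rate, tax, years):
    # 401(k)-style deferral, simplified: gains compound untaxed,
    # and only the accumulated gain is taxed at withdrawal.
    value = principal * (1 + rate) ** years
    return principal + (value - principal) * (1 - tax)

p, r, t, n = 10_000, 0.07, 0.25, 30
print(round(taxed_annually(p, r, t, n), 2))   # ~46,423
print(round(tax_deferred(p, r, t, n), 2))     # ~59,592
```

Deferral wins because the money that would have gone to taxes each year stays invested and compounds, which is consistent with the differential growth of indirect holdings noted above.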
In the bottom quintile of income, 5.5% of households directly own stock and 10.7% hold stocks indirectly in the form of retirement accounts.[14]The top decile of income has a direct participation rate of 47.5% and an indirect participation rate in the form of retirement accounts of 89.6%.[14]The median value of directly owned stock in the bottom quintile of income is $4,000 and is $78,600 in the top decile of income as of 2007.[16]The median value of indirectly held stock in the form of retirement accounts for the same two groups in the same year is $6,300 and $214,800 respectively.[16]Since the Great Recession of 2008 households in the bottom half of theincome distributionhave lessened their participation rate both directly and indirectly from 53.2% in 2007 to 48.8% in 2013, while over the same period households in the top decile of the income distribution slightly increased participation 91.7% to 92.1%.[17]The mean value of direct and indirect holdings at the bottom half of the income distribution moved slightly downward from $53,800 in 2007 to $53,600 in 2013.[17]In the top decile, mean value of all holdings fell from $982,000 to $969,300 in the same time.[17]The mean value of all stock holdings across the entire income distribution is valued at $269,900 as of 2013.[17] The racial composition of stock market ownership shows households headed by whites are nearly four and six times as likely to directly own stocks than households headed by blacks and Hispanics respectively. As of 2011 the national rate of direct participation was 19.6%, for white households the participation rate was 24.5%, for black households it was 6.4% and for Hispanic households it was 4.3%. Indirect participation in the form of 401k ownership shows a similar pattern with a national participation rate of 42.1%, a rate of 46.4% for white households, 31.7% for black households, and 25.8% for Hispanic households. 
Households headed by married couples participated at rates above the national averages, with 25.6% participating directly and 53.4% participating indirectly through a retirement account. 14.7% of households headed by men participated in the market directly and 33.4% owned stock through a retirement account. 12.6% of female-headed households directly owned stock and 28.7% owned stock indirectly.[14] A 2003 paper by Vissing-Jørgensen attempts to explain disproportionate rates of participation along wealth and income groups as a function of fixed costs associated with investing. Her research concludes that a fixed cost of $200 per year is sufficient to explain why nearly half of all U.S. households do not participate in the market.[18] Participation rates have been shown to correlate strongly with education levels, supporting the hypothesis that the information and transaction costs of market participation are better absorbed by more educated households. Behavioral economists Harrison Hong, Jeffrey Kubik and Jeremy Stein suggest that sociability and the participation rates of communities have a statistically significant impact on an individual's decision to participate in the market. Their research indicates that social individuals living in states with higher-than-average participation rates are 5% more likely to participate than individuals who do not share those characteristics.[19] This phenomenon is also explained in cost terms: knowledge of market functioning diffuses through communities and consequently lowers the transaction costs associated with investing. In 12th-century France, the courtiers de change were concerned with managing and regulating the debts of agricultural communities on behalf of the banks. Because these men also traded with debts, they could be called the first brokers.
The Italian historian Lodovico Guicciardini described how, in late 13th-century Bruges, commodity traders gathered outdoors at a market square containing an inn owned by a family called Van der Beurze, and in 1409 they became the "Brugse Beurse", institutionalizing what had been, until then, an informal meeting.[20] The idea quickly spread around Flanders and neighboring countries and "Beurzen" soon opened in Ghent and Rotterdam. International traders, and especially the Italian bankers, present in Bruges since the early 13th century, took the word back to their countries to define the place for stock market exchange: first the Italians (Borsa), but soon also the French (Bourse), the Germans (Börse), Russians (birža), Czechs (burza), Swedes (börs), Danes and Norwegians (børs). In most languages, the word coincides with that for money bag, dating back to the Latin bursa, from which the name of the Van der Beurse family also derives. In the middle of the 13th century, Venetian bankers began to trade in government securities. In 1351 the Venetian government outlawed spreading rumors intended to lower the price of government funds. Bankers in Pisa, Verona, Genoa and Florence also began trading in government securities during the 14th century. This was only possible because these were independent city-states, ruled not by a duke but by a council of influential citizens. Italian companies were also the first to issue shares. Companies in England and the Low Countries followed in the 16th century.
Around this time, a joint-stock company – one whose stock is owned jointly by the shareholders – emerged and became important for the colonization of what Europeans called the "New World".[21] There are now stock markets in virtually every developed and most developing economies, with the world's largest markets being in the United States, United Kingdom, Japan, India, China, Canada, Germany, France, South Korea and the Netherlands.[22] As the economist Murray Rothbard recalled: Even in the days before perestroika, socialism was never a monolith. Within the Communist countries, the spectrum of socialism ranged from the quasi-market, quasi-syndicalist system of Yugoslavia to the centralized totalitarianism of neighboring Albania. One time I asked Professor von Mises, the great expert on the economics of socialism, at what point on this spectrum of statism would he designate a country as "socialist" or not. At that time, I wasn't sure that any definite criterion existed to make that sort of clear-cut judgment. And so I was pleasantly surprised at the clarity and decisiveness of Mises's answer. "A stock market," he answered promptly. "A stock market is crucial to the existence of capitalism and private property. For it means that there is a functioning market in the exchange of private titles to the means of production. There can be no genuine private ownership of capital without a stock market: there can be no true socialism if such a market is allowed to exist." The stock market is one of the most important ways for companies to raise money, along with debt markets, which are generally more imposing but do not trade publicly.[24] This allows businesses to be publicly traded, and to raise additional financial capital for expansion by selling shares of ownership of the company in a public market. The liquidity that an exchange affords the investors enables their holders to quickly and easily sell securities.
This is an attractive feature of investing in stocks, compared to other less liquid investments such aspropertyand other immoveable assets. History has shown that the price ofstocksand other assets is an important part of the dynamics of economic activity, and can influence or be an indicator of social mood. An economy where the stock market is on the rise is considered to be an up-and-coming economy. The stock market is often considered the primary indicator of a country's economic strength and development.[25] Rising share prices, for instance, tend to be associated with increased business investment and vice versa. Share prices also affect the wealth of households and their consumption. Therefore,central bankstend to keep an eye on the control and behavior of the stock market and, in general, on the smooth operation offinancial systemfunctions. Financial stability is theraison d'êtreof central banks.[26] Exchanges also act as the clearinghouse for each transaction, meaning that they collect and deliver the shares, and guarantee payment to the seller of a security. This eliminates the risk to an individual buyer or seller that thecounterpartycould default on the transaction.[27] The smooth functioning of all these activities facilitateseconomic growthin that lower costs and enterprise risks promote the production of goods and services as well as possibly employment. 
In this way the financial system is assumed to contribute to increased prosperity, although some controversy exists as to whether the optimal financial system is bank-based or market-based.[28] Events such as the 2008 financial crisis have prompted a heightened degree of scrutiny of the impact of the structure of stock markets[29][30] (called market microstructure), in particular on the stability of the financial system and the transmission of systemic risk.[31] One transformation is the move to electronic trading to replace human trading of listed securities.[30] Changes in stock prices are mostly caused by external factors such as socioeconomic conditions, inflation, and exchange rates. Intellectual capital does not affect a company stock's current earnings, but it does contribute to a stock's return growth.[32] The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information at the current time. The 'hard' efficient-market hypothesis does not explain the cause of events such as the crash in 1987, when the Dow Jones Industrial Average plummeted 22.6 percent – the largest-ever one-day fall in the United States.[33] This event demonstrated that share prices can fall dramatically even though no generally agreed upon definite cause has been found: a thorough search failed to detect any 'reasonable' development that might have accounted for the crash. (Such events are predicted to occur strictly by randomness, although very rarely.) It seems also to be true more generally that many price movements (beyond those which are predicted to occur 'randomly') are not occasioned by new information; a study of the fifty largest one-day share price movements in the United States in the post-war period seems to confirm this.[33] A 'soft' EMH has emerged which does not require that prices remain at or near equilibrium, but only that market participants cannot systematically profit from any momentary 'market anomaly'.
Moreover, while EMH predicts that all price movement (in the absence of change in fundamental information) is random (i.e. non-trending),[34] many studies have shown a marked tendency for the stock market to trend over time periods of weeks or longer. Various explanations for such large and apparently non-random price movements have been promulgated. For instance, some research has shown that changes in estimated risk, and the use of certain strategies, such as stop-loss limits and value-at-risk limits, theoretically could cause financial markets to overreact. But the best explanation seems to be that the distribution of stock market prices is non-Gaussian[35] (in which case EMH, in any of its current forms, would not be strictly applicable).[36][37] Other research has shown that psychological factors may result in exaggerated (statistically anomalous) stock price movements (contrary to EMH, which assumes such behaviors 'cancel out'). Psychological research has demonstrated that people are predisposed to 'seeing' patterns, and often will perceive a pattern in what is, in fact, just noise, e.g. seeing familiar shapes in clouds or ink blots. In the present context, this means that a succession of good news items about a company may lead investors to overreact positively, driving the price up. A period of good returns also boosts the investors' self-confidence, reducing their (psychological) risk threshold.[38] Another phenomenon – also from psychology – that works against an objective assessment is group thinking. As social animals, it is not easy to stick to an opinion that differs markedly from that of a majority of the group. An example with which one may be familiar is the reluctance to enter a restaurant that is empty; people generally prefer to have their opinion validated by those of others in the group.
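The Gaussian random-walk picture implied by a strict EMH, and why 1987-style crashes undermine it, can be illustrated with a short simulation. The daily return parameters (mean 0.03%, standard deviation 1%) and the one-year horizon are hypothetical choices for the sketch, not estimates.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Geometric random walk with Gaussian daily returns -- the kind of
# price path a strict (Gaussian) EMH would imply.
mu, sd = 0.0003, 0.01
price, daily_returns = 100.0, []
for _ in range(252):                 # roughly one trading year
    r = random.gauss(mu, sd)
    price *= 1 + r
    daily_returns.append(r)

worst = min(daily_returns)
print(f"worst simulated day: {worst:.2%}")

# A -22.6% day (October 1987) sits about 22 standard deviations below the
# mean under this model -- effectively impossible for a Gaussian, which is
# one reason observed returns are argued to be non-Gaussian (fat-tailed).
print((-0.226 - mu) / sd)
```

In a Gaussian world the worst day in a simulated year is typically only a few standard deviations out; real markets occasionally produce moves the Gaussian model assigns essentially zero probability.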
In one paper the authors draw an analogy withgambling.[39]In normal times the market behaves like a game ofroulette; the probabilities are known and largely independent of the investment decisions of the different players. In times of market stress, however, the game becomes more like poker (herding behavior takes over). The players now must give heavy weight to the psychology of other investors and how they are likely to react psychologically.[40] Stock markets play an essential role in growing industries that ultimately affect the economy through transferring available funds from units that have excess funds (savings) to those who are suffering from funds deficit (borrowings) (Padhi and Naik, 2012). In other words, capital markets facilitate funds movement between the above-mentioned units. This process leads to the enhancement of available financial resources which in turn affects the economic growth positively. Economic and financial theories argue that stock prices are affected by macroeconomic trends. 
Macroeconomic trends include changes in GDP, unemployment rates, national income, price indices, output, consumption, inflation, saving, investment, energy, international trade, immigration, productivity, aging populations, innovations, and international finance.[41] Other cited trends include increasing corporate profit, increasing profit margins, higher concentration of business, lower company income, less vigorous activity, less progress, lower investment rates, lower productivity growth, a smaller employee share of corporate revenues,[42] a decreasing worker-to-beneficiary ratio (5:1 in 1960, 3:1 in 2009, and a projected 2.2:1 in 2030),[43] and an increasing female-to-male ratio of college graduates.[44] Sometimes, the market seems to react irrationally to economic or financial news, even if that news is likely to have no real effect on the fundamental value of securities itself.[45] However, this market behaviour may be more apparent than real, since often such news was anticipated, and a counter-reaction may occur if the news is better (or worse) than expected. Therefore, the stock market may be swayed in either direction by press releases, rumors, euphoria and mass panic. Over the short term, stocks and other securities can be battered or bought up by any number of fast market-changing events, making stock market behavior difficult to predict. Emotions can drive prices up and down, people are generally not as rational as they think, and the reasons for buying and selling are generally accepted. Behaviorists argue that investors often behave irrationally when making investment decisions, thereby incorrectly pricing securities, which causes market inefficiencies, which, in turn, are opportunities to make money.[46] However, the whole notion of EMH is that these non-rational reactions to information cancel out, leaving the prices of stocks rationally determined. A stock market crash is often defined as a sharp dip in share prices of stocks listed on the stock exchanges.
In parallel with various economic factors, a reason for stock market crashes is also due to panic and investing public's loss of confidence. Often, stock market crashes end speculativeeconomic bubbles. There have been famousstock market crashesthat have ended in the loss of billions of dollars and wealth destruction on a massive scale. An increasing number of people are involved in the stock market, especially since thesocial securityandretirement plansare being increasingly privatized and linked tostocksand bonds and other elements of the market. There have been a number of famous stock market crashes like theWall Street Crash of 1929, thestock market crash of 1973–4, theBlack Monday of 1987, theDot-com bubbleof 2000, and the Stock Market Crash of 2008. One of the most famous stock market crashes started October 24, 1929, on Black Thursday. TheDow Jones Industrial Averagelost 50% during this stock market crash. It was the beginning of theGreat Depression. Another famous crash took place on October 19, 1987 – Black Monday. The crash began in Hong Kong and quickly spread around the world. By the end of October, stock markets in Hong Kong had fallen 45.5%, Australia 41.8%, Spain 31%, the United Kingdom 26.4%, the United States 22.68%, and Canada 22.5%. Black Monday itself was the largest one-day percentage decline in stock market history – the Dow Jones fell by 22.6% in a day. The names "Black Monday" and "Black Tuesday" are also used for October 28–29, 1929, which followed Terrible Thursday—the starting day of the stock market crash in 1929. The crash in 1987 raised some puzzles – main news and events did not predict the catastrophe and visible reasons for the collapse were not identified. This event raised questions about many important assumptions of modern economics, namely, thetheory of rational human conduct, thetheory of market equilibriumand theefficient-market hypothesis. 
For some time after the crash, trading in stock exchanges worldwide was halted, since the exchange computers did not perform well owing to the enormous quantity of trades being received at one time. This halt in trading allowed the Federal Reserve System and central banks of other countries to take measures to control the spreading of a financial crisis. In the United States the SEC introduced several new measures of control into the stock market in an attempt to prevent a re-occurrence of the events of Black Monday. Starting in 2007 and lasting through 2009, financial markets experienced one of the sharpest declines in decades; this marked the beginning of the Great Recession. The decline was more widespread than just the stock market: the housing market, lending market, and even global trade experienced unimaginable decline. Sub-prime lending led to the housing bubble bursting and was made famous by movies like The Big Short, in which those holding large mortgages were unwittingly falling prey to lenders. This saw banks and major financial institutions completely fail in many cases, and it took major government intervention to remedy during the period. From October 2007 to March 2009, the S&P 500 fell 57% and would not recover to its 2007 levels until April 2013. The 2020 stock market crash was a major and sudden global stock market crash that began on 20 February 2020 and ended on 7 April. This market crash was due to the sudden outbreak of the global pandemic, COVID-19. The crash ended with a new deal that had a positive impact on the market.[48] Since the early 1990s, many of the largest exchanges have adopted electronic 'matching engines' to bring together buyers and sellers, replacing the open outcry system. Electronic trading now accounts for the majority of trading in many developed countries. Computer systems were upgraded in the stock exchanges to handle larger trading volumes in a more accurate and controlled manner.
The SEC modified the margin requirements in an attempt to lower thevolatilityof common stocks, stock options and the futures market. TheNew York Stock Exchangeand theChicago Mercantile Exchangeintroduced the concept of a circuit breaker. The circuit breaker halts trading if the Dow declines a prescribed number of points for a prescribed amount of time. In February 2012, the Investment Industry Regulatory Organization of Canada (IIROC) introduced single-stock circuit breakers.[49] The movements of the prices in global, regional or local markets are captured in price indices called stock market indices, of which there are many, e.g. theS&P, theFTSE, theEuronextindices and theNIFTY&SENSEXof India. Such indices are usuallymarket capitalizationweighted, with the weights reflecting the contribution of the stock to the index. The constituents of the index are reviewed frequently to include/exclude stocks in order to reflect the changing business environment. Financial innovation has brought many new financial instruments whose pay-offs or values depend on the prices of stocks. Some examples areexchange-traded funds(ETFs),stock indexandstock options,equity swaps,single-stock futures, and stock indexfutures. These last two may be traded onfutures exchanges(which are distinct from stock exchanges—their history traces back tocommodityfutures exchanges), or tradedover-the-counter. As all of these products are onlyderivedfrom stocks, they are sometimes considered to be traded in a (hypothetical)derivatives market, rather than the (hypothetical) stock market. Stock that a trader does not actually own may be traded usingshort selling;margin buyingmay be used to purchase stock with borrowed funds; or,derivativesmay be used to control large blocks of stocks for a much smaller amount of money than would be required by outright purchase or sales. 
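The market-capitalization weighting described above for indices such as the S&P can be sketched with a toy three-stock universe: the index level is the total market value of the constituents divided by a fixed divisor. The constituent names, share counts, and divisor below are invented for illustration.

```python
# Hypothetical three-stock universe: name -> (shares outstanding, price).
constituents = {
    "Alpha": (1_000_000, 50.0),
    "Beta":  (  500_000, 120.0),
    "Gamma": (2_000_000, 10.0),
}

def index_level(universe, divisor):
    # Cap-weighted index: sum of (shares * price), scaled by the divisor.
    total_cap = sum(shares * price for shares, price in universe.values())
    return total_cap / divisor

def weights(universe):
    # Each stock's contribution to the index is its share of total cap.
    total_cap = sum(shares * price for shares, price in universe.values())
    return {name: shares * price / total_cap
            for name, (shares, price) in universe.items()}

divisor = 1_300_000          # chosen so the base level comes out at 100
print(index_level(constituents, divisor))   # 100.0
print(weights(constituents))                # Beta dominates despite fewer shares
```

The divisor is what index providers adjust when constituents change, so that reconstitution itself does not move the index level.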
In short selling, the trader borrows stock (usually from their brokerage, which holds its clients' shares or its own shares on account to lend to short sellers) then sells it on the market, betting that the price will fall. The trader eventually buys back the stock, making money if the price fell in the meantime and losing money if it rose. Exiting a short position by buying back the stock is called "covering". This strategy may also be used by unscrupulous traders in illiquid or thinly traded markets to artificially lower the price of a stock. Hence most markets either prevent short selling or place restrictions on when and how a short sale can occur. The practice of naked shorting is illegal in most (but not all) stock markets. In margin buying, the trader borrows money (at interest) to buy a stock and hopes for it to rise. Most industrialized countries have regulations that require that if the borrowing is based on collateral from other stocks the trader owns outright, it can be a maximum of a certain percentage of those other stocks' value. In the United States, the margin requirements have been 50% for many years (that is, if you want to make a $1,000 investment, you need to put up $500, and there is often a maintenance margin below the $500). A margin call is made if the total value of the investor's account cannot support the loss of the trade. (Upon a decline in the value of the margined securities, additional funds may be required to maintain the account's equity, and with or without notice the margined security or any others within the account may be sold by the brokerage to protect its loan position. The investor is responsible for any shortfall following such forced sales.) Regulation of margin requirements (by the Federal Reserve) was implemented after the Crash of 1929. Before that, speculators typically only needed to put up as little as 10 percent (or even less) of the total investment represented by the stocks purchased.
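The arithmetic of the two leveraged techniques above is simple and worth making explicit: a short seller's profit is the entry price minus the exit price, and a 50% initial margin means half the trade value must be posted as equity. A minimal sketch (function names invented for illustration):

```python
def short_pnl(entry_price, exit_price, shares):
    # Short seller sells high first, buys back later:
    # profit when the price falls, loss when it rises.
    return (entry_price - exit_price) * shares

def initial_margin_required(trade_value, margin_rate=0.50):
    # 50% initial margin, as in the US example above:
    # a $1,000 purchase requires $500 of the trader's own funds.
    return trade_value * margin_rate

print(short_pnl(100.0, 80.0, 10))        # price fell 100 -> 80: +200
print(short_pnl(100.0, 110.0, 10))       # price rose 100 -> 110: -100
print(initial_margin_required(1_000.0))  # 500.0
```

Note the asymmetry this hides: a short position's potential loss is unbounded (the price can rise without limit), while a margin buyer can lose at most the full purchase value.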
Other rules may include the prohibition of free-riding: putting in an order to buy stocks without paying initially (there is normally a three-day grace period for delivery of the stock), but then selling them (before the three days are up) and using part of the proceeds to make the original payment (assuming that the value of the stocks has not declined in the interim). Financial markets can be divided into different subtypes. While the stock market is the marketplace for buying and selling company stocks, the foreign exchange market, also known as forex or FX, is the global marketplace for the purchase and sale of national currencies. It serves several functions, including facilitating currency conversions, managing foreign exchange risk through futures and forwards, and providing a platform for speculative investors to earn a profit on FX trading. The market includes various types of products, such as the spot market, futures market, forward market, swap market, and options market. For example, the spot market involves the immediate buying and selling of currencies, while the forward market allows for the buying and selling of currencies at an agreed exchange rate, with the actual exchange taking place at a future delivery date. The foreign exchange market is needed for facilitating global trade, including investments, the exchange of goods and services, and financial transactions, and it is considered one of the largest markets in the global economy.[52][53] The electronic trading market refers to the digital marketplace where financial instruments such as stocks, bonds, currencies, commodities, and derivatives are bought and sold through online platforms. This market operates via electronic trading platforms, also known as online trading platforms, which are software applications that enable the trading of financial products over a network, typically through a financial intermediary.
Platforms, such aseToro,Plus500,Robinhood, and AvaTrade serve as a digital medium for trading financial instruments and make financial markets more accessible, allowing individual investors to participate in trading without the need for traditional brokers or substantial capital. They also provide features such as real-time market data, stock price analysis, research reports, and news updates, which support decision-making in trading activities.[54] These platforms often incorporate systems, such as theMartingale Trading System, used in forex trading. Additionally, online trading has evolved to includemobile trading apps, enabling transactions to be conducted remotely via smartphones.[55] Many strategies can be classified as eitherfundamental analysisortechnical analysis.Fundamental analysisrefers to analyzing companies by theirfinancial statementsfound inSEC filings, business trends, and general economic conditions.Technical analysisstudies price actions in markets through the use of charts and quantitative techniques to attempt to forecast price trends based on historical performance, regardless of the company's financial prospects. One example of a technical strategy is theTrend followingmethod, used byJohn W. HenryandEd Seykota, which uses price patterns and is also rooted inrisk managementanddiversification. Additionally, many choose to invest via passiveindex funds. In this method, one holds a portfolio of the entire stock market or some segment of the stock market (such as theS&P 500 IndexorWilshire 5000). The principal aim of this strategy is to maximize diversification, minimize taxes from realizing gains, and ride the general trend of the stock market to rise. Responsible investment emphasizes and requires a long-term horizon on the basis offundamental analysisonly, avoiding hazards in the expected return of the investment.Socially responsible investingis another investment preference. 
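A common textbook form of the trend-following idea mentioned above is a moving-average crossover: be long when a short-window average of prices is above a long-window average, flat otherwise. This sketch is a generic illustration of the concept, not the actual systems used by John W. Henry or Ed Seykota; the price series and window lengths are invented.

```python
def moving_average(series, window):
    # Simple moving average; the result starts at index window-1.
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

prices = [10, 11, 12, 13, 12, 11, 10, 9, 10, 12, 14, 15]
fast = moving_average(prices, 3)   # reacts quickly to the trend
slow = moving_average(prices, 5)   # smooths out short-lived moves

# Align the two series: the fast average has more leading values,
# so drop its earliest entries and compare where both are defined.
signals = ["long" if f > s else "flat"
           for f, s in zip(fast[len(fast) - len(slow):], slow)]
print(signals)
# ['long', 'long', 'flat', 'flat', 'flat', 'flat', 'long', 'long']
```

The rule goes flat during the mid-series decline and long again when the uptrend resumes, which is the trend-following behavior in miniature; real systems layer risk management and diversification on top, as the text notes.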
The average annual growth rate of the stock market, as measured by theS&P 500 index, has historically been around 10%.[56]This figure represents the long-term average return and is often cited as a benchmark for assessing the performance of the stock market as a whole. The market's results from one year to the next may vary substantially from the long-term average. For instance, in 2012–2021, the S&P 500 index had an average annual return of 14.8%.[57]However, individual annual returns can fluctuate widely, with some years experiencing negative growth and others seeing substantial gains. While the average stock market return is around 10% per year, there is also the impact ofinflation, resulting in investors' losing purchasing power of 2% to 3% every year due to it, which reduces the real rate of return on investments.[58] Taxation is a consideration of all investment strategies; profit from owning stocks, including dividends received, is subject to different tax rates depending on the type of security and the holding period. Most profit from stock investing is taxed via acapital gains tax. In many countries, the corporations pay taxes to the government and the shareholders once again pay taxes when they profit from owning the stock, known as "double taxation". The Indianstock exchanges,Bombay Stock ExchangeandNational Stock Exchange of India, have been rocked by several high-profile corruption scandals.[59][60]At times, theSecurities and Exchange Board of India(SEBI) has barred various individuals and entities from trading on the exchanges forstock manipulation, especially inilliquidsmall-capandpenny stocks.[61][62][63]
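The effect of inflation on the cited ~10% average can be quantified with the Fisher relation: the real return is the nominal growth deflated by inflation, not a simple subtraction. The 3% inflation figure below is a hypothetical value from the 2–3% range mentioned above.

```python
def real_return(nominal, inflation):
    # Fisher relation: deflate nominal growth by inflation.
    # At low rates this is close to (nominal - inflation), but not equal.
    return (1 + nominal) / (1 + inflation) - 1

def compound(principal, rate, years):
    return principal * (1 + rate) ** years

nominal, inflation = 0.10, 0.03
r = real_return(nominal, inflation)
print(f"real annual return: {r:.2%}")   # ~6.80%, not the naive 7.00%
print(round(compound(10_000, r, 30), 2))
```

Over long horizons the gap between the naive subtraction and the Fisher value compounds into a meaningful difference in projected purchasing power.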
https://en.wikipedia.org/wiki/Stock_market
t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. It is based on Stochastic Neighbor Embedding, originally developed by Geoffrey Hinton and Sam Roweis,[1] for which Laurens van der Maaten and Hinton proposed the t-distributed variant.[2] It is a nonlinear dimensionality reduction technique for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a lower probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence (KL divergence) between the two distributions with respect to the locations of the points in the map. While the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate. A Riemannian variant is UMAP.
t-SNE has been used for visualization in a wide range of applications, including genomics, computer security research,[3] natural language processing, music analysis,[4] cancer research,[5] bioinformatics,[6] geological domain interpretation,[7][8][9] and biomedical signal processing.[10] For a data set with n elements, t-SNE runs in O(n²) time and requires O(n²) space.[11]

Given a set of $N$ high-dimensional objects $\mathbf{x}_1, \dots, \mathbf{x}_N$, t-SNE first computes probabilities $p_{ij}$ that are proportional to the similarity of objects $\mathbf{x}_i$ and $\mathbf{x}_j$, as follows. For $i \neq j$, define

$$p_{j\mid i} = \frac{\exp(-\lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert \mathbf{x}_i - \mathbf{x}_k \rVert^2 / 2\sigma_i^2)}$$

and set $p_{i\mid i} = 0$. Note that the denominator ensures $\sum_j p_{j\mid i} = 1$ for all $i$. As van der Maaten and Hinton explained: "The similarity of datapoint $x_j$ to datapoint $x_i$ is the conditional probability, $p_{j|i}$, that $x_i$ would pick $x_j$ as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at $x_i$."[2] Now define

$$p_{ij} = \frac{p_{j\mid i} + p_{i\mid j}}{2N}.$$

This is motivated by the fact that the marginal probabilities $p_i$ and $p_j$ estimated from the $N$ samples are $1/N$, so the conditional probabilities can be written as $p_{i\mid j} = N p_{ij}$ and $p_{j\mid i} = N p_{ji}$; since $p_{ij} = p_{ji}$, averaging the two conditionals recovers the formula above. Note also that $p_{ii} = 0$ and $\sum_{i,j} p_{ij} = 1$. The bandwidth of the Gaussian kernels $\sigma_i$ is set in such a way that the entropy of the conditional distribution equals a predefined entropy, using the bisection method.
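The construction of $p_{j\mid i}$ and the symmetrized $p_{ij}$ can be sketched as follows. This is a simplified illustration that uses one shared bandwidth `sigma` for all points; real t-SNE tunes $\sigma_i$ per point to hit a target perplexity, as described next.

```python
import numpy as np

def conditional_p(X, sigma):
    """Conditional similarities p_{j|i} under a Gaussian kernel with a
    single shared bandwidth (a sketch; t-SNE uses a per-point sigma_i)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    logits = -d2 / (2 * sigma ** 2)
    np.fill_diagonal(logits, -np.inf)           # enforces p_{i|i} = 0
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)     # each row sums to 1

def joint_p(X, sigma):
    """Symmetrized joint probabilities p_{ij} = (p_{j|i} + p_{i|j}) / (2N)."""
    P = conditional_p(X, sigma)
    return (P + P.T) / (2 * P.shape[0])
```

The joint matrix is symmetric, has a zero diagonal, and sums to 1 over all pairs, matching the properties stated in the text.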
As a result, the bandwidth is adapted to the density of the data: smaller values of $\sigma_i$ are used in denser parts of the data space. The entropy increases with the perplexity of this distribution $P_i$; the relation is

$$\mathrm{Perp}(P_i) = 2^{H(P_i)},$$

where $H(P_i)$ is the Shannon entropy $H(P_i) = -\sum_j p_{j|i} \log_2 p_{j|i}$. The perplexity is a hand-chosen parameter of t-SNE, and as the authors state, "perplexity can be interpreted as a smooth measure of the effective number of neighbors. The performance of SNE is fairly robust to changes in the perplexity, and typical values are between 5 and 50."[2] Since the Gaussian kernel uses the Euclidean distance $\lVert x_i - x_j \rVert$, it is affected by the curse of dimensionality: in high-dimensional data, when distances lose the ability to discriminate, the $p_{ij}$ become too similar (asymptotically, they would converge to a constant). It has been proposed to adjust the distances with a power transform, based on the intrinsic dimension of each point, to alleviate this.[12]

t-SNE aims to learn a $d$-dimensional map $\mathbf{y}_1, \dots, \mathbf{y}_N$ (with $\mathbf{y}_i \in \mathbb{R}^d$ and $d$ typically chosen as 2 or 3) that reflects the similarities $p_{ij}$ as well as possible. To this end, it measures similarities $q_{ij}$ between two points in the map $\mathbf{y}_i$ and $\mathbf{y}_j$, using a very similar approach. Specifically, for $i \neq j$, define

$$q_{ij} = \frac{(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1}}{\sum_k \sum_{l \neq k} (1 + \lVert \mathbf{y}_k - \mathbf{y}_l \rVert^2)^{-1}}$$

and set $q_{ii} = 0$.
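The per-point bandwidth search described above can be sketched with a simple bisection. This is an illustrative sketch, not the reference implementation; the bracketing interval, tolerance, and iteration cap are arbitrary choices.

```python
import numpy as np

def sigma_for_perplexity(dist2_i, target=30.0, tol=1e-5, iters=64):
    """Bisect on sigma_i until 2**H(P_i) matches the target perplexity.
    dist2_i holds squared distances from point i to every other point
    (self excluded). Perplexity grows monotonically with sigma."""
    lo, hi = 1e-10, 1e10
    sigma = 1.0
    for _ in range(iters):
        sigma = (lo + hi) / 2
        p = np.exp(-dist2_i / (2 * sigma ** 2))
        p /= p.sum()
        H = -np.sum(p * np.log2(p + 1e-12))     # Shannon entropy in bits
        perp = 2.0 ** H
        if abs(perp - target) < tol:
            break
        if perp > target:    # too many effective neighbors: shrink bandwidth
            hi = sigma
        else:
            lo = sigma
    return sigma
```

The same loop is run once per datapoint, giving the density-adaptive bandwidths $\sigma_i$ described in the text.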
Herein a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points, in order to allow dissimilar objects to be modeled far apart in the map. The locations of the points $\mathbf{y}_i$ in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution $P$ from the distribution $Q$, that is:

$$\mathrm{KL}(P \parallel Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}.$$

The minimization of the Kullback–Leibler divergence with respect to the points $\mathbf{y}_i$ is performed using gradient descent. The result of this optimization is a map that reflects the similarities between the high-dimensional inputs. While t-SNE plots often seem to display clusters, the visual clusters can be strongly influenced by the chosen parameterization (especially the perplexity), so a good understanding of the parameters for t-SNE is needed. Such "clusters" can be shown to appear even in structured data with no clear clustering,[13] and so may be false findings. Similarly, the size of clusters produced by t-SNE is not informative, and neither is the distance between clusters.[14] Thus, interactive exploration may be needed to choose parameters and validate results.[15][16] It has been shown that t-SNE can often recover well-separated clusters, and with special parameter choices, approximates a simple form of spectral clustering.[17]
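The gradient-descent stage can be sketched using the known t-SNE gradient $\partial\mathrm{KL}/\partial\mathbf{y}_i = 4\sum_j (p_{ij}-q_{ij})(\mathbf{y}_i-\mathbf{y}_j)(1+\lVert\mathbf{y}_i-\mathbf{y}_j\rVert^2)^{-1}$. This is a bare sketch; production implementations add momentum, early exaggeration, and tree-based approximations.

```python
import numpy as np

def q_matrix(Y):
    """Student-t similarities q_{ij} for map points Y, plus intermediates."""
    diff = Y[:, None, :] - Y[None, :, :]
    inv = 1.0 / (1.0 + np.sum(diff ** 2, axis=-1))
    np.fill_diagonal(inv, 0.0)                  # q_{ii} = 0
    return inv / inv.sum(), inv, diff

def kl_divergence(P, Q):
    """KL(P || Q) over pairs with p_{ij} > 0."""
    mask = P > 0
    return np.sum(P[mask] * np.log(P[mask] / Q[mask]))

def tsne_step(Y, P, lr=0.05):
    """One plain gradient-descent step on KL(P || Q)."""
    Q, inv, diff = q_matrix(Y)
    grad = 4.0 * np.sum(((P - Q) * inv)[..., None] * diff, axis=1)
    return Y - lr * grad
```

Iterating `tsne_step` from a random initial layout drives the KL divergence down, which is the optimization the text describes.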
https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
In decision theory, a scoring rule[1] provides evaluation metrics for probabilistic predictions or forecasts. While "regular" loss functions (such as mean squared error) assign a goodness-of-fit score to a predicted value and an observed value, scoring rules assign such a score to a predicted probability distribution and an observed value. On the other hand, a scoring function[2] provides a summary measure for the evaluation of point predictions, i.e. one predicts a property or functional $T(F)$, like the expectation or the median.

Scoring rules answer the question "how good is a predicted probability distribution compared to an observation?" Scoring rules that are (strictly) proper are proven to have the lowest expected score if the predicted distribution equals the underlying distribution of the target variable. Although this might differ for individual observations, predicting the "correct" distributions should minimize the expected score. Scoring rules and scoring functions are often used as "cost functions" or "loss functions" of probabilistic forecasting models. They are evaluated as the empirical mean over a given sample, the "score". Scores of different predictions or models can then be compared to conclude which model is best. For example, consider a model that predicts (based on an input $x$) a mean $\mu \in \mathbb{R}$ and standard deviation $\sigma \in \mathbb{R}_+$. Together, those variables define a Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$; in essence, the model predicts the target variable as a probability distribution. A common interpretation of probabilistic models is that they aim to quantify their own predictive uncertainty.
In this example, an observed target variable $y \in \mathbb{R}$ is then compared to the predicted distribution $\mathcal{N}(\mu, \sigma^2)$ and assigned a score $\mathcal{L}(\mathcal{N}(\mu, \sigma^2), y) \in \mathbb{R}$. When training on a scoring rule, it should "teach" a probabilistic model to predict when its uncertainty is low and when its uncertainty is high, and it should result in calibrated predictions while minimizing the predictive uncertainty. Although the example given concerns the probabilistic forecasting of a real-valued target variable, a variety of scoring rules have been designed with different target variables in mind. Scoring rules exist for binary and categorical probabilistic classification, as well as for univariate and multivariate probabilistic regression.

Consider a sample space $\Omega$, a σ-algebra $\mathcal{A}$ of subsets of $\Omega$, and a convex class $\mathcal{F}$ of probability measures on $(\Omega, \mathcal{A})$. A function defined on $\Omega$ and taking values in the extended real line, $\overline{\mathbb{R}} = [-\infty, \infty]$, is $\mathcal{F}$-quasi-integrable if it is measurable with respect to $\mathcal{A}$ and quasi-integrable with respect to all $F \in \mathcal{F}$. A probabilistic forecast is any probability measure $F \in \mathcal{F}$, i.e. a distribution over potential future observations.
A scoring rule is any extended real-valued function $\mathbf{S} : \mathcal{F} \times \Omega \rightarrow \mathbb{R}$ such that $\mathbf{S}(F, \cdot)$ is $\mathcal{F}$-quasi-integrable for all $F \in \mathcal{F}$. $\mathbf{S}(F, y)$ represents the loss or penalty when the forecast $F \in \mathcal{F}$ is issued and the observation $y \in \Omega$ materializes. A point forecast is a functional, i.e. a potentially set-valued mapping $F \rightarrow T(F) \subseteq \Omega$. A scoring function is any real-valued function $S : \Omega \times \Omega \rightarrow \mathbb{R}$ where $S(x, y)$ represents the loss or penalty when the point forecast $x \in \Omega$ is issued and the observation $y \in \Omega$ materializes.

Scoring rules $\mathbf{S}(F, y)$ and scoring functions $S(x, y)$ are negatively (positively) oriented if smaller (larger) values mean better. Here we adhere to negative orientation, hence the association with "loss". We write

$$\mathbf{S}(F, Q) = \mathbb{E}_{y \sim Q}[\mathbf{S}(F, y)]$$

for the expected score of the predicted distribution $F \in \mathcal{F}$ when observations are sampled from the distribution $Q \in \mathcal{F}$. Many probabilistic forecasting models are trained via the sample average score, in which a set of predicted distributions $F_1, \ldots, F_n \in \mathcal{F}$ is evaluated against a set of observations $y_1, \ldots, y_n \in \Omega$.
Strictly proper scoring rules and strictly consistent scoring functions encourage honest forecasts by maximization of the expected reward: if a forecaster is given a reward of $-\mathbf{S}(F, y)$ when $y$ realizes (e.g. $y = \text{rain}$), then the highest expected reward (lowest score) is obtained by reporting the true probability distribution.[1]

A scoring rule $\mathbf{S}$ is proper relative to $\mathcal{F}$ if (assuming negative orientation) its expected score is minimized when the forecasted distribution matches the distribution of the observation:

$$\mathbf{S}(Q, Q) \leq \mathbf{S}(F, Q) \quad \text{for all } F, Q \in \mathcal{F}.$$

It is strictly proper if the above equation holds with equality if and only if $F = Q$. A scoring function $S$ is consistent for the functional $T$ relative to the class $\mathcal{F}$ if

$$\mathbb{E}_{y \sim F}[S(t, y)] \leq \mathbb{E}_{y \sim F}[S(x, y)] \quad \text{for all } F \in \mathcal{F},\ t \in T(F),\ x \in \Omega.$$

It is strictly consistent if it is consistent and equality in the above equation implies that $x \in T(F)$.

To enforce that correct forecasts are always strictly preferred, Ahmadian et al. (2024) introduced two "superior" variants: the Penalized Brier Score (PBS) and the Penalized Logarithmic Loss (PLL), which add a fixed penalty whenever the predicted class ($\arg\max p$) differs from the true class ($\arg\max y$).[3] PBS augments the Brier score by adding $(c-1)/c$ for any misclassification (with $c$ the number of classes); PLL augments the logarithmic score by adding $-\log(1/c)$ for any misclassification. Despite these penalties, PBS and PLL remain strictly proper: their expected score is uniquely minimized when the forecast equals the true distribution, satisfying the "superior" property that every correct classification is scored strictly better than any incorrect one. Note: neither the standard Brier score nor the logarithmic score satisfies the "superior" criterion.
They remain strictly proper but can assign better scores to incorrect predictions than to certain correct ones—an issue resolved by PBS and PLL.[3]

An example of probabilistic forecasting is in meteorology, where a weather forecaster may give the probability of rain on the next day. One could note the number of times that a 25% probability was quoted over a long period and compare this with the actual proportion of times that rain fell. If the actual percentage was substantially different from the stated probability, we say that the forecaster is poorly calibrated. A poorly calibrated forecaster might be encouraged to do better by a bonus system. A bonus system designed around a proper scoring rule will incentivize the forecaster to report probabilities equal to his personal beliefs.[4] In addition to the simple case of a binary decision, such as assigning probabilities to 'rain' or 'no rain', scoring rules may be used for multiple classes, such as 'rain', 'snow', or 'clear', or for continuous responses like the amount of rain per day. The image to the right shows an example of a scoring rule, the logarithmic scoring rule, as a function of the probability reported for the event that actually occurred. One way to use this rule would be as a cost based on the probability that a forecaster or algorithm assigns, then checking to see which event actually occurs.

There are an infinite number of scoring rules, including entire parameterized families of strictly proper scoring rules. The ones shown below are simply popular examples. For a categorical response variable with $m$ mutually exclusive events, $Y \in \Omega = \{1, \ldots, m\}$, a probabilistic forecaster or algorithm will return a probability vector $\mathbf{r}$ with a probability for each of the $m$ outcomes. The logarithmic scoring rule is a local strictly proper scoring rule.
This is also the negative of surprisal, which is commonly used as a scoring criterion in Bayesian inference; the goal is to minimize expected surprise. This scoring rule has strong foundations in information theory. Here, the score is calculated as the logarithm of the probability estimate for the actual outcome. That is, a prediction of 80% that correctly proved true would receive a score of ln(0.8) = −0.22. This same prediction also assigns 20% likelihood to the opposite case, and so if the prediction proves false, it would receive a score based on the 20%: ln(0.2) = −1.6. The goal of a forecaster is to maximize the score, and −0.22 is indeed larger than −1.6. If one treats the truth or falsity of the prediction as a variable $x$ with value 1 or 0 respectively, and the expressed probability as $p$, then one can write the logarithmic scoring rule as $x \ln(p) + (1 - x)\ln(1 - p)$. Note that any logarithmic base may be used, since strictly proper scoring rules remain strictly proper under linear transformation. That is, $\mathbf{S}(\mathbf{r}, i) = \log_b(r_i)$ is strictly proper for all $b > 1$.

The quadratic scoring rule is a strictly proper scoring rule

$$Q(\mathbf{r}, i) = 2 r_i - \sum_{j=1}^{C} r_j^2,$$

where $r_i$ is the probability assigned to the correct answer and $C$ is the number of classes. The Brier score, originally proposed by Glenn W. Brier in 1950,[5] can be obtained by an affine transform from the quadratic scoring rule:

$$B(\mathbf{r}, \mathbf{y}) = \sum_{j=1}^{C} (r_j - y_j)^2,$$

where $y_j = 1$ when the $j$th event is correct and $y_j = 0$ otherwise, and $C$ is the number of classes. An important difference between these two rules is that a forecaster should strive to maximize the quadratic score $Q$ yet minimize the Brier score $B$. This is due to a negative sign in the linear transformation between them.
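The 80%-rain example above can be checked directly. A minimal sketch of both categorical rules, following the text's conventions (log score is maximized, Brier score is minimized):

```python
import math

def log_score(r, i):
    """Logarithmic score: log of the probability assigned to the
    observed class i (larger is better)."""
    return math.log(r[i])

def brier_score(r, y):
    """Brier score: sum of squared differences between the forecast
    vector r and the outcome indicator vector y (smaller is better)."""
    return sum((rj - yj) ** 2 for rj, yj in zip(r, y))

print(round(log_score([0.8, 0.2], 0), 2))       # -0.22, as in the text
print(round(log_score([0.8, 0.2], 1), 2))       # -1.61
print(round(brier_score([0.8, 0.2], [1, 0]), 2))  # 0.08
```

Note how the Brier score relates to the quadratic score: $B = 1 - Q$, which is the affine transform with the negative sign mentioned in the text.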
The spherical scoring rule is also a strictly proper scoring rule:

$$\mathbf{S}(\mathbf{r}, i) = \frac{r_i}{\lVert \mathbf{r} \rVert_2} = \frac{r_i}{\sqrt{\sum_{j=1}^{C} r_j^2}}.$$

The ranked probability score[6] (RPS) is a strictly proper scoring rule that can be expressed as:

$$\mathrm{RPS}(\mathbf{r}, \mathbf{y}) = \sum_{k=1}^{C-1} \left( \sum_{j=1}^{k} (r_j - y_j) \right)^2,$$

where $y_j = 1$ when the $j$th event is correct and $y_j = 0$ otherwise, and $C$ is the number of classes. Unlike other scoring rules, the ranked probability score considers the distance between classes, i.e. classes 1 and 2 are considered closer than classes 1 and 3. The score assigns better scores to probabilistic forecasts with high probabilities assigned to classes close to the correct class. For example, when considering probabilistic forecasts $\mathbf{r}_1 = (0.5, 0.5, 0)$ and $\mathbf{r}_2 = (0.5, 0, 0.5)$, we find that $\mathrm{RPS}(\mathbf{r}_1, 1) = 0.25$, while $\mathrm{RPS}(\mathbf{r}_2, 1) = 0.5$, despite both probabilistic forecasts assigning identical probability to the correct class.

Shown below on the left is a graphical comparison of the logarithmic, quadratic, and spherical scoring rules for a binary classification problem. The x-axis indicates the reported probability for the event that actually occurred. It is important to note that each of the scores has a different magnitude and location. The magnitude differences are not relevant, however, as scores remain proper under affine transformation. Therefore, to compare different scores it is necessary to move them to a common scale. A reasonable choice of normalization is shown in the picture on the right, where all scores intersect the points (0.5, 0) and (1, 1). This ensures that they yield 0 for a uniform distribution (two probabilities of 0.5 each), reflecting no cost or reward for reporting what is often the baseline distribution. All normalized scores below also yield 1 when the true class is assigned a probability of 1.
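The ranked-probability example above can be checked numerically with a minimal sketch:

```python
def rps(r, y):
    """Ranked probability score: squared differences of the cumulative
    forecast and cumulative outcome vectors, summed over the first C-1
    classes (the final cumulative term is always zero)."""
    total = cr = cy = 0.0
    for rj, yj in zip(r[:-1], y[:-1]):
        cr += rj
        cy += yj
        total += (cr - cy) ** 2
    return total

print(rps([0.5, 0.5, 0.0], [1, 0, 0]))   # 0.25, as in the text
print(rps([0.5, 0.0, 0.5], [1, 0, 0]))   # 0.5
```

The second forecast is penalized more because its remaining mass sits on class 3, farther from the correct class 1, illustrating the distance sensitivity the text describes.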
The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are univariate continuous probability distributions, i.e. the predicted distributions are defined over a univariate target variable $X \in \mathbb{R}$ and have a probability density function $f : \mathbb{R} \to \mathbb{R}_+$.

The logarithmic score is a local, strictly proper scoring rule. It is defined as

$$\mathrm{LogS}(D, y) = -\log f_D(y),$$

where $f_D$ denotes the probability density function of the predicted distribution $D$. The logarithmic score for continuous variables has strong ties to maximum likelihood estimation. However, in many applications, the continuous ranked probability score is often preferred over the logarithmic score, as the logarithmic score can be heavily influenced by slight deviations in the tail densities of forecasted distributions.[7]

The continuous ranked probability score (CRPS)[8] is a strictly proper scoring rule much used in meteorology. It is defined as

$$\mathrm{CRPS}(D, y) = \int_{-\infty}^{\infty} \left( F_D(x) - H(x - y) \right)^2 dx,$$

where $F_D$ is the cumulative distribution function of the forecasted distribution $D$, $H$ is the Heaviside step function, and $y \in \mathbb{R}$ is the observation. For distributions with finite first moment, the continuous ranked probability score can be written as:[1]

$$\mathrm{CRPS}(D, y) = \mathbb{E}|X - y| - \tfrac{1}{2}\,\mathbb{E}|X - X'|,$$

where $X$ and $X'$ are independent random variables sampled from the distribution $D$. Furthermore, when the cumulative distribution function $F$ is continuous, the continuous ranked probability score can also be written as[9]

$$\mathrm{CRPS}(D, y) = 2 \int_0^1 \left( \mathbf{1}\{y \leq F^{-1}(\tau)\} - \tau \right) \left( F^{-1}(\tau) - y \right) d\tau.$$

The continuous ranked probability score can thus be seen both as a continuous extension of the ranked probability score and as connected to quantile regression.
The continuous ranked probability score over the empirical distribution $\hat{D}_q$ of an ordered set of points $q_1 \leq \ldots \leq q_n$ (i.e. every point has $1/n$ probability of occurring) is equal to twice the mean quantile loss applied to those points with evenly spread quantiles $(\tau_1, \ldots, \tau_n) = (1/(2n), \ldots, (2n-1)/(2n))$:[10]

$$\mathrm{CRPS}({\hat {D}}_q, y) = \frac{2}{n} \sum_{i=1}^{n} \left( \mathbf{1}\{y \leq q_i\} - \tau_i \right) \left( q_i - y \right).$$

For many popular families of distributions, closed-form expressions for the continuous ranked probability score have been derived. The continuous ranked probability score has been used as a loss function for artificial neural networks, in which weather forecasts are postprocessed to a Gaussian probability distribution.[11][12] CRPS was also adapted to survival analysis to cover censored events.[13] CRPS is also known as the Cramér–von Mises distance and can be seen as an improvement of the Wasserstein distance (often used in machine learning); furthermore, the Cramér distance performed better in ordinal regression than the KL distance or the Wasserstein metric.[14]

The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are multivariate continuous probability distributions, i.e. the predicted distributions are defined over a multivariate target variable $X \in \mathbb{R}^n$ and have a probability density function $f : \mathbb{R}^n \to \mathbb{R}_+$.

The multivariate logarithmic score is similar to the univariate logarithmic score:

$$\mathrm{LogS}(D, \mathbf{y}) = -\log f_D(\mathbf{y}),$$

where $f_D$ denotes the probability density function of the predicted multivariate distribution $D$. It is a local, strictly proper scoring rule.

The Hyvärinen scoring function (of a density $p$) is defined by[15]

$$s(p; y) = 2\,\Delta \log p(y) + \lVert \nabla \log p(y) \rVert^2,$$

where $\Delta$ denotes the Hessian trace (Laplacian) and $\nabla$ denotes the gradient.
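The energy form of the CRPS lends itself to a simple Monte Carlo estimator from forecast samples. This is an illustrative sketch, not a library implementation; it assumes the forecast has a finite first moment:

```python
import numpy as np

def crps_samples(samples, y):
    """Estimate CRPS from forecast samples via the energy form
    E|X - y| - 0.5 * E|X - X'|."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - y))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return term1 - term2

# A point-mass forecast at 2.0 scored against y = 5.0 gives |2 - 5| = 3.
print(crps_samples([2.0] * 10, 5.0))   # 3.0
```

For a degenerate (point-mass) forecast the score reduces to the absolute error, which is why the CRPS is often described as a probabilistic generalization of the mean absolute error.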
This scoring rule can be used to computationally simplify parameter inference and to address Bayesian model comparison with arbitrarily vague priors.[15][16] It was also used to introduce new information-theoretic quantities beyond existing information theory.[17]

The energy score is a multivariate extension of the continuous ranked probability score:[1]

$$\mathrm{ES}(D, \mathbf{y}) = \mathbb{E}\lVert X - \mathbf{y} \rVert_2^{\beta} - \tfrac{1}{2}\,\mathbb{E}\lVert X - X' \rVert_2^{\beta}.$$

Here, $\beta \in (0, 2)$, $\lVert \cdot \rVert_2$ denotes the $n$-dimensional Euclidean distance, and $X, X'$ are independently sampled random variables from the probability distribution $D$. The energy score is strictly proper for distributions $D$ for which $\mathbb{E}_{X \sim D}[\lVert X \rVert_2]$ is finite. It has been suggested that the energy score is somewhat ineffective when evaluating the intervariable dependency structure of the forecasted multivariate distribution.[18] The energy score is equal to twice the energy distance between the predicted distribution and the empirical distribution of the observation.

The variogram score of order $p$ is given by:[19]

$$\mathrm{VS}_p(D, \mathbf{y}) = \sum_{i,j} w_{ij} \left( |y_i - y_j|^p - \mathbb{E}|X_i - X_j|^p \right)^2.$$

Here, $w_{ij}$ are weights, often set to 1, and $p > 0$ can be arbitrarily chosen, though $p = 0.5, 1$, or $2$ are often used. $X_i$ here denotes the $i$th marginal random variable of $X$. The variogram score is proper for distributions for which the $(2p)$th moment is finite for all components, but it is never strictly proper. Compared to the energy score, the variogram score is claimed to be more discriminative with respect to the predicted correlation structure.

The conditional continuous ranked probability score (Conditional CRPS or CCRPS) is a family of (strictly) proper scoring rules.
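The energy score admits the same kind of sample-based estimate as the CRPS above; the following is an illustrative sketch with `beta` defaulting to 1:

```python
import numpy as np

def energy_score(samples, y, beta=1.0):
    """Monte Carlo estimate of the energy score
    E||X - y||^beta - 0.5 * E||X - X'||^beta, with 0 < beta < 2."""
    s = np.asarray(samples, dtype=float)        # shape (m, n)
    t1 = np.mean(np.linalg.norm(s - np.asarray(y, dtype=float), axis=1) ** beta)
    pair = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
    t2 = 0.5 * np.mean(pair ** beta)
    return t1 - t2

# A point-mass forecast at (1, 2) against y = (4, 6): distance 5, score 5.
print(energy_score(np.tile([1.0, 2.0], (8, 1)), [4.0, 6.0]))   # 5.0
```

In one dimension with `beta = 1` this reduces to the sample CRPS estimator, which is the sense in which the energy score extends the CRPS.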
Conditional CRPS evaluates a forecasted multivariate distribution $D$ by evaluating CRPS over a prescribed set of univariate conditional probability distributions of the predicted multivariate distribution:[20]

Here, $X_i$ is the $i$th marginal variable of $X \sim D$, $\mathcal{T} = (v_i, \mathcal{C}_i)_{i=1}^{k}$ is a set of tuples that defines a conditional specification (with $v_i \in \{1, \ldots, n\}$ and $\mathcal{C}_i \subseteq \{1, \ldots, n\} \setminus \{v_i\}$), and $P_{X \sim D}(X_{v_i} \mid X_j = Y_j \text{ for } j \in \mathcal{C}_i)$ denotes the conditional probability distribution of $X_{v_i}$ given that all variables $X_j$ for $j \in \mathcal{C}_i$ are equal to their respective observations. In the case that $P_{X \sim D}(X_{v_i} \mid X_j = Y_j \text{ for } j \in \mathcal{C}_i)$ is ill-defined (i.e. its conditioning event has zero likelihood), CRPS scores over this distribution are defined as infinite. Conditional CRPS is strictly proper for distributions with finite first moment if the chain rule is included in the conditional specification, meaning that there exists a permutation $\phi_1, \ldots, \phi_n$ of $1, \ldots, n$ such that for all $1 \leq i \leq n$: $(\phi_i, \{\phi_1, \ldots, \phi_{i-1}\}) \in \mathcal{T}$.

All proper scoring rules are equal to weighted sums (integrals with a non-negative weighting functional) of the losses in a set of simple two-alternative decision problems that use the probabilistic prediction, each such decision problem having a particular combination of associated cost parameters for false positive and false negative decisions.
A strictly proper scoring rule corresponds to having a nonzero weighting for all possible decision thresholds. Any given proper scoring rule is equal to the expected losses with respect to a particular probability distribution over the decision thresholds; thus the choice of a scoring rule corresponds to an assumption about the probability distribution of decision problems for which the predicted probabilities will ultimately be employed. For example, the quadratic loss (or Brier) scoring rule corresponds to a uniform probability of the decision threshold being anywhere between zero and one. The classification accuracy score (percent classified correctly), a single-threshold scoring rule which is zero or one depending on whether the predicted probability is on the appropriate side of 0.5, is a proper scoring rule but not a strictly proper scoring rule, because it is optimized (in expectation) not only by predicting the true probability but by predicting any probability on the same side of 0.5 as the true probability.[21][22][23][24][25][26]

A strictly proper scoring rule, whether binary or multiclass, remains strictly proper after an affine transformation.[4] That is, if $S(\mathbf{r}, i)$ is a strictly proper scoring rule, then $a + b S(\mathbf{r}, i)$ with $b \neq 0$ is also a strictly proper scoring rule, though if $b < 0$ the optimization sense of the scoring rule switches between maximization and minimization.

A proper scoring rule is said to be local if its estimate for the probability of a specific event depends only on the probability of that event. This statement is vague in most descriptions, but we can, in most cases, think of it as saying that the optimal solution of the scoring problem "at a specific event" is invariant to all changes in the observation distribution that leave the probability of that event unchanged.
All binary scores are local, because the probability assigned to the event that did not occur is fully determined by the probability of the event that did, leaving no degree of freedom to vary over. Affine functions of the logarithmic scoring rule are the only strictly proper local scoring rules on a finite set that is not binary.

The expected value of a proper scoring rule $S$ can be decomposed into the sum of three components, called uncertainty, reliability, and resolution,[27][28] which characterize different attributes of probabilistic forecasts:

$$\mathbb{E}[S] = \mathrm{UNC} + \mathrm{REL} - \mathrm{RES}.$$

If a score is proper and negatively oriented (such as the Brier score), all three terms are non-negative. The uncertainty component is equal to the expected score of a forecast which constantly predicts the average event frequency. The reliability component penalizes poorly calibrated forecasts, in which the predicted probabilities do not coincide with the event frequencies. The equations for the individual components depend on the particular scoring rule. For the Brier score, they are given by

$$\mathrm{UNC} = \bar{x}(1 - \bar{x}), \qquad \mathrm{REL} = \mathbb{E}_p\!\left[(p - \pi(p))^2\right], \qquad \mathrm{RES} = \mathbb{E}_p\!\left[(\pi(p) - \bar{x})^2\right],$$

where $\bar{x}$ is the average probability of occurrence of the binary event $x$, and $\pi(p)$ is the conditional event probability given $p$, i.e. $\pi(p) = P(x = 1 \mid p)$.
https://en.wikipedia.org/wiki/Scoring_function
A fat binary (or multiarchitecture binary) is a computer executable program or library which has been expanded (or "fattened") with code native to multiple instruction sets, and which can consequently be run on multiple processor types.[1] This results in a file larger than a normal one-architecture binary file, hence the name. The usual method of implementation is to include a version of the machine code for each instruction set, preceded by a single entry point with code compatible with all operating systems, which executes a jump to the appropriate section. Alternative implementations store different executables in different forks, each with its own entry point that is directly used by the operating system.

The use of fat binaries is not common in operating system software; there are several alternatives that solve the same problem, such as the use of an installer program to choose an architecture-specific binary at install time (as with Android multiple APKs), selecting an architecture-specific binary at runtime (as with Plan 9's union directories and GNUstep's fat bundles),[2][3] distributing software in source code form and compiling it in place, or the use of a virtual machine (as with Java) and just-in-time compilation.

In 1988, Apollo Computer's Domain/OS SR10.1 introduced a new file type, "cmpexe" (compound executable), that bundled binaries for Motorola 680x0 and Apollo PRISM executables.[4]

A fat-binary scheme smoothed the Apple Macintosh's transition, beginning in 1994, from 68k microprocessors to PowerPC microprocessors. Many applications for the old platform ran transparently on the new platform under an evolving emulation scheme, but emulated code generally runs slower than native code. Applications released as "fat binaries" took up more storage space, but they ran at full speed on either platform.
This was achieved by packaging both a 68000-compiled version and a PowerPC-compiled version of the same program into their executable files.[5][6] The older 68K code (CFM-68K or classic 68K) continued to be stored in the resource fork, while the newer PowerPC code was contained in the data fork, in PEF format.[7][8][9] Fat binaries were larger than programs supporting only the PowerPC or the 68k, which led to the creation of a number of utilities that would strip out the unneeded version.[5][6] In the era of small hard drives, when 80 MB hard drives were a common size, these utilities were sometimes useful, as program code was generally a large percentage of overall drive usage, and stripping the unneeded members of a fat binary would free up a significant amount of space on a hard drive. Fat binaries were a feature of NeXT's NeXTSTEP/OPENSTEP operating system, starting with NeXTSTEP 3.1. In NeXTSTEP, they were called "Multi-Architecture Binaries". Multi-Architecture Binaries were originally intended to allow software to be compiled to run both on NeXT's Motorola 68k-based hardware and on Intel IA-32-based PCs running NeXTSTEP, with a single binary file for both platforms.[10] The format was later used to allow OPENSTEP applications to run on PCs and the various RISC platforms OPENSTEP supported. Multi-Architecture Binary files are in a special archive format, in which a single file stores one or more Mach-O subfiles for each architecture supported by the Multi-Architecture Binary. Every Multi-Architecture Binary starts with a structure (struct fat_header) containing two unsigned integers. The first integer ("magic") is used as a magic number to identify the file as a fat binary. The second integer (nfat_arch) defines how many Mach-O files the archive contains (how many instances of the same program for different architectures). After this header, there are nfat_arch fat_arch structures (struct fat_arch).
This structure defines the offset (from the start of the file) at which to find the file, the alignment, the size, and the CPU type and subtype that the Mach-O binary (within the archive) targets. The version of the GNU Compiler Collection shipped with the Developer Tools was able to cross-compile source code for the different architectures on which NeXTStep was able to run. For example, it was possible to choose the target architectures by passing multiple '-arch' options (with the architecture as argument). This was a convenient way to distribute a program for NeXTStep running on different architectures. It was also possible to create libraries (e.g. using NeXTStep's libtool) with different targeted object files. Apple Computer acquired NeXT in 1996 and continued to work with the OPENSTEP code. Mach-O became the native object file format in Apple's free Darwin operating system (2000) and Apple's Mac OS X (2001), and NeXT's Multi-Architecture Binaries continued to be supported by the operating system. Under Mac OS X, Multi-Architecture Binaries can be used to support multiple variants of an architecture, for instance to have different versions of 32-bit code optimized for the PowerPC G3, PowerPC G4, and PowerPC 970 generations of processors. They can also be used to support multiple architectures, such as 32-bit and 64-bit PowerPC, PowerPC and x86, or x86-64 and ARM64.[11] In 2005, Apple announced another transition, from PowerPC processors to Intel x86 processors. Apple promoted the distribution of new applications that support both PowerPC and x86 natively by using executable files in Multi-Architecture Binary format.[12] Apple calls such programs "Universal applications" and calls the file format "Universal binary", perhaps as a way to distinguish this transition from the previous one, or from other uses of the Multi-Architecture Binary format.
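The header layout described above can be illustrated by building and parsing a small synthetic fat header in Python. The field order and constants below follow the structures Apple declares in <mach-o/fat.h> and <mach/machine.h>; the cpusubtype values and slice offsets here are arbitrary placeholders, and the actual Mach-O slice data is omitted since only the header is parsed:

```python
import struct

FAT_MAGIC = 0xCAFEBABE        # magic number of a fat binary (stored big-endian)
CPU_TYPE_X86_64 = 0x01000007  # CPU type constants from <mach/machine.h>
CPU_TYPE_ARM64 = 0x0100000C

def parse_fat(data):
    """Parse struct fat_header and the following struct fat_arch entries."""
    magic, nfat_arch = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat_arch):
        # struct fat_arch: cputype, cpusubtype, offset, size, align -- all
        # 32-bit big-endian fields, 20 bytes per entry after the 8-byte header.
        cputype, cpusubtype, offset, size, align = struct.unpack_from(
            ">iiIII", data, 8 + i * 20)
        archs.append({"cputype": cputype, "offset": offset, "size": size})
    return archs

# A minimal synthetic header describing two slices.
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">iiIII", CPU_TYPE_X86_64, 3, 4096, 16, 12)
header += struct.pack(">iiIII", CPU_TYPE_ARM64, 0, 8192, 16, 14)

for arch in parse_fat(header):
    print(hex(arch["cputype"]), arch["offset"], arch["size"])
```

A real loader (or a tool like lipo) uses the offset and size fields to seek to the slice whose cputype matches the running processor.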
Universal binary format was not necessary for forward migration of pre-existing native PowerPC applications; from 2006 to 2011, Apple supplied Rosetta, a PowerPC (PPC)-to-x86 dynamic binary translator, to play this role. However, Rosetta had a fairly steep performance overhead, so developers were encouraged to offer both PPC and Intel binaries, using Universal binaries. The obvious cost of a Universal binary is that every installed executable file is larger, but in the years since the release of the PPC, hard-drive space has greatly outstripped executable size; while a Universal binary might be double the size of a single-platform version of the same application, free-space resources generally dwarf the code size, which becomes a minor issue. In fact, a Universal-binary application will often be smaller than two single-architecture applications because program resources can be shared rather than duplicated. If not all of the architectures are required, the lipo and ditto command-line applications can be used to remove versions from the Multi-Architecture Binary image, thereby creating what is sometimes called a thin binary. In addition, Multi-Architecture Binary executables can contain code for both 32-bit and 64-bit versions of PowerPC and x86, allowing applications to be shipped in a form that supports 32-bit processors but that makes use of the larger address space and wider data paths when run on 64-bit processors. In versions of the Xcode development environment from 2.1 through 3.2 (running on Mac OS X 10.4 through Mac OS X 10.6), Apple included utilities which allowed applications to be targeted for both Intel and PowerPC architectures; universal binaries could eventually contain up to four versions of the executable code (32-bit PowerPC, 32-bit x86, 64-bit PowerPC, and 64-bit x86). However, PowerPC support was removed from Xcode 4.0 and is therefore not available to developers running Mac OS X 10.7 or later.
In 2020, Apple announced another transition, this time from Intel x86 processors to Apple silicon (the ARM64 architecture). To smooth the transition, Apple added support for the Universal 2 binary format; Universal 2 binary files are Multi-Architecture Binary files containing both x86-64 and ARM64 executable code, allowing the binary to run natively on both 64-bit Intel and 64-bit Apple silicon. Additionally, Apple introduced Rosetta 2, dynamic binary translation from the x86 to the ARM64 instruction set, to allow users to run applications that do not have Universal binary variants. In 2006, Apple switched from PowerPC to Intel CPUs, and replaced Open Firmware with EFI. However, by 2008, some of their Macs used 32-bit EFI and some used 64-bit EFI. For this reason, Apple extended the EFI specification with "fat" binaries that contained both 32-bit and 64-bit EFI binaries.[13] CP/M-80, MP/M-80, Concurrent CP/M, CP/M Plus, Personal CP/M-80, SCP and MSX-DOS executables for the Intel 8080 (and Zilog Z80) processor families use the same .COM file extension as DOS-compatible operating systems use for Intel 8086 binaries.[nb 1] In both cases programs are loaded at offset +100h and executed by jumping to the first byte in the file.[14][15] As the opcodes of the two processor families are not compatible, attempting to start a program under the wrong operating system leads to incorrect and unpredictable behaviour. In order to avoid this, some methods have been devised to build fat binaries which contain both a CP/M-80 and a DOS program, preceded by initial code which is interpreted correctly on both platforms.[15] The methods either combine two fully functional programs, each built for its corresponding environment, or add stubs which cause the program to exit gracefully if started on the wrong processor.
For this to work, the first few instructions (sometimes also called gadget headers[16]) in the .COM file have to be valid code for both 8086 and 8080 processors, causing the two processors to branch into different locations within the code.[16] For example, the utilities in Simeon Cran's emulator MyZ80 start with the opcode sequence EBh, 52h, EBh.[17][18] An 8086 sees this as a jump and reads its next instruction from offset +154h, whereas an 8080 or compatible processor goes straight through and reads its next instruction from +103h. A similar sequence used for this purpose is EBh, 03h, C3h.[19][20] John C. Elliott's FATBIN[21][22][23] is a utility to combine a CP/M-80 and a DOS .COM file into one executable.[17][24] His derivative of the original PMsfx modifies archives created by Yoshihiko Mino's PMarc to be self-extractable under both CP/M-80 and DOS, starting with EBh, 18h, 2Dh, 70h, 6Dh, 73h, 2Dh to also include the "-pms-" signature for self-extracting PMA archives,[25][17][24][18] thereby also representing a form of executable ASCII code. Another method to keep a DOS-compatible operating system from erroneously executing .COM programs for CP/M-80 and MSX-DOS machines[15] is to start the 8080 code with C3h, 03h, 01h, which is decoded as a "RET" instruction by x86 processors, thereby gracefully exiting the program,[nb 2] while it is decoded as a "JP 103h" instruction by 8080 processors, which simply jump to the next instruction in the program. Similarly, the CP/M assembler Z80ASM+ by SLR Systems would display an error message when erroneously run on DOS.[17] Some CP/M-80 3.0 .COM files may have one or more RSX overlays attached to them by GENCOM.[26] If so, they start with an extra 256-byte header (one page).
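The offsets quoted for the EBh, 52h, EBh sequence can be checked with a few lines of arithmetic: on the 8086, EB rel8 is a short jump relative to the instruction after the two-byte jump, while on the 8080 all three bytes decode as one-byte instructions and execution falls through:

```python
LOAD = 0x100                      # .COM images are loaded at offset +100h on both systems
seq = bytes([0xEB, 0x52, 0xEB])

# 8086: EB 52 is "JMP rel8" -- target = address of next instruction + displacement.
next_ip_8086 = LOAD + 2 + seq[1]

# 8080: EBh (XCHG), 52h (MOV D,D) and EBh are all one-byte do-nothing
# instructions here, so execution simply falls through to the next byte.
next_pc_8080 = LOAD + 3

print(hex(next_ip_8086), hex(next_pc_8080))  # 0x154 0x103
```

This matches the article's +154h and +103h continuation points for the two processor families.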
To indicate this, the first byte in the header is set to the magic byte C9h, which works both as a signature identifying this type of COM file to the CP/M 3.0 executable loader, and as a "RET" instruction for 8080-compatible processors, which leads to a graceful exit if the file is executed under older versions of CP/M-80.[nb 2] C9h is never appropriate as the first byte of a program for any x86 processor (it has different meanings for different generations,[nb 3] but is never a meaningful first byte); the executable loader in some versions of DOS rejects COM files that start with C9h, avoiding incorrect operation. Similar overlapping code sequences have also been devised for combined Z80/6502,[17] 8086/68000[17] or x86/MIPS/ARM binaries.[16] CP/M-86 and DOS do not share a common file extension for executables.[nb 1] Thus, it is not normally possible to confuse executables. However, early versions of DOS had so much in common with CP/M in terms of architecture that some early DOS programs were developed to share binaries containing executable code. One program known to do this was WordStar 3.2x, which used identical overlay files in its ports for CP/M-86 and MS-DOS,[27] and used dynamically fixed-up code to adapt to the differing calling conventions of these operating systems at runtime.[27] Digital Research's GSX for CP/M-86 and DOS also shares binary-identical 16-bit drivers.[28] DOS device drivers (typically with file extension .SYS) start with a file header whose first four bytes are FFFFFFFFh by convention, although this is not a requirement.[29] This is fixed up dynamically by the operating system when the driver loads (typically in the DOS BIOS when it executes DEVICE statements in CONFIG.SYS).
Since DOS does not reject files with a .COM extension from being loaded per DEVICE and does not test for FFFFFFFFh, it is possible to combine a COM program and a device driver into the same file[30][29] by placing a jump instruction to the entry point of the embedded COM program within the first four bytes of the file (three bytes are usually sufficient).[29] If the embedded program and the device driver sections share a common portion of code or data, it is necessary for the code to deal with being loaded at offset +0100h as a .COM-style program and at +0000h as a device driver.[30] For shared code loaded at the "wrong" offset but not designed to be position-independent, this requires an internal address fix-up[30] similar to what would otherwise already have been carried out by a relocating loader, except that in this case it has to be done by the loaded program itself; this is similar to the situation with self-relocating drivers, but with the program already loaded at the target location by the operating system's loader. Under DOS, some files, by convention, have file extensions which do not reflect their actual file type.[nb 4] For example, COUNTRY.SYS[31] is not a DOS device driver,[nb 5] but a binary NLS database file for use with the CONFIG.SYS COUNTRY directive and the NLSFUNC driver.[31] Likewise, the PC DOS and DR-DOS system files IBMBIO.COM and IBMDOS.COM are special binary images loaded by bootstrap loaders, not COM-style programs.[nb 5] Trying to load COUNTRY.SYS with a DEVICE statement or executing IBMBIO.COM or IBMDOS.COM at the command prompt will cause unpredictable results.[nb 4][nb 6] It is sometimes possible to avoid this by utilizing techniques similar to those described above. For example, DR-DOS 7.02 and higher incorporate a safety feature developed by Matthias R.
Paul:[32] if these files are called inappropriately, tiny embedded stubs will just display some file version information and exit gracefully.[33][32][34][31] Additionally, the message is specifically crafted to follow certain "magic" patterns recognized by the external NetWare & DR-DOS VERSION file identification utility.[31][32][nb 7] A similar protection feature was the 8080 instruction C7h ("RST 0") at the very start of Jay Sage's and Joe Wright's Z-System type-3 and type-4 "Z3ENV" programs[35][36] as well as "Z3TXT" language overlay files,[37] which would result in a warm boot (instead of a crash) under CP/M-80 if loaded inappropriately.[35][36][37][nb 2] In a distantly similar fashion, many (binary) file formats by convention include a 1Ah byte (ASCII ^Z) near the beginning of the file. This control character is interpreted as a "soft" end-of-file (EOF) marker when a file is opened in non-binary mode, and thus, under many operating systems (including the PDP-6 monitor[38] and RT-11, VMS, TOPS-10,[39] CP/M,[40][41] DOS,[42] and Windows[43]), it prevents "binary garbage" from being displayed when a file is accidentally printed at the console. FatELF[44] was a fat-binary implementation for Linux and other Unix-like operating systems. Technically, a FatELF binary is a concatenation of ELF binaries with some metadata indicating which binary to use on which architecture.[45] In addition to the CPU architecture abstraction (byte order, word size, CPU instruction set, etc.), there is the advantage of binaries with support for multiple kernel ABIs and versions. FatELF has several use cases, according to its developers.[44] A proof-of-concept Ubuntu 9.04 image is available.[47] As of 2021, FatELF has not been integrated into the mainline Linux kernel.[48][49] Although the Portable Executable format used by Windows does not allow assigning code to platforms, it is still possible to make a loader program that dispatches based on architecture.
This is because desktop versions of Windows on ARM support 32-bit x86 emulation, making it a useful "universal" machine-code target. Fatpack is a loader that demonstrates the concept: it includes a 32-bit x86 program that tries to run the executables packed into its resource sections one by one.[50] When developing Windows 11 on ARM64, Microsoft introduced a new way to extend the Portable Executable format, called Arm64X.[51] An Arm64X binary contains all the content that would be in separate x64/Arm64EC and Arm64 binaries, merged into one more efficient file on disk. The Visual C++ toolset has been upgraded to support producing such binaries, and when building Arm64X binaries is technically difficult, developers can build Arm64X pure forwarder DLLs instead.[52] The following approaches are similar to fat binaries in that multiple versions of machine code for the same purpose are provided in the same file. Since 2007, some specialized compilers for heterogeneous platforms have produced code files for parallel execution on multiple types of processors. For example, the CHI (C for Heterogeneous Integration) compiler from the Intel EXOCHI (Exoskeleton Sequencer) development suite extends the OpenMP pragma concept for multithreading to produce fat binaries containing code sections for different instruction set architectures (ISAs), from which the runtime loader can dynamically initiate parallel execution on multiple available CPU and GPU cores in a heterogeneous system environment.[53][54] Introduced in 2006, Nvidia's parallel computing platform CUDA (Compute Unified Device Architecture) enables general-purpose computing on GPUs (GPGPU). Its LLVM-based compiler NVCC can create ELF-based fat binaries containing so-called PTX virtual assembly (as text), which the CUDA runtime driver can later just-in-time compile into SASS (Streaming Assembler) binary executable code for the target GPU actually present.
The executables can also include so-called CUDA binaries (aka cubin files) containing dedicated executable code sections for one or more specific GPU architectures, from which the CUDA runtime can choose at load time.[55][56][57][58][59][60] Fat binaries are also supported by GPGPU-Sim, a GPU simulator introduced in 2007 as well.[61][62] Multi2Sim (M2S), an OpenCL heterogeneous system simulator framework (originally only for either MIPS or x86 CPUs, but later extended to also support ARM CPUs and GPUs like the AMD/ATI Evergreen and Southern Islands as well as Nvidia Fermi and Kepler families),[63] supports ELF-based fat binaries as well.[64][63] GNU Compiler Collection (GCC) and LLVM do not have a fat binary format, but they do have fat object files for link-time optimization (LTO). Since LTO involves delaying compilation to link time, the object files must store the intermediate representation (IR), but on the other hand machine code may need to be stored too (for speed or compatibility). An LTO object containing both IR and machine code is known as a fat object.[65] Even in a program or library intended for the same instruction set architecture, a programmer may wish to make use of some newer instruction set extensions while keeping compatibility with an older CPU. This can be achieved with function multi-versioning (FMV): versions of the same function are written into the program, and a piece of code decides which one to use by detecting the CPU's capabilities (such as through CPUID). Intel C++ Compiler, GCC, and LLVM all have the ability to automatically generate multi-versioned functions.[66] This is a form of dynamic dispatch without any semantic effects. Many math libraries feature hand-written assembly routines that are automatically chosen according to CPU capability. Examples include glibc, Intel MKL, and OpenBLAS.
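The dispatch pattern behind function multi-versioning can be sketched in a language-neutral way: probe the CPU's capabilities once, then bind the best implementation before any calls are made. This is only a simplified illustration of the pattern; real FMV is resolved by the compiler and loader (e.g. via CPUID and an ifunc resolver), and the probe function here is a hypothetical stand-in rather than an actual capability query:

```python
def probe_capabilities():
    """Stand-in for a CPUID query; a real implementation might parse
    /proc/cpuinfo on Linux. Here we pretend the CPU advertises AVX2."""
    return {"avx2"}

def dot_generic(a, b):
    # Baseline version that runs on any CPU.
    return sum(x * y for x, y in zip(a, b))

def dot_avx2(a, b):
    # In real FMV this would be a vectorized clone of the same function;
    # both versions must compute identical results (no semantic effects).
    return sum(x * y for x, y in zip(a, b))

# Resolve once, at load time, the way an ifunc resolver would.
dot = dot_avx2 if "avx2" in probe_capabilities() else dot_generic

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```

Because every version must be semantically identical, callers never observe which clone was selected, only its speed.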
In addition, the library loader in glibc supports loading from alternative paths for specific CPU features.[67] A similar, but byte-level granular, approach originally devised by Matthias R. Paul and Axel C. Frinke is to let a small self-discarding, relaxing and relocating loader embedded into the executable file, alongside any number of alternative binary code snippets, conditionally build a size- or speed-optimized runtime image of a program or driver necessary to perform (or not perform) a particular function in a particular target environment at load time, through a form of dynamic dead code elimination (DDCE).[68][69][70][71]
https://en.wikipedia.org/wiki/Function_multi-versioning
Modularity is a measure of the structure of networks or graphs that quantifies the strength of division of a network into modules (also called groups, clusters or communities). Networks with high modularity have dense connections between the nodes within modules but sparse connections between nodes in different modules. Modularity is often used in optimization methods for detecting community structure in networks. Biological networks, including animal brains, exhibit a high degree of modularity. However, modularity maximization is not statistically consistent and finds communities even in its own null model, i.e. in fully random graphs, and therefore it cannot be used to find statistically significant community structures in empirical networks. Furthermore, it has been shown that modularity suffers from a resolution limit and is therefore unable to detect small communities. Many scientifically important problems can be represented and empirically studied using networks. For example, biological and social patterns, the World Wide Web, metabolic networks, food webs, neural networks and pathological networks are real-world problems that can be mathematically represented and topologically studied to reveal some unexpected structural features.[1] Most of these networks possess a certain community structure that has substantial importance in building an understanding of the dynamics of the network. For instance, a closely connected social community will imply a faster rate of transmission of information or rumours among its members than a loosely connected community. Thus, if a network is represented by a number of individual nodes connected by links which signify a certain degree of interaction between the nodes, communities are defined as groups of densely interconnected nodes that are only sparsely connected with the rest of the network.
Hence, it may be imperative to identify the communities in networks, since communities may have quite different properties, such as node degree, clustering coefficient, betweenness, and centrality,[2] from those of the average network. Modularity is one such measure, which, when maximized, leads to the appearance of communities in a given network. Modularity is the fraction of the edges that fall within the given groups minus the expected fraction if edges were distributed at random. The value of the modularity for unweighted and undirected graphs lies in the range [−1/2, 1].[3] It is positive if the number of edges within groups exceeds the number expected on the basis of chance. For a given division of the network's vertices into modules, modularity reflects the concentration of edges within modules compared with a random distribution of links between all nodes regardless of modules. There are different methods for calculating modularity.[1] In the most common version of the concept, the randomization of the edges is done so as to preserve the degree of each vertex. Consider a graph with n nodes and m links (edges) such that the graph can be partitioned into two communities using a membership variable s. If a node v belongs to community 1, s_v = 1; if v belongs to community 2, s_v = −1. Let the adjacency matrix for the network be represented by A, where A_vw = 0 means there is no edge (no interaction) between nodes v and w, and A_vw = 1 means there is an edge between the two. For simplicity we consider an undirected network, so A_vw = A_wv. (Multiple edges may exist between two nodes, but here we assess the simplest case.)
Modularity Q is then defined as the fraction of edges that fall within group 1 or 2, minus the expected number of edges within groups 1 and 2 for a random graph with the same node degree distribution as the given network. The expected number of edges is computed using the concept of a configuration model.[4] The configuration model is a randomized realization of a particular network. Given a network with n nodes, where each node v has degree k_v, the configuration model cuts each edge into two halves; each half-edge, called a stub, is then rewired randomly to any other stub in the network, even allowing self-loops (which occur when a stub is rewired to another stub of the same node) and multiple edges between the same two nodes. Thus, even though the degree distribution of the graph remains intact, the configuration model results in a completely random network. Now consider two nodes v and w, with degrees k_v and k_w respectively, from a randomly rewired network as described above. We calculate the expected number of full edges between these nodes. Consider each of the k_v stubs of node v and create associated indicator variables I_i^(v,w) for them, i = 1, …, k_v, with I_i^(v,w) = 1 if the i-th stub happens to connect to one of the k_w stubs of node w in this particular random graph, and I_i^(v,w) = 0 otherwise.
Since the i-th stub of node v can connect to any of the 2m − 1 remaining stubs with equal probability (where m is the number of edges in the original graph), and since there are k_w stubs it can connect to associated with node w, evidently E[I_i^(v,w)] = k_w / (2m − 1). The total number of full edges J_vw between v and w is just J_vw = Σ_{i=1}^{k_v} I_i^(v,w), so the expected value of this quantity is E[J_vw] = k_v k_w / (2m − 1). Many texts then make the following approximations for random networks with a large number of edges. When m is large, they drop the subtraction of 1 in the denominator above and simply use the approximate expression k_v k_w / (2m) for the expected number of edges between two nodes. Additionally, in a large random network the number of self-loops and multi-edges is vanishingly small.[5] Ignoring self-loops and multi-edges allows one to assume that there is at most one edge between any two nodes. In that case, J_vw becomes a binary indicator variable, so its expected value is also the probability that it equals 1, which means one can approximate the probability of an edge existing between nodes v and w as k_v k_w / (2m). Hence, the difference between the actual number of edges between nodes v and w and the expected number of edges between them is A_vw − k_v k_w / (2m). Summing over all node pairs gives the equation for modularity:[1] Q = (1/2m) Σ_vw [A_vw − k_v k_w / (2m)] (s_v s_w + 1)/2. (3) It is important to note that Eq. 3 holds for partitioning into two communities only. Hierarchical partitioning (i.e.
partitioning into two communities, then further partitioning each of the two sub-communities into two smaller sub-communities, only to maximize Q) is a possible approach to identify multiple communities in a network. Additionally, (3) can be generalized for partitioning a network into c communities,[6] Q = Σ_i (e_ii − a_i²), where e_ij is the fraction of edges with one end vertex in community i and the other in community j, and a_i = Σ_j e_ij is the fraction of ends of edges that are attached to vertices in community i. We consider an undirected network with 10 nodes and 12 edges and the following adjacency matrix. The communities in the graph are represented by the red, green and blue node clusters in Fig 1. The optimal community partitions are depicted in Fig 2. An alternative formulation of the modularity, useful particularly in spectral optimization algorithms, is as follows.[1] Define S_vr to be 1 if vertex v belongs to group r and 0 otherwise. Then δ(c_v, c_w) = Σ_r S_vr S_wr, and hence Q = (1/2m) Σ_vw Σ_r [A_vw − k_v k_w/(2m)] S_vr S_wr = (1/2m) Tr(SᵀBS), where S is the (non-square) matrix having elements S_vr and B is the so-called modularity matrix, which has elements B_vw = A_vw − k_v k_w/(2m). All rows and columns of the modularity matrix sum to zero, which means that the modularity of an undivided network is always 0. For networks divided into just two communities, one can alternatively define s_v = ±1 to indicate the community to which node v belongs, which then leads to Q = (1/4m) Σ_vw B_vw s_v s_w = (1/4m) sᵀBs, where s is the column vector with elements s_v.[1] This function has the same form as the Hamiltonian of an Ising spin glass, a connection that has been exploited to create simple computer algorithms, for instance using simulated annealing, to maximize the modularity.
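For a given partition, Q can be computed directly from the adjacency matrix by summing A_vw − k_v k_w/(2m) over all pairs of nodes in the same community and dividing by 2m. A minimal sketch (the example graph, two triangles joined by a single bridge edge, is illustrative and not from the source):

```python
def modularity(adj, communities):
    """Q = (1/2m) * sum over v,w of (A_vw - k_v*k_w/2m) * delta(c_v, c_w)."""
    n = len(adj)
    k = [sum(row) for row in adj]  # node degrees
    two_m = sum(k)                 # 2m = sum of all degrees
    q = 0.0
    for v in range(n):
        for w in range(n):
            if communities[v] == communities[w]:
                q += adj[v][w] - k[v] * k[w] / two_m
    return q / two_m

# Two triangles (nodes 0-2 and 3-5) joined by the bridge edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
A = [[0] * 6 for _ in range(6)]
for v, w in edges:
    A[v][w] = A[w][v] = 1

print(modularity(A, [1, 1, 1, 2, 2, 2]))  # 6/7 - 1/2 = 0.357142...
```

Here 6 of the 7 edges fall within communities (6/7), while the null-model expectation for the within-community edge fraction is 1/2, giving Q ≈ 0.357; merging everything into one community gives Q = 0, as expected from the zero row sums of the modularity matrix.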
The general form of the modularity for an arbitrary number of communities is equivalent to a Potts spin glass, and similar algorithms can be developed for this case as well.[7] Although the method of modularity maximization is motivated by computing a deviation from a null model, this deviation is not computed in a statistically consistent manner.[8] Because of this, the method notoriously finds high-scoring communities in its own null model[9] (the configuration model), which by definition cannot be statistically significant. For this reason, the method cannot be used to reliably obtain statistically significant community structure in empirical networks. Modularity compares the number of edges inside a cluster with the expected number of edges that one would find in the cluster if the network were a random network with the same number of nodes, where each node keeps its degree but edges are otherwise randomly attached. This random null model implicitly assumes that each node can get attached to any other node of the network. This assumption is, however, unreasonable if the network is very large, as the horizon of a node includes only a small part of the network, ignoring most of it. Moreover, this implies that the expected number of edges between two groups of nodes decreases as the size of the network increases. So, if a network is large enough, the expected number of edges between two groups of nodes in modularity's null model may be smaller than one. If this happens, a single edge between the two clusters would be interpreted by modularity as a sign of a strong correlation between the two clusters, and optimizing modularity would lead to the merging of the two clusters, independently of the clusters' features.
So even weakly interconnected complete graphs, which have the highest possible density of internal edges and represent the best identifiable communities, would be merged by modularity optimization if the network were sufficiently large.[10] For this reason, optimizing modularity in large networks would fail to resolve small communities, even when they are well defined. This bias is inevitable for methods, like modularity optimization, which rely on a global null model.[11] There are two main approaches which try to solve the resolution limit within the modularity context: the addition of a resistance r to every node, in the form of a self-loop, which increases (r > 0) or decreases (r < 0) the aversion of nodes to form communities;[12] or the addition of a parameter γ > 0 in front of the null-case term in the definition of modularity, which controls the relative importance between internal links of the communities and the null model.[7] Optimizing modularity for values of these parameters in their respective appropriate ranges makes it possible to recover the whole mesoscale of the network, from the macroscale in which all nodes belong to the same community to the microscale in which every node forms its own community, hence the name multiresolution methods. However, it has been shown that these methods have limitations when communities are very heterogeneous in size.[13] There are a couple of software tools available that are able to compute clusterings in graphs with good modularity.
https://en.wikipedia.org/wiki/Modularity_(networks)
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function. Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability, and many other disciplines, aka "ilities", necessary for successful system design, development, implementation, and ultimate decommissioning, become more difficult when dealing with large or complex projects. Systems engineering deals with work processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, production systems engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole. The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high-quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved and identifying the most probable or highest-impact failures that can occur. Systems engineering involves finding solutions to these problems.
The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s.[1] The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. military, to apply the discipline.[2][3] When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly.[4] The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including Universal Systems Language (USL), Unified Modeling Language (UML), Quality function deployment (QFD), and Integration Definition (IDEF). In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education. As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995.[5] Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers.[6] Systems engineering signifies only an approach and, more recently, a discipline in engineering.
The aim of education in systems engineering is to formalize various approaches simply and, in doing so, to identify new methods and research opportunities similar to those that occur in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor. The traditional scope of engineering embraces the conception, design, development, production, and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts. The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy,[13] and the term continues to apply to both the narrower and a broader scope. Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning, especially when humans were seen as an essential component of a system. Peter Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement."[14]: 10 Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK)[15] has defined three types of systems engineering. Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al.
claim that the systems engineering process can be decomposed into a Management Process and a Technical Process. Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan.[16] Although there are several models used in industry, depending on their application, all of them aim to identify the relation between the various stages mentioned above and to incorporate feedback. Examples of such models include the Waterfall model and the VEE model (also called the V model).[17] System development often requires contribution from diverse technical disciplines.[18] By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal. In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item.[19] This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment.[20][21] The need for systems engineering arose with the increase in complexity of systems and projects, in turn exponentially increasing the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems but also the logical human organization of data.
At the same time, a system can become more complex due to an increase in size as well as an increase in the amount of data, variables, or the number of fields involved in the design. The International Space Station is an example of such a system. The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems.[22] Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior of and interaction among system components is not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, marketing organizations, and technical specifications is successfully bridged.[23] The principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels.[24] Besides defense and aerospace, many information and technology-based companies, software development firms, and industries in the field of electronics & communications require systems engineers as part of their team.[25] An analysis by the INCOSE Systems Engineering Center of Excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort.[26] At the same time, studies have shown that systems engineering essentially leads to a reduction in costs, among other benefits.[26] However, no quantitative survey at a larger scale encompassing a wide variety of industries had been conducted until recently.
Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering.[27][28] Systems engineering encourages the use of modeling and simulation to validate assumptions or theories on systems and the interactions within them.[29][30] Methods that allow the early detection of possible failures, as in safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees that today's decisions will still be valid when a system goes into service years or decades after it is first conceived. However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's system dynamics method, and the Unified Modeling Language (UML)—all currently being explored, evaluated, and developed to support the engineering decision process. Education in systems engineering is often seen as an extension to the regular engineering courses,[31] reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g. aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering) plus practical, real-world experience to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon; degrees including such material are most often presented as a BS in Industrial Engineering.
Typically, programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either an MS/MEng or Ph.D./EngD degree. INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology, maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions.[6] As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively. Education in systems engineering can be taken as systems-centric or domain-centric. Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core engineer.[32] Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purpose of these tools varies from database management, graphical browsing, simulation, and reasoning to document production, neutral import/export, and more.[33] There are many definitions of what a system is in the field of systems engineering. Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and which need to be carried out to convert a system definition to a sufficiently detailed system design specification for product manufacture and deployment.
Design and development of a system can be divided into four stages, each with different definitions.[41] Depending on their application, tools are used for various stages of the systems engineering process.[23] Models play important and diverse roles in systems engineering, and a model can be defined in several ways.[42] Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e. quantitative) models used in the trade study process. This section focuses on the last.[42] The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation.[42] Furthermore, key to successful systems engineering activities are also the methods with which these models are efficiently and effectively managed and used to simulate the systems.
However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements are aiming to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'.[43][page needed] Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate a system's functional and data requirements.[44] A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Each of these methods is used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system. Once the requirements are understood, it is the responsibility of a systems engineer to refine them and to determine, along with other engineers, the best technology for a job. At this point, starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all criteria that are important. The trade study in turn informs the design, which again affects graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods.
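A weighted decision matrix of the kind described above can be sketched in a few lines. The criteria, weights, option names, and scores below are hypothetical, chosen only to illustrate the mechanics of the weighted-sum choice:

```python
def weighted_decision(criteria_weights, scores):
    """Rank design options by the weighted sum of their criterion scores.

    criteria_weights: dict criterion -> weight
    scores: dict option -> dict criterion -> score
    Returns a list of (option, total) pairs, best first.
    """
    totals = {
        option: sum(criteria_weights[c] * s for c, s in crit_scores.items())
        for option, crit_scores in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical trade study: two candidate designs scored 0-10 on each criterion.
weights = {"cost": 0.5, "reliability": 0.3, "schedule": 0.2}
options = {
    "design A": {"cost": 3, "reliability": 9, "schedule": 5},
    "design B": {"cost": 7, "reliability": 6, "schedule": 6},
}
ranking = weighted_decision(weights, options)
print(ranking)  # design B scores highest under these weights
```

Changing the weights (e.g. emphasizing reliability over cost) can reverse the ranking, which is why the choice of weights is itself part of the trade study.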
Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification, and validation of a broad range of complex systems.[45] Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering that supports the full lifecycle: conceptual, utilization, support, and retirement stages.[46] Many related fields may be considered tightly coupled to systems engineering, and the following areas have contributed to its development as a distinct entity. Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human–machine systems or sociotechnical systems.[47] The three main themes of CSE are how humans cope with complexity, how work is accomplished by the use of artifacts, and how human–machine systems and sociotechnical systems can be described as joint cognitive systems. Since its beginning, CSE has become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a Joint Cognitive System (JCS) in particular has become widely used as a way of understanding how complex sociotechnical systems can be described with varying degrees of resolution. The more than 20 years of experience with CSE has been described extensively.[48][49] Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering: where systems engineering deals with requirements development, allocation to development items, and verification, configuration management deals with requirements capture, traceability to the development item, and audit of the development item to ensure that it has achieved the desired functionality and outcomes that systems engineering and/or test and verification engineering have obtained and proven through objective testing.
Control engineering, with its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process. Industrial engineering is a branch of engineering that concerns the development, improvement, implementation, and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material, and process. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as the mathematical, physical, and social sciences, together with the principles and methods of engineering analysis and design to specify, predict, and evaluate results obtained from such systems. Production Systems Engineering (PSE) is an emerging branch of engineering intended to uncover fundamental principles of production systems and utilize them for analysis, continuous improvement, and design.[50] Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces are able to accept new features, including mechanical, electrical, and logical interfaces, with provisions such as reserved wires, plug-space, command codes, and bits in communication protocols. This is known as extensibility. Human–Computer Interaction (HCI) or Human–Machine Interface (HMI) is another aspect of interface design and a critical aspect of modern systems engineering. Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks.
Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamic systems modeling to express tangible constructs. In that regard it is almost indistinguishable from systems engineering, but what sets it apart is a focus on smaller details rather than larger generalizations and relationships. As such, the two fields are distinguished by the scope of their projects rather than the methodology of their practice. Operations research supports systems engineering. Operations research, briefly, is concerned with the optimization of a process under multiple constraints.[51][52] Performance engineering is the discipline of ensuring a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed, or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued to execute are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay or the number of packets switched in an hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementations involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes. Program management (or project management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering. Both include scheduling as an engineering support tool in assessing interdisciplinary concerns under the management process.
In particular, the direct relationship of resources, performance features, and risk to the duration of a task, or the dependency links among tasks and impacts across the system lifecycle, are systems engineering concerns. Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal. Reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life (i.e. it does not fail more frequently than expected). Next to the prediction of failure, it is just as much about the prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability or RAMS is preferred by some), and integrated logistics support. Reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering. Risk management, the practice of assessing and dealing with risk, is one of the interdisciplinary parts of systems engineering. In development, acquisition, or operational activities, the inclusion of risk in tradeoffs with cost, schedule, and performance features involves the iterative and complex configuration management of traceability and evaluation to the scheduling and requirements management across domains and for the system lifecycle, which requires the interdisciplinary technical approach of systems engineering.
Systems engineering has risk management define, tailor, implement, and monitor a structured process for risk management, integrated into the overall effort.[53] The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems. Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as authentication of system users, system targets, and others: people, objects, and processes. From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in handling the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods, and processes of systems engineering.
https://en.wikipedia.org/wiki/Systems_engineering
In the mathematical discipline of graph theory, a 3-dimensional matching is a generalization of bipartite matching (also known as 2-dimensional matching) to 3-partite hypergraphs, in which each hyperedge contains 3 vertices (instead of the 2 vertices per edge of a usual graph). 3-dimensional matching, often abbreviated as 3DM, is also the name of a well-known computational problem: finding a largest 3-dimensional matching in a given hypergraph. 3DM is one of the first problems that were proved to be NP-hard. Let X, Y, and Z be finite sets, and let T be a subset of X × Y × Z. That is, T consists of triples (x, y, z) such that x ∈ X, y ∈ Y, and z ∈ Z. Now M ⊆ T is a 3-dimensional matching if the following holds: for any two distinct triples (x1, y1, z1) ∈ M and (x2, y2, z2) ∈ M, we have x1 ≠ x2, y1 ≠ y2, and z1 ≠ z2. The figure on the right illustrates 3-dimensional matchings. The set X is marked with red dots, Y is marked with blue dots, and Z is marked with green dots. Figure (a) shows the set T (gray areas). Figure (b) shows a 3-dimensional matching M with |M| = 2, and Figure (c) shows a 3-dimensional matching M with |M| = 3. The matching M illustrated in Figure (c) is a maximum 3-dimensional matching, i.e., it maximises |M|. The matchings illustrated in Figures (b)–(c) are maximal 3-dimensional matchings, i.e., they cannot be extended by adding more elements from T. A 2-dimensional matching can be defined in a completely analogous manner. Let X and Y be finite sets, and let T be a subset of X × Y. Now M ⊆ T is a 2-dimensional matching if the following holds: for any two distinct pairs (x1, y1) ∈ M and (x2, y2) ∈ M, we have x1 ≠ x2 and y1 ≠ y2. In the case of 2-dimensional matching, the set T can be interpreted as the set of edges in a bipartite graph G = (X, Y, T); each edge in T connects a vertex in X to a vertex in Y. A 2-dimensional matching is then a matching in the graph G, that is, a set of pairwise non-adjacent edges.
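The definition above translates directly into a small check: a set of triples is a 3-dimensional matching exactly when no coordinate value repeats. The function name and the example sets below are illustrative:

```python
def is_3d_matching(m):
    """Check whether a set of triples is a 3-dimensional matching:
    no two distinct triples may agree in any coordinate."""
    xs, ys, zs = zip(*m) if m else ((), (), ())
    # all |M| x-values, y-values, and z-values must be pairwise distinct
    return len(set(xs)) == len(set(ys)) == len(set(zs)) == len(m)

# A small instance T and two candidate subsets.
M_good = {(1, "a", "p"), (3, "c", "r")}   # all coordinates distinct
M_bad = {(1, "a", "p"), (1, "b", "q")}    # both triples share x = 1
print(is_3d_matching(M_good), is_3d_matching(M_bad))  # True False
```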
Hence 3-dimensional matchings can be interpreted as a generalization of matchings to hypergraphs: the sets X, Y, and Z contain the vertices, each element of T is a hyperedge, and the set M consists of pairwise non-adjacent edges (edges that do not have a common vertex). A 3-dimensional matching is a special case of a set packing: we can interpret each element (x, y, z) of T as a subset {x, y, z} of X ∪ Y ∪ Z; then a 3-dimensional matching M consists of pairwise disjoint subsets. In computational complexity theory, 3-dimensional matching (3DM) is the name of the following decision problem: given a set T and an integer k, decide whether there exists a 3-dimensional matching M ⊆ T with |M| ≥ k. This decision problem is known to be NP-complete; it is one of Karp's 21 NP-complete problems.[1] It is NP-complete even in the special case that k = |X| = |Y| = |Z| and each element is contained in at most 3 sets, i.e., when we want a perfect matching in a 3-regular hypergraph.[1][2][3] In this case, a 3-dimensional matching is not only a set packing but also an exact cover: the set M covers each element of X, Y, and Z exactly once.[4] The proof is by reduction from 3SAT: given a 3SAT instance, one constructs a corresponding 3DM instance.[2][5] There exist polynomial-time algorithms for solving 3DM in dense hypergraphs.[6][7] A maximum 3-dimensional matching is a largest 3-dimensional matching. In computational complexity theory, this is also the name of the following optimization problem: given a set T, find a 3-dimensional matching M ⊆ T that maximizes |M|. Since the decision problem described above is NP-complete, this optimization problem is NP-hard, and hence it seems that there is no polynomial-time algorithm for finding a maximum 3-dimensional matching. However, there are efficient polynomial-time algorithms for finding a maximum bipartite matching (maximum 2-dimensional matching), for example, the Hopcroft–Karp algorithm.
There is a very simple polynomial-time 3-approximation algorithm for 3-dimensional matching: find any maximal 3-dimensional matching.[8] Just like a maximal matching is within a factor 2 of a maximum matching,[9] a maximal 3-dimensional matching is within a factor 3 of a maximum 3-dimensional matching. For any constant ε > 0 there is a polynomial-time (4/3 + ε)-approximation algorithm for 3-dimensional matching.[10] However, attaining better approximation factors is probably hard: the problem is APX-complete, that is, it is hard to approximate within some constant.[11][12][8] It is NP-hard to achieve an approximation factor of 95/94 for maximum 3-dimensional matching, and an approximation factor of 48/47 for maximum 4-dimensional matching. The hardness remains even when restricted to instances with exactly two occurrences of each element.[13] There are various algorithms for 3-dimensional matching in the massively parallel communication model.[14]
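The simple 3-approximation can be sketched as a greedy scan: keep adding triples that conflict with none chosen so far, and stop when no more fit. The function name and the example instance are illustrative:

```python
def greedy_maximal_matching(triples):
    """Greedily build a maximal 3-dimensional matching.

    Scans the triples once, keeping each one that shares no coordinate
    with a previously kept triple. The result is maximal (nothing else
    can be added), hence within a factor 3 of a maximum matching.
    """
    used_x, used_y, used_z = set(), set(), set()
    matching = []
    for x, y, z in triples:
        if x not in used_x and y not in used_y and z not in used_z:
            matching.append((x, y, z))
            used_x.add(x)
            used_y.add(y)
            used_z.add(z)
    return matching

# Small instance: after the first triple is taken, every other triple
# conflicts with it in some coordinate.
T = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
print(greedy_maximal_matching(T))  # → [(1, 1, 1)]
```

The factor-3 bound holds because each chosen triple can block at most three triples of an optimal matching (one per coordinate).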
https://en.wikipedia.org/wiki/3-dimensional_matching
A gender symbol is a pictogram or glyph used to represent sex and gender, for example in biology and medicine, in genealogy, or in the sociological fields of gender politics, LGBT subculture and identity politics. In his books Mantissa Plantarum (1767) and Mantissa Plantarum Altera (1771), Carl Linnaeus regularly used the planetary symbols of Mars, Venus and Mercury – ♂, ♀, ☿ – for male, female and hermaphroditic (perfect) flowers, respectively.[1] Botanists now use ⚥ for the last.[2] In genealogy, including kinship in anthropology and pedigrees in animal husbandry, the geometric shapes △ or □ are used for male and ○ for female. These are also used on public toilets in some countries. The modern international pictograms used to indicate male and female public toilets, 🚹︎ and 🚺︎, became widely used in the 1960s and 1970s. They are sometimes abstracted to ▽ for male and △ for female.[3] The three standard sex symbols in biology are male ♂, female ♀ and hermaphroditic ⚥; originally the symbol for Mercury, ☿, was used for the last. These symbols were first used by Carl Linnaeus in 1751 to denote whether flowers were male (stamens only), female (pistil only) or perfect flowers with both pistils and stamens.[1] (Most flowering and conifer plant species are hermaphroditic and either bear flowers/cones that themselves are hermaphroditic, or bear both male and female flowers/cones on the same plant.) These symbols are now ubiquitous in biology and medicine to indicate the sex of an individual, for example of a patient.[4][a] Kinship charts use a triangle △ for male and a circle ○ for female.[6] Pedigree charts published in scientific papers use an earlier anthropological convention of a square □ for male and a circle ○ for female.[7] Before a shape distinction was adopted, all individuals had been represented by a circle in Morgan's 1871 System of Consanguinity and Affinity of the Human Family, where gender is encoded in the abbreviations for the kin relation (e.g. M for 'mother' and F for 'father').[8] W. H. R.
Rivers distinguished gender in the words of the language being recorded by writing male kinship terms in all capitals and female kinship terms with normal capitalization. That convention was quite influential for a time, and his convention of prioritizing male kin by placing them to the left and females to the right continues to this day, though there have been exceptions, such as Margaret Mead, who placed females to the left.[9] The modern gender symbols used for public toilets, 🚹︎ for male and 🚺︎ for female, are pictograms created for the British Rail system in the mid-1960s.[10] Before that, local usage had been more variable. For example, schoolhouse outhouses in the 19th-century United States had ventilation holes in their doors that were shaped like a starburst Sun ✴ or like a crescent Moon ☾, respectively, to indicate whether the toilet was for use by boys or girls.[11] The British Rail pictograms – often color-coded blue and red[citation needed] – are now the norm for marking public toilets in much of the world, with the female symbol distinguished by a triangular skirt or dress, and in early years (and sometimes still) the male symbol stylized like a tuxedo.[3] These symbols are abstracted to varying degrees in different countries – for example, the circle-and-triangle variants (male) and (female) commonly found on portable toilets, sometimes abstracted further to a triangle △ (representing a skirt or dress) for female and an inverted triangle ▽ (representing a broad-shouldered tuxedo) for male in Lithuania.[3] In elementary schools, the pictograms may be of children rather than of adults, with the girl distinguished by her hair.
In themed locations, such as bars and tourist attractions, a thematic image or figurine of a man and woman or boy and girl may be used.[citation needed] In Poland, an inverted triangle ▽ is used for male while a circle ○ is used for female.[3] In mainland China, silhouettes of heads in profile may be used as gender pictograms,[citation needed] generally alongside the Chinese characters for male (男) and female (女).[12] Some contemporary designs for restroom signage in public spaces are shifting away from symbols that present gender as binary, as a way to be more inclusive.[13][14] Since the 1970s, variations of gender symbols have been used to express sexual orientation and gender politics. Two interlocking male symbols ⚣ are used to represent gay men, while two interlocking female symbols ⚢ are often used to represent lesbians.[15] Two female and two male symbols interlocked represent bisexuality, while an interlocked female and male symbol ⚤ represents heterosexuality.[16] The combined male–female symbol ⚥ is used to represent androgyne people;[17] when additionally combined with the female ♀ and male ♂ symbols to create the symbol ⚧, it indicates gender inclusivity,[citation needed] though it is also used as a transgender symbol.[18][19][17] The male-with-stroke symbol ⚦ is used for transgender people.[17] The Mercury symbol ☿ and the combined female/male symbol ⚥ have both been used to represent intersex people.[20][16] The alchemical symbol for sublimate of antimony 🜬 is used to represent non-binary people. The neuter symbol ⚲ is also used to represent non-binary people, especially those who are neutrois or of a neutral gender.[16] A featureless circle ⚪︎ is also used to represent non-binary people, especially those who are agender or genderless, as well as asexuality.[21][16] Since the 2000s, numerous variants of gender symbols have been introduced in the context of LGBT culture and politics.[16] Some of these symbols have been adopted into Unicode (in the Miscellaneous Symbols block) beginning with version 4.1 in 2005.
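Since several of these symbols live in the Unicode Miscellaneous Symbols block (U+2600–U+26FF), their codepoints can be listed programmatically. A small sketch (the selection of symbols is illustrative, not exhaustive):

```python
# A few gender symbols from the Miscellaneous Symbols block (U+2600-U+26FF).
symbols = {
    "male": "\u2642",             # ♂
    "female": "\u2640",           # ♀
    "mercury": "\u263F",          # ☿
    "male and female": "\u26A5",  # ⚥
    "transgender": "\u26A7",      # ⚧
}
for name, ch in symbols.items():
    print(f"{name}: {ch} U+{ord(ch):04X}")
```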
https://en.wikipedia.org/wiki/Gender_symbol
Likelihoodist statistics or likelihoodism is an approach to statistics that exclusively or primarily uses the likelihood function. Likelihoodist statistics is a more minor school than the main approaches of Bayesian statistics and frequentist statistics, but has some adherents and applications. The central idea of likelihoodism is the likelihood principle: data are interpreted as evidence, and the strength of the evidence is measured by the likelihood function. Beyond this, there are significant differences within likelihood approaches: "orthodox" likelihoodists consider data only as evidence, and do not use it as the basis of statistical inference, while others make inferences based on likelihood, but without using Bayesian inference or frequentist inference. Likelihoodism is thus criticized for either not providing a basis for belief or action (if it fails to make inferences), or not satisfying the requirements of these other schools. The likelihood function is also used in Bayesian statistics and frequentist statistics, but they differ in how it is used. Some likelihoodists consider their use of likelihood as an alternative to other approaches, while others consider it complementary and compatible with other approaches; see § Relation with other theories. While likelihoodism is a distinct approach to statistical inference, it can be related to or contrasted with other theories and methodologies in statistics. While likelihood-based statistics have been widely used and have many advantages, they are not without criticism. Likelihoodism as a distinct school dates to Edwards (1972), which gives a systematic treatment of statistics based on likelihood. This built on significant earlier work; see Dempster (1972) for a contemporary review.
While comparing ratios of probabilities dates to early statistics and probability, notably Bayesian inference as developed by Pierre-Simon Laplace from the late 1700s, likelihood as a distinct concept is due to Ronald Fisher in Fisher (1921). Likelihood played an important role in Fisher's statistics, but he developed and used many non-likelihood frequentist techniques as well. His late writings, notably Fisher (1955), emphasize likelihood more strongly, and can be considered a precursor to a systematic theory of likelihoodism. The likelihood principle was proposed in 1962 by several authors, notably Barnard, Jenkins & Winsten (1962), Birnbaum (1962), and Savage (1962), and followed by the law of likelihood in Hacking (1965); these laid the foundation for likelihoodism. See Likelihood principle § History for early history. While Edwards's version of likelihoodism considered likelihood only as evidence, a position followed by Royall (1997), others proposed inference based only on likelihood, notably as extensions of maximum likelihood estimation. Notable is John Nelder, who declared in Nelder (1999, p. 264): At least once a year I hear someone at a meeting say that there are two modes of inference: frequentist and Bayesian. That this sort of nonsense should be so regularly propagated shows how much we have to do. To begin with there is a flourishing school of likelihood inference, to which I belong. Textbooks that take a likelihoodist approach include the following: Kalbfleisch (1985), Azzalini (1996), Pawitan (2001), Rohde (2014), and Held & Sabanés Bové (2014). A collection of relevant papers is given by Taper & Lele (2004).
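The law of likelihood mentioned above measures the strength of evidence by the ratio of likelihoods under two competing hypotheses. A minimal sketch in Python (the coin-flip data and the two hypothesized success probabilities are illustrative assumptions, not from the text):

```python
import math

def binomial_likelihood(p: float, successes: int, trials: int) -> float:
    """Likelihood of success probability p given binomial data."""
    return math.comb(trials, successes) * p**successes * (1 - p)**(trials - successes)

# Hypothetical data: 7 heads in 10 flips; compare two simple hypotheses.
L_fair   = binomial_likelihood(0.5, 7, 10)   # hypothesis: fair coin
L_biased = binomial_likelihood(0.7, 7, 10)   # hypothesis: biased coin

# The likelihood ratio quantifies how strongly the data favor one simple
# hypothesis over the other (the law of likelihood).
ratio = L_biased / L_fair
print(f"likelihood ratio = {ratio:.3f}")
```

Note that only the ratio matters here; likelihoodists attach no meaning to the absolute value of a single likelihood.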
https://en.wikipedia.org/wiki/Likelihoodist_statistics
The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS). The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), and FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST has also issued Draft FIPS Publication 202, the SHA-3 Standard, separate from the Secure Hash Standard (SHS). In the table below, internal state means the "internal hash sum" after each compression of a data block. All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
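Most members of the SHA family are available in standard library implementations; a minimal sketch in Python using the built-in hashlib module (the "abc" digest checked at the end is the well-known SHA-256 test vector from FIPS 180-2):

```python
import hashlib

# Digest the same message with several members of the SHA family.
message = b"abc"
for name in ("sha1", "sha256", "sha512"):
    digest = hashlib.new(name, message).hexdigest()
    print(f"{name}: {digest}")

# SHA-256 of "abc" is a standard test vector from FIPS 180-2.
assert hashlib.sha256(b"abc").hexdigest() == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
)
```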
https://en.wikipedia.org/wiki/Secure_Hash_Standard
In mathematics, a tubular neighborhood of a submanifold of a smooth manifold is an open set around it resembling the normal bundle. The idea behind a tubular neighborhood can be explained in a simple example. Consider a smooth curve in the plane without self-intersections. On each point on the curve draw a line perpendicular to the curve. Unless the curve is straight, these lines will intersect among themselves in a rather complicated fashion. However, if one looks only in a narrow band around the curve, the portions of the lines in that band will not intersect, and will cover the entire band without gaps. This band is a tubular neighborhood. In general, let S be a submanifold of a manifold M, and let N be the normal bundle of S in M. Here S plays the role of the curve and M the role of the plane containing the curve. Consider the natural map i : N_0 → M, which establishes a bijective correspondence between the zero section N_0 of N and the submanifold S of M. An extension j of this map to the entire normal bundle N with values in M, such that j(N) is an open set in M and j is a homeomorphism between N and j(N), is called a tubular neighbourhood. Often one calls the open set T = j(N), rather than j itself, a tubular neighbourhood of S; it is then assumed implicitly that the homeomorphism j mapping N to T exists. A normal tube to a smooth curve is a manifold defined as the union of all discs of the same fixed radius whose centers lie on the curve, each disc lying in a plane normal to the curve at its center. Let S ⊆ M be smooth manifolds.
A tubular neighborhood of S in M is a vector bundle π : E → S together with a smooth map J : E → M such that J ∘ 0_E = i, where 0_E : S → E is the zero section and i : S → M the inclusion, and such that J is a diffeomorphism from E onto an open neighborhood of S in M. The normal bundle is a tubular neighborhood, and because of the diffeomorphism condition in the second point, all tubular neighborhoods have the same dimension, namely (the dimension of the vector bundle considered as a manifold is) that of M. Generalizations of smooth manifolds yield generalizations of tubular neighborhoods, such as regular neighborhoods, or spherical fibrations for Poincaré spaces. These generalizations are used to produce analogs to the normal bundle, or rather to the stable normal bundle, which are replacements for the tangent bundle (which does not admit a direct description for these spaces).
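As a concrete instance of the definition, the tubular neighborhood of the unit circle in the plane can be written down explicitly (a standard example, not taken from the text above; the open interval (−1, 1) is used in place of the full fiber ℝ, to which it is diffeomorphic):

```latex
% Tubular neighborhood of S^1 \subset \mathbb{R}^2.
% The normal bundle of the circle is trivial, N \cong S^1 \times \mathbb{R},
% and the map
\[
  J : S^1 \times (-1, 1) \to \mathbb{R}^2, \qquad
  J(\theta, t) = (1 + t)\,(\cos\theta,\ \sin\theta)
\]
% is a diffeomorphism onto the open annulus
\[
  J\bigl(S^1 \times (-1, 1)\bigr)
  = \{\, x \in \mathbb{R}^2 : 0 < \lVert x \rVert < 2 \,\}.
\]
% Restricted to t = 0, J is the inclusion of the circle, and each normal
% line segment is covered exactly once, so the annulus is a tubular
% neighborhood of S^1.
```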
https://en.wikipedia.org/wiki/Tubular_neighborhood
Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored,[1] such as in the vicinity of black holes or similar compact astrophysical objects, as well as in the early stages of the universe moments after the Big Bang.[2] Three of the four fundamental forces of nature are described within the framework of quantum mechanics and quantum field theory: the electromagnetic interaction, the strong force, and the weak force; this leaves gravity as the only interaction that has not been fully accommodated. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which incorporates his theory of special relativity and deeply modifies the understanding of concepts like time and space. Although general relativity is highly regarded for its elegance and accuracy, it has limitations: the gravitational singularities inside black holes, the ad hoc postulation of dark matter, as well as dark energy and its relation to the cosmological constant are among the current unsolved mysteries regarding gravity,[3] all of which signal the collapse of the general theory of relativity at different scales and highlight the need for a gravitational theory that goes into the quantum realm. At distances close to the Planck length, like those near the center of a black hole, quantum fluctuations of spacetime are expected to play an important role.[4] Finally, the discrepancies between the predicted value for the vacuum energy and the observed values (which, depending on considerations, can be 60 or 120 orders of magnitude)[5][6] highlight the necessity for a quantum theory of gravity.
The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem of quantum gravity, the most popular being M-theory and loop quantum gravity.[7] All of these approaches aim to describe the quantum behavior of the gravitational field, which does not necessarily include unifying all fundamental interactions into a single mathematical framework. However, many approaches to quantum gravity, such as string theory, try to develop a framework that describes all fundamental forces. Such a theory is often referred to as a theory of everything. Some of the approaches, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Other lesser-known but no less important theories include causal dynamical triangulation, noncommutative geometry, and twistor theory.[8] One of the difficulties of formulating a quantum gravity theory is that direct observation of quantum gravitational effects is thought to appear only at length scales near the Planck scale, around 10^−35 meters, a scale far smaller, and hence accessible only with far higher energies, than those currently available in high-energy particle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories which have been proposed.[n.b. 1][n.b. 2] Thought experiment approaches have been suggested as a testing tool for quantum gravity theories.[9][10] In the field of quantum gravity there are several open questions – e.g., it is not known how the spin of elementary particles sources gravity, and thought experiments could provide a pathway to explore possible resolutions to these questions,[11] even in the absence of lab experiments or physical observations.
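The Planck scale quoted above follows from dimensional analysis alone: combining the constants governing quantum theory, gravity, and relativity yields a unique length and energy (a standard estimate, not derived in the text):

```latex
\[
  \ell_P \;=\; \sqrt{\frac{\hbar G}{c^{3}}}
         \;\approx\; 1.6 \times 10^{-35}\ \text{m},
\]
together with the associated Planck energy
\[
  E_P \;=\; \sqrt{\frac{\hbar c^{5}}{G}}
      \;\approx\; 1.2 \times 10^{19}\ \text{GeV},
\]
some fifteen orders of magnitude beyond the reach of current particle
accelerators.
```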
In the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades.[12][13][14][15] This field of study is called phenomenological quantum gravity. Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make about how the universe works. General relativity models gravity as curvature of spacetime: in the slogan of John Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve."[16] On the other hand, quantum field theory is typically formulated in the flat spacetime used in special relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. If one attempts to treat gravity as simply another quantum field, the resulting theory is not renormalizable.[17] Even in the simpler case where the curvature of spacetime is fixed a priori, developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable.[18] It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes, and the origin of the universe.[1] One major obstacle is that for quantum field theory in curved spacetime with a fixed metric, bosonic/fermionic operator fields supercommute for spacelike separated points. (This is a way of imposing a principle of locality.) However, in quantum gravity, the metric is dynamical, so that whether two points are spacelike separated depends on the state.
In fact, they can be in a quantum superposition of being spacelike and not spacelike separated.[citation needed] The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the graviton. These particles act as a force particle similar to the photon of the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires them to follow the quantum mechanical description of interacting theoretical spin-2 massless particles.[19][20][21][22][23] Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle.[24][25] While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly.[26] General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory. However, gravity is perturbatively nonrenormalizable.[27][28] For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale. On the other hand, in quantizing gravity there are, in perturbation theory, infinitely many independent parameters (counterterm coefficients) needed to define the theory.
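The nonrenormalizability described above can be made plausible by simple power counting; a standard dimensional-analysis sketch (not a derivation from the text):

```latex
In units with $\hbar = c = 1$, Newton's constant has negative mass dimension,
\[
  G_N \;=\; \frac{1}{M_P^{2}}, \qquad [G_N] = -2,
\]
so the effective dimensionless coupling of graviton exchange at energy $E$
grows with energy as
\[
  \alpha_{\text{grav}}(E) \;\sim\; G_N E^{2}
  \;=\; \left(\frac{E}{M_P}\right)^{2}.
\]
Each additional loop therefore brings extra powers of $E/M_P$, producing new
ultraviolet divergences that require new counterterms at every order, and
hence infinitely many parameters in all.
```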
For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinitely many experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then every one of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all.[29] It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult; it is pursued in the asymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries.[30][better source needed] In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects.
Thus, at least in the low-energy regime, the model is a predictive quantum field theory.[31] Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally.[32] By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses.[31] Another example is the calculation of the corrections to the Bekenstein–Hawking entropy formula.[33][34] A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is a complex idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory,[35] in which the only physically relevant information is the relationship between different events in spacetime. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory. String theory can be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to space-time in a dynamic way.
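The effective-field-theory correction to the Newtonian potential mentioned above has a definite form; quoted here as the standard one-loop result of Donoghue and collaborators, not derived in the text:

```latex
\[
  V(r) \;=\; -\,\frac{G\,m_1 m_2}{r}
  \left[\, 1
    \;+\; 3\,\frac{G\,(m_1 + m_2)}{r\,c^{2}}
    \;+\; \frac{41}{10\pi}\,\frac{G\,\hbar}{r^{2} c^{3}}
  \,\right],
\]
where the second term is a classical post-Newtonian correction and the third
is the genuine quantum correction, utterly negligible at laboratory
distances.
```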
Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory that may exhibit a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence), which is a weak form of background dependence. Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory. Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks.[citation needed] Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation. Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles).
A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time.[36] In contrast, general relativity treats time as a dynamical variable which relates directly with matter and moreover requires the Hamiltonian constraint to vanish.[37] Because this variability of time has been observed macroscopically, it removes any possibility of employing a fixed notion of time, similar to the conception of time in quantum theory, at the macroscopic level. There are a number of proposed quantum gravity theories.[38] Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments become available.[39][40] The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory.[41] At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges.
In this way, string theory promises to be a unified description of all particles and interactions.[42] The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time.[43] In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[44] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[45][46] As presently understood, however, string theory admits a very large number (10^500 by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge. Loop quantum gravity takes seriously general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space. The main result of loop quantum gravity is that there is a granular structure of space at the Planck length. This is derived from the following considerations: in the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum; thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Thus the area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space.
It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory. The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks. Spin networks were initially introduced by Roger Penrose in abstract form, and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime. The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields.[47][48] In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps.[49][50][51][52] The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity. The analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined within the theory.[53] In the covariant, or spinfoam, formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks. There are a number of other approaches to quantum gravity. The theories differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified.[54][55] As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s.
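The discrete area spectrum underlying this granularity has an explicit form in loop quantum gravity; quoted here as the standard result, with γ denoting the Barbero–Immirzi parameter:

```latex
\[
  A \;=\; 8\pi \gamma\, \ell_P^{2} \sum_{i} \sqrt{j_i\,(j_i + 1)},
\]
where the sum runs over the spin-network links $i$ puncturing the surface,
each carrying a half-integer spin $j_i$; the smallest nonzero eigenvalue is
of order $\ell_P^{2}$, the Planck area.
```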
However, since the 2000s, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory. Since theoretical development has been slow, the field of phenomenological quantum gravity, which studies the possibility of experimental tests, has obtained increased attention.[60] The most widely pursued possibilities for quantum gravity phenomenology include gravitationally mediated entanglement,[61][62] violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations[63][64][65] in the space-time foam.[66] The latter scenario has been searched for in light from gamma-ray bursts and in both astrophysical and atmospheric neutrinos, placing limits on phenomenological quantum gravity parameters.[67][68][69] ESA's INTEGRAL satellite measured the polarization of photons of different wavelengths and was able to place a limit on the granularity of space of less than 10^−48 m, or 13 orders of magnitude below the Planck scale.[70][71][better source needed] The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due to interstellar dust interference.[72]
https://en.wikipedia.org/wiki/Quantum_gravity
In bioinformatics, the root mean square deviation of atomic positions, or simply root mean square deviation (RMSD), is the measure of the average distance between the atoms (usually the backbone atoms) of superimposed molecules.[1] In the study of globular protein conformations, one customarily measures the similarity in three-dimensional structure by the RMSD of the Cα atomic coordinates after optimal rigid body superposition. When a dynamical system fluctuates about some well-defined average position, the RMSD from the average over time can be referred to as the RMSF or root mean square fluctuation. The size of this fluctuation can be measured, for example using Mössbauer spectroscopy or nuclear magnetic resonance, and can provide important physical information. The Lindemann index is a method of placing the RMSF in the context of the parameters of the system. A widely used way to compare the structures of biomolecules or solid bodies is to translate and rotate one structure with respect to the other to minimize the RMSD. Coutsias et al. presented a simple derivation, based on quaternions, for the optimal solid body transformation (rotation-translation) that minimizes the RMSD between two sets of vectors.[2] They proved that the quaternion method is equivalent to the well-known Kabsch algorithm.[3] The solution given by Kabsch is an instance of the solution of the d-dimensional problem, introduced by Hurley and Cattell.[4] The quaternion solution to compute the optimal rotation was published in the appendix of a paper of Petitjean.[5] This quaternion solution and the calculation of the optimal isometry in the d-dimensional case were both extended to infinite sets and to the continuous case in appendix A of another paper of Petitjean.[6] The RMSD takes the form $\mathrm{RMSD} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\delta_i^{2}}$, where δ_i is the distance between atom i and either a reference structure or the mean position of the N equivalent atoms. This is often calculated for the backbone heavy atoms C, N, O, and Cα, or sometimes just the Cα atoms.
Normally a rigid superposition which minimizes the RMSD is performed, and this minimum is returned. Given two sets of n points v and w, the RMSD is defined as $\mathrm{RMSD}(\mathbf{v},\mathbf{w}) = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\lVert v_i - w_i\rVert^{2}}$. An RMSD value is expressed in length units. The most commonly used unit in structural biology is the Ångström (Å), which is equal to 10^−10 m. Typically RMSD is used as a quantitative measure of similarity between two or more protein structures. For example, the CASP protein structure prediction competition uses RMSD as one of its assessments of how well a submitted structure matches the known, target structure. Thus the lower the RMSD, the better the model is in comparison to the target structure. Also, some scientists who study protein folding by computer simulations use RMSD as a reaction coordinate to quantify where the protein is between the folded state and the unfolded state. The study of RMSD for small organic molecules (commonly called ligands when they bind to macromolecules such as proteins) is common in the context of docking,[1] as well as in other methods to study the configuration of ligands when bound to macromolecules. Note that, for the case of ligands (contrary to proteins, as described above), their structures are most commonly not superimposed prior to the calculation of the RMSD. RMSD is also one of several metrics that have been proposed for quantifying evolutionary similarity between proteins, as well as the quality of sequence alignments.[7][8]
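The definition above translates directly into code; a minimal sketch in Python with plain lists of 3-D coordinates, and without the rigid superposition step (as in the ligand case described above):

```python
import math

def rmsd(v, w):
    """Root mean square deviation between two equal-length sets of 3-D points."""
    if len(v) != len(w):
        raise ValueError("point sets must have the same number of atoms")
    squared = sum(
        (vx - wx) ** 2 + (vy - wy) ** 2 + (vz - wz) ** 2
        for (vx, vy, vz), (wx, wy, wz) in zip(v, w)
    )
    return math.sqrt(squared / len(v))

# Toy coordinates (in Ångströms): the second set is the first shifted by 1 Å in x.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
b = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(rmsd(a, b))  # a uniform 1 Å shift gives an RMSD of exactly 1.0
```

For protein comparison, this calculation would be preceded by an optimal superposition (e.g. the Kabsch algorithm mentioned above), which is omitted here.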
https://en.wikipedia.org/wiki/Root-mean-square_deviation_of_atomic_positions
Scheduling is the process of arranging, controlling and optimizing work and workloads in a production process or manufacturing process. Scheduling is used to allocate plant and machinery resources, plan human resources, plan production processes and purchase materials. It is an important tool for manufacturing and engineering, where it can have a major impact on the productivity of a process. In manufacturing, the purpose of scheduling is to keep the due dates of customers and then minimize production time and costs, by telling a production facility when to make, with which staff, and on which equipment. Production scheduling aims to maximize the efficiency of the operation, utilize the maximum resources available and reduce costs. In some situations, scheduling can involve random attributes, such as random processing times, random due dates, random weights, and stochastic machine breakdowns. In this case, the scheduling problems are referred to as "stochastic scheduling". Companies use backward and forward scheduling to allocate plant and machinery resources, plan human resources, plan production processes and purchase materials. Production scheduling tools greatly outperform older manual scheduling methods. These provide the production scheduler with powerful graphical interfaces which can be used to visually optimize real-time workloads in various stages of production, and pattern recognition allows the software to automatically create scheduling opportunities which might not be apparent without this view into the data. For example, an airline might wish to minimize the number of airport gates required for its aircraft, in order to reduce costs, and scheduling software can allow the planners to see how this can be done, by analysing timetables, aircraft usage, or the flow of passengers.
A key characteristic of scheduling is productivity, the relation between the quantity of inputs and the quantity of output. Production scheduling can take a significant amount of computing power if there is a large number of tasks. Therefore, a range of short-cut algorithms (heuristics), also known as dispatching rules, are used. Batch production scheduling is the practice of planning and scheduling batch manufacturing processes. Although scheduling may apply to traditionally continuous processes such as refining,[1][2] it is especially important for batch processes such as those for pharmaceutical active ingredients, biotechnology processes and many specialty chemical processes.[3][4] Batch production scheduling shares some concepts and techniques with finite capacity scheduling, which has been applied to many manufacturing problems.[5]
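As a minimal illustration of a dispatching rule (not tied to any particular scheduling package), the Shortest Processing Time heuristic simply orders jobs by their processing times; on a single machine it minimizes total flow time. The job names and times below are hypothetical:

```python
def spt_schedule(jobs):
    """Order jobs by the Shortest Processing Time dispatching rule.

    jobs maps a job name to its processing time. Returns the job order
    and the total flow time (sum of completion times), which SPT
    minimizes on a single machine.
    """
    order = sorted(jobs, key=jobs.get)
    elapsed, total_flow = 0, 0
    for job in order:
        elapsed += jobs[job]       # this job completes at time `elapsed`
        total_flow += elapsed
    return order, total_flow

# Hypothetical jobs: three tasks with processing times in hours.
print(spt_schedule({"A": 3, "B": 1, "C": 2}))  # (['B', 'C', 'A'], 10)
```

Real dispatching rules trade off more criteria (due dates, weights, setup times), but they share this shape: a cheap sort or priority computation instead of an exhaustive search.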
https://en.wikipedia.org/wiki/Scheduling_(production_processes)
Etaoin shrdlu (/ˈɛti.ɔɪn ˈʃɜːrdluː/,[1] /ˈeɪtɑːn ʃrədˈluː/)[2] is a nonsense phrase that sometimes appeared by accident in print in the days of hot type publishing, resulting from a custom of type-casting machine operators filling out and discarding lines of type when an error was made. It appeared often enough to become part of newspaper lore – a documentary about the last issue of The New York Times composed using hot metal (July 2, 1978) was titled Farewell, Etaoin Shrdlu.[3] The phrase etaoin shrdlu is listed in the Oxford English Dictionary and in the Random House Webster's Unabridged Dictionary. The letters in the string are, approximately, the twelve most commonly used letters in the English language; differing sources give slightly different results, but one well-known sequence, ordered by frequency, is ETAOINS RHLDCUM.[4] The letters on type-casting machine keyboards (such as Linotype and Intertype) were arranged by descending letter frequency to speed up the mechanical operation of the machine, so lower-case e-t-a-o-i-n and s-h-r-d-l-u were the first two columns on the left side of the keyboard. Each key would cause a brass matrix (an individual letter mold) from the corresponding slot in a font magazine to drop and be added to a line mold. After a line had been cast, the constituent matrices of its mold were returned to the font magazine. If a mistake was made, the line could theoretically be corrected by hand in the assembler area. However, manipulating the matrices by hand within the partially assembled line was time-consuming and risked disturbing important adjustments. It was much quicker to fill out the bad line and discard the resulting line of text than to redo it properly. To make the line long enough to proceed through the machine, operators would finish it by running a finger down the first columns of the keyboard, which created a pattern that could be easily noticed by proofreaders.
Occasionally such a line would be overlooked and make its way into print. The phrase has gained enough notability to appear outside typography as well.
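The frequency ordering behind such a keyboard layout can be reproduced for any sample text with a few lines of Python; this is an illustrative sketch (function name ours), not how historical letter rankings were actually compiled:

```python
from collections import Counter

def letter_frequency_order(text):
    """Return the letters of *text*, most frequent first, ignoring case."""
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    return "".join(letter for letter, _ in counts.most_common())

# On a large English corpus the result tends to start with letters
# close to the familiar "etaoin shrdlu" sequence.
print(letter_frequency_order("the quick brown fox jumps over the lazy dog"))
```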
https://en.wikipedia.org/wiki/Etaoin_shrdlu
Hyperconnectivity is a term invented by Canadian social scientists Anabel Quan-Haase and Barry Wellman, arising from their studies of person-to-person and person-to-machine communication in networked organizations and networked societies.[1] The term refers to the use of multiple means of communication, such as email, instant messaging, telephone, face-to-face contact and Web 2.0 information services.[2] Hyperconnectivity is also a trend in computer networking in which all things that can or should communicate through the network will communicate through the network. This encompasses person-to-person, person-to-machine and machine-to-machine communication. The trend is fueling large increases in bandwidth demand and changes in communications because of the complexity, diversity and integration of new applications and devices using the network. The communications equipment maker Nortel recognized hyperconnectivity as a pervasive and growing market condition at the core of its business strategy; CEO Mike Zafirovski and other executives were quoted extensively in the press referring to the hyperconnected era. Apart from network-connected devices such as landline telephones, mobile phones and computers, newly connectable devices range from mobile devices such as PDAs, MP3 players, GPS receivers and cameras to an ever wider collection of machines, including cars,[3][4] refrigerators[5][6] and coffee makers,[7] all equipped with embedded wireline or wireless[8] networking capabilities.[9] IP-enabling every device runs up against a fundamental limitation of IP version 4, its small address space, and IPv6 is the enabling technology that supports this massive expansion in the number of addresses. There are other, independent uses of the term. Several facts and assertions have been cited to support the existence of this accelerating trend toward hyperconnectivity.
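The address-space gap between the two protocol versions is easy to check with Python's standard ipaddress module:

```python
import ipaddress

# The entire IPv4 and IPv6 address spaces, expressed as networks.
ipv4 = ipaddress.ip_network("0.0.0.0/0")
ipv6 = ipaddress.ip_network("::/0")

print(ipv4.num_addresses)              # 4294967296 (2**32): too few to
                                       # give every device its own address
print(ipv6.num_addresses == 2 ** 128)  # True: room for massive growth
```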
https://en.wikipedia.org/wiki/Hyperconnectivity
In music, counting is a system of regularly occurring sounds that serve to assist with the performance or audition of music by allowing the easy identification of the beat. Commonly, this involves verbally counting the beats in each measure as they occur, whether there be 2 beats, 3 beats, 4 beats, or even 5 beats. In addition to helping to normalize the time taken up by each beat, counting allows easier identification of the beats that are stressed. Counting is most commonly used with rhythm (often to decipher a difficult rhythm) and form, and often involves subdivision. The method involving numbers may be termed count chant, "to identify it as a unique instructional process."[1] In lieu of simply counting the beats of a measure, other systems can be used which may be more appropriate to the particular piece of music. Depending on the tempo, the divisions of a beat may be vocalized as well (for slower times), or numbers may be skipped altogether (for faster times). As an alternative to counting, a metronome can be used to accomplish the same function. Triple meter, such as 3/4, is often counted 1 2 3, while compound meter, such as 6/8, is often counted in two and subdivided "One-and-ah-Two-and-ah"[2] but may be articulated as "One-la-lee-Two-la-lee".[2] For each subdivision employed, a new syllable is used. For example, sixteenth notes in 4/4 are counted 1 e & a 2 e & a 3 e & a 4 e & a, using numbers for the quarter note, "&" for the eighth note, and "e" and "a" for the sixteenth-note level. Triplets may be counted "1 tri ple 2 tri ple 3 tri ple 4 tri ple" and sixteenth-note triplets "1 la li + la li 2 la li + la li".[3] Quarter-note triplets, due to their different rhythmic feel, may be articulated differently, as "1 dra git 3 dra git".[3] Rather than numbers or nonsense syllables, a random word may be assigned to a rhythm to clearly count each beat.
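The number-based counting described above is regular enough to generate mechanically. The sketch below (function and table names are ours, purely illustrative) spells out the spoken syllables for a measure of simple meter:

```python
# Spoken syllables per beat at each subdivision level; an empty string
# stands for the beat number itself.
SUBDIVISIONS = {
    1: [""],                  # quarter notes: "1 2 3 4"
    2: ["", "&"],             # eighth notes: "1 & 2 & ..."
    4: ["", "e", "&", "a"],   # sixteenth notes: "1 e & a ..."
}

def count_measure(beats, level):
    """Spell out the spoken count for one measure of simple meter."""
    out = []
    for beat in range(1, beats + 1):
        for syllable in SUBDIVISIONS[level]:
            out.append(str(beat) if syllable == "" else syllable)
    return " ".join(out)

print(count_measure(4, 4))  # 1 e & a 2 e & a 3 e & a 4 e & a
```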
An example is with a triplet, where a triplet subdivision is often counted "tri-pl-et".[4] The Kodály Method uses "Ta" for quarter notes and "Ti-Ti" for eighth notes. For sextuplets, simply say the triplet syllables twice, while quintuplets may be articulated as "un-i-vers-i-ty" or other five-syllable words such as "hip-po-pot-a-mus".[4] In some "rote-before-note" approaches,[5] the fractional definitions of notes are not taught to children until after they are able to perform syllable- or phrase-based versions of these rhythms.[6] "However the counting may be syllabized, the important skill is to keep the pulse steady and the division exact."[2] There are various ways to count rhythm, from simple numbers to counting syllables to beat-placement syllables. Here are a few examples. Ultimately, musicians count using numbers, "ands" and vowel sounds. Downbeats within a measure are called 1, 2, 3… Upbeats are represented with a plus sign and are called "and" (i.e. 1 + 2 +), and further subdivisions receive the sounds "ee" and "uh" (i.e. 1 e + a 2 e + a). Musicians do not agree on what to call triplets: some simply say the word triplet ("trip-a-let"), or another three-syllable word (like pineapple or elephant) with an antepenultimate accent. Some use numbers along with the word triplet (i.e. "1-trip-let"). Still others have devised sounds like "ah-lee" or "la-li" added after the number (i.e. 1-la-li, 2-la-li or 1-tee-duh, 2-tee-duh). Example: The folk song lyric "This Old Man, he played one, he played knick-knack on my thumb, with a knick-knack paddy whack, give my dog a bone, this old man came rolling home" in 2/4 time would be said, "one and two one and two one and two and one and two and uh one and two ee and one ee and uh two one and two and one and two."
One system, "1 e and uh 2 e and uh 3 e and uh 4 e and uh", counts the beat number on the tactus, & on the half beat, n-e-&-a for four sixteenth notes, and n-&-a for a triplet or three eighth notes in compound meter, where n is the beat number.[7]
In a second system, the beat numbers are used for the tactus, te for the half beat, and n-ti-te-ta for four sixteenths. Triplets or three eighth notes in compound meter are n-la-li, and six sixteenth notes in compound meter are n-ta-la-ta-li-ta.[7]
A third counting system uses n-ne, n-ta-ne-ta, n-na-ni, and n-ta-na-ta-ni-ta. All three systems have internal consistency for all divisions of the beat except the tactus, which changes according to the beat number.[7]
Syllable systems are categorized as "beat function systems" when the tactus (pulse) always has a certain syllable A and the half-beat always a certain syllable B, regardless of how the rest of the measure is filled out.[8]
The "Galin-Paris-Chevé system", or French "Time-Names system", originally used French words. Toward the middle of the 19th century the American musician Lowell Mason (affectionately named the "Father of Music Education") adapted the French Time-Names system for use in the United States; instead of using the French names of the notes, he replaced these with a system that identified the value of each note within a meter and the measure.[9] Unusual meters pair the duple and triple meter syllables and employ the "b" consonant.
In another beat-function system, the beat is always called ta. In simple meters, the division and subdivision are always ta-di and ta-ka-di-mi. Any note value can be the beat, depending on the time signature. In compound meters (wherein the beat is generally notated with dotted notes), the division and subdivision are always ta-ki-da and ta-va-ki-di-da-ma. The note value does not receive a particular name; the note's position within the beat gets the name.
This system allows children to internalize a steady beat and to naturally discover the subdivisions of the beat, similar to the down-ee-up-ee system. Example: The folk song lyric "This Old Man, he played one, he played knick-knack on my thumb, with a knick-knack paddy whack, give my dog a bone, this old man came rolling home" would be said, "tadi ta tadi ta tadi tadi tadi tadimi tadi takadi takadimi ta tadi tadi tadi ta."
Eighth rest + eighth note = X-Di
Eighth note + two sixteenth notes = Taaa-Di-Mi
Two sixteenth notes + eighth note = Ta-Ka-Diii
Three eighth notes beamed together = Ta-Ki-Da
Eighth note + eighth rest + eighth note = Ta-X-Da
Six sixteenth notes = Ta-Va-Ki-Di-Da-Ma
Eighth note + four sixteenth notes = Ta-aa-Ki-Di-Da-Ma
Four sixteenth notes + eighth note = Ta-Va-Ki-Di-Da-aa
Two sixteenth notes + eighth note + two sixteenth notes = Ta-Va-Ki-ii-Da-Ma
This is a beat-function system used by some Kodály teachers that was developed by Laurdella Foulkes-Levy and was designed to be easier to say than Gordon's system or the Takadimi system while still honoring beat function. The beat is said as "Ta" in both duple and triple meters, but the beat divisions are performed differently between the two meters. The "t" consonant always falls on the main beat and beat division, and the "k" consonant always falls where the beat divides again. Alternating "t" and "k" in quick succession is easy to say, as they fall on two different parts of the tongue, making it very easy to say these syllables at a fast tempo (much like tonguing on a recorder or flute). It is also a logical system, since it always alternates between the same two consonants.
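The beat-position logic of the ta-di / ta-ka-di-mi syllables described above can be captured in a small lookup table; the encoding below is our illustrative sketch, not an official notation tool:

```python
# Syllables by (meter type, number of equal divisions of the beat),
# following the ta / ta-di / ta-ka-di-mi pattern described in the text.
BEAT_SYLLABLES = {
    ("simple", 1): ["ta"],
    ("simple", 2): ["ta", "di"],
    ("simple", 4): ["ta", "ka", "di", "mi"],
    ("compound", 3): ["ta", "ki", "da"],
    ("compound", 6): ["ta", "va", "ki", "di", "da", "ma"],
}

def beat_syllables(meter, divisions):
    """Join the syllables for one beat, e.g. ('simple', 4) -> 'ta-ka-di-mi'."""
    return "-".join(BEAT_SYLLABLES[(meter, divisions)])

print(beat_syllables("simple", 4))    # ta-ka-di-mi
print(beat_syllables("compound", 3))  # ta-ki-da
```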
In both duple and triple meter, this system allows the value of each note to be clearly represented no matter its placement within the beat or measure. Example: The folk song lyric "This Old Man, he played one, he played knick-knack on my thumb, with a knick-knack paddy whack, give my dog a bone, this old man came rolling home" would be said, "titi ta titi ta titi titi titi ti-tiri titi tiriti tiritiri ta titi titi titi ta".
In another system, beats are "down", up-beats are "up", and subdivisions are "ee". Example: The same folk song lyric would be said, "down up down down up down down up down up down up down up-ee down up down-ee-up down-ee-up-ee down down up down up down up down."
Orff rhythm syllables don't have a specified system. Often, teachers are encouraged to use whatever they prefer, and many choose to use the Kodály syllable system.[10] Outside of this, Orff teachers will often use a language-based model in which the rhythms are replaced with a word that matches the number of sounds in the rhythm. For example, two paired eighth notes may become "Jackie" or "Apple". Often, a teacher will stick with a theme and encourage students to create their own words within that theme.[11]
https://en.wikipedia.org/wiki/Counting_(music)
State media are typically understood as media outlets that are owned, operated, or significantly influenced by the government.[1] They are distinguished from public service media, which are designed to serve the public interest, operate independently of government control, and are financed through a combination of public funding, licensing fees, and sometimes advertising. The crucial difference lies in the level of independence from government influence and the commitment to serving a broad public interest rather than the interests of a specific political party or government agenda.[1][2][3] State media serve as tools for public diplomacy and narrative shaping. These media outlets can broadcast via television, radio, print, and increasingly social media, to convey government viewpoints to domestic and international audiences. The approach to using state media can vary, focusing on positive narratives, adjusting narratives retroactively, or spreading misinformation through sophisticated social media campaigns.[4] State media also refers to media entities that are administered, funded, managed, or directly controlled by the government of a country.[5] Three factors that can affect the independence of state media over time are funding, ownership/governance, and editorial autonomy.[5] These entities can range from being completely state-controlled, where the government has full control over their funding, management, and editorial content, to being independent public service media, which, despite receiving government funding, operate with editorial autonomy and are governed by structures designed to protect them from direct political interference.[5] State media are often associated with authoritarian governments that use state media to control, influence, and limit information.[6] Their content, according to some sources, is usually more prescriptive, telling the audience what to think, particularly as they are under no pressure to attract high ratings or generate
advertising revenue,[7] and therefore may cater to the forces in control of the state, as opposed to the forces in control of the corporation, as described in the propaganda model of the mass media. In more controlled regions, the state may censor content which it deems illegal, immoral or unfavorable to the government, and likewise regulate any programming related to the media; therefore, it is not independent of the governing party.[8] In this type of environment, journalists may be required to be members of or affiliated with the ruling party, as in the former socialist states of the Eastern Bloc, the Soviet Union, China or North Korea.[7] In countries with high levels of government interference in the media, the state may use the state press for propaganda purposes. Additionally, state-controlled media may only report on legislation after it has already become law, to stifle any debate.[9] The media legitimize their presence by emphasizing "national unity" against domestic or foreign "aggressors".[10] In more open and competitive contexts, the state may control or fund its own outlet, which is in competition with opposition-controlled and/or independent media. State media usually have less government control in more open societies and can provide more balanced coverage than media outside of state control.[11] State media outlets usually enjoy increased funding and subsidies compared to private media counterparts, but this can create inefficiency in the state media.[12] However, in the People's Republic of China, where state control of the media is high, levels of funding have been reduced for state outlets, which has forced Chinese Communist Party media to sidestep official restrictions on content or publish "soft" editions, such as weekend editions, to generate income.[13] State media can be classified based on their relationship to the state, including factors such as ownership, editorial independence, funding, and political alignment.
This framework is commonly used by media watchdogs and international organizations to assess press freedom, transparency, and the role of media in democratic or authoritarian regimes.[14][15][1] Two contrasting theories of state control of the media exist: the public interest, or Pigouvian, theory states that government ownership is beneficial, whereas the public choice theory suggests that state control undermines economic and political freedoms. The public interest theory, also referred to as the Pigouvian theory,[16] states that government ownership of media is desirable.[17] Three reasons are offered. Firstly, the dissemination of information is a public good, and to withhold it would be costly even if it is not paid for. Secondly, the cost of the provision and dissemination of information is high, but once costs are incurred, the marginal costs of providing the information are low, so it is subject to increasing returns.[18] Thirdly, state media ownership can be less biased, more complete and more accurate if consumers are ignorant, in addition to private media that would serve the governing classes.[18] However, Pigouvian economists, who advocate regulation and nationalisation, are supportive of free and private media.[19] Public interest theory holds that when operated correctly, government ownership of media is a public good that benefits the nation in question.[20] It contradicts the belief that all state media are propaganda and argues that most states require an unbiased, easily accessible, and reliable stream of information.[20] Public interest theory suggests that the only way to maintain an independent media is to cut it off from any economic needs; therefore, a state-run media organization can avoid issues associated with private media companies, namely the prioritization of the profit motive.[21] State media can be established as a means for the state to provide a consistent news outlet while private news companies operate as well.
The benefits and detriments of this approach often depend on the editorial independence of the media organization from the government.[22] Many criticisms of public interest theory center on the possibility of true editorial independence from the state.[20] While there is little profit motive, the media organization must be funded by the government instead, which can create a dependency on the government's willingness to fund an entity that may often be critical of its work.[6] The reliability of a state-run media outlet is often heavily dependent on the reliability of the state in promoting a free press: many state-run media outlets in Western democracies are capable of providing independent journalism, while others in authoritarian regimes become mouthpieces for the state to legitimize its actions.[20] The public choice theory asserts that state-owned media would manipulate and distort information in favor of the ruling party, entrenching its rule and preventing the public from making informed decisions, which undermines democratic institutions.[18] That would prevent private and independent media, which provide alternative voices allowing individuals to choose politicians, goods, services, and so on without fear, from functioning.
Additionally, that would inhibit the competition among media firms that ensures consumers usually acquire unbiased, accurate information.[18] Moreover, this competition is part of the checks-and-balances system of a democracy, known as the Fourth Estate, along with the judiciary, executive and legislature.[18] States are dependent on the public for the legitimacy that allows them to operate.[23] The flow of information becomes critical to their survival, and public choice theory argues that states cannot be expected to ignore their own interests; instead, the sources of information must remain as independent from the state as possible.[20] Public choice theory argues that the only way to retain independence in a media organization is to allow the public to seek the best sources of information themselves.[24] This approach is effective at creating a free press that is capable of criticizing government institutions and investigating incidents of government corruption.[20] Those critical of the public choice theory argue that the economic incentives involved in a public business force media organizations to stray from unbiased journalism and towards sensationalist editorials in order to capture public interest.[25] This has become a debate over the effectiveness of media organizations that are reliant on the attention of the public.[25] Sensationalism becomes the key focus, turning away from stories in the public interest in favor of stories that capture the attention of the most people.[24] The focus on sensationalism and public attention can lead to the dissemination of misinformation to appease a consumer base.[24] In these instances, the goal of providing accurate information to the public collapses and instead becomes biased toward a dominant ideology.[24] Both theories have implications regarding the determinants and consequences of ownership of the media.[26] The public interest theory suggests that more benign governments should have higher levels of control of
the media, which would in turn increase press freedom as well as economic and political freedoms. Conversely, the public choice theory affirms that the opposite is true: "public spirited", benevolent governments should have less control, which would increase these freedoms.[27] Generally, state ownership of the media is found in poor, autocratic, non-democratic countries with highly interventionist governments that have some interest in controlling the flow of information.[28] Countries with "weak" governments do not possess the political will to break up state media monopolies.[29] Media control is also usually consistent with state ownership in the economy.[30] As of 2002, the press in most of Europe (with the exception of Belarus, Russia and Ukraine) is mostly private and free of state control and ownership, along with North and South America (with the exception of Cuba and Venezuela).[31] The press "role" in the national and societal dynamics of the United States and Australia has virtually always been the responsibility of the private commercial sector since these countries' earliest days.[32] Levels of state ownership are higher in some African countries, the Middle East and some Asian countries (with the exception of Japan, India, Indonesia, Mongolia, Nepal, the Philippines, South Korea and Thailand, where large areas of private press exist). Full state monopolies exist in China, Myanmar, and North Korea.[31] Issues with state media include complications with press freedom and journalistic objectivity. According to Christopher Walker in the Journal of Democracy, "authoritarian or totalitarian media outlets" take advantage of both domestic and foreign media due to state censorship in their native countries and the openness of the democratic nations to which they broadcast.
He cites China's CCTV, Russia's RT, and Venezuela's TeleSUR as examples.[33] Surveys find that state-owned television in Russia is viewed by the Russian public as one of the country's most authoritative and trusted institutions.[34][35] Nations such as Denmark, Norway and Finland have both the highest degree of press freedom and public broadcasting media, in contrast to most autocratic nations, which attempt to limit press freedom in order to control the spread of information.[6] A 2003 study found that government ownership of media organizations was associated with worse democratic outcomes.[20] "Worse outcomes" are associated with higher levels of state ownership of the media, which would reject Pigouvian theory.[37] The news media are more independent and fewer journalists are arrested, detained or harassed in countries with less state control.[38] Harassment, imprisonment and higher levels of internet censorship occur in countries with high levels of state ownership such as Singapore, Belarus, Myanmar, Ethiopia, the People's Republic of China, Iran, Syria, Turkmenistan and Uzbekistan.[38][39] Countries with a total state monopoly in the media, like North Korea and Laos, experience a "Castro effect", where state control is powerful enough that no journalistic harassment is required to restrict press freedom.[38] Historically, state media also existed during the Cold War in authoritarian states such as the Soviet Union, East Germany, the Republic of China (Taiwan), Poland, Romania, Brazil and Indonesia. The public interest theory claims state ownership of the press enhances civil and political rights, whilst under the public choice theory, it curtails them by suppressing public oversight of the government and facilitating political corruption.
High to absolute government control of the media is primarily associated with lower levels of political and civil rights, higher levels of corruption, lower quality of regulation, less security of property and greater media bias.[39][40] State ownership of the press can compromise election-monitoring efforts and obscure the integrity of electoral processes.[41] Independent media brings greater oversight of the government by the media. For example, reporting of corruption increased in Mexico, Ghana and Kenya after restrictions were lifted in the 1990s, while government-controlled media defended officials.[42][43] Heavily influenced state media can provide corrupt regimes with a method to combat efforts by protestors.[6] Propaganda spread by state-media organizations can detract from accurate reporting and provide an opportunity for a regime to influence public sentiment.[20] Mass protests against governments considered to be authoritarian, such as those in China, Russia, Egypt, and Iran, are often distorted by state-run media organizations in order to defame protesters and cast the government's actions in a positive light.[6][44][45][46] It is common for countries with strict control of newspapers to have fewer firms listed per capita on their markets[47] and less developed banking systems.[48] These findings support the public choice theory, which suggests higher levels of state ownership of the press would be detrimental to economic and financial development.[39] This is because state media are commonly associated with autocratic regimes in which economic freedom is severely restricted and there is a large amount of corruption within the economic and political system.[25]
https://en.wikipedia.org/wiki/State_media
A service wrapper is a computer program that wraps arbitrary programs, thus enabling them to be installed and run as Windows services or Unix daemons: programs that run in the background rather than under the direct control of a user, and that are often started automatically at boot time. Arbitrary programs cannot run as services or daemons unless they fulfil specific requirements which depend on the operating system, and they have to be installed in order for the operating system to identify them as such. Various projects exist offering a Java service wrapper, as Java itself doesn't support creating system services. Some wrappers may add additional functionality to monitor the health of the application or to communicate with it.
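A toy sketch of the supervising half of a service wrapper is shown below, in Python for brevity. Real wrappers also detach from the terminal and register with the operating system's service manager; the function name and restart policy here are illustrative assumptions:

```python
import subprocess
import sys
import time

def run_wrapped(cmd, max_restarts=3, delay=0.1):
    """Run *cmd* as a supervised child process, restarting it on failure.

    Returns how many times the command was started. A clean exit
    (return code 0) stops the supervision loop.
    """
    starts = 0
    while starts <= max_restarts:
        starts += 1
        if subprocess.run(cmd).returncode == 0:
            break                 # clean exit: stop supervising
        time.sleep(delay)         # brief back-off before restarting
    return starts

# A command that exits cleanly is started exactly once.
print(run_wrapped([sys.executable, "-c", "print('service ran')"]))  # 1
```

The health-monitoring features some wrappers offer are an elaboration of this loop: instead of only watching the exit code, the wrapper periodically pings the child over a pipe or socket and restarts it when it stops responding.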
https://en.wikipedia.org/wiki/Service_wrapper
A misdialed call or wrong number is a telephone call or text message to an incorrect telephone number. This may occur because the number has been physically misdialed, the number is simply incorrect, or because the area code or ownership of the number has changed. In North America, toll-free numbers are a frequent source of wrong numbers because they often have a history of prior ownership.[1] In the United Kingdom, many misdialed calls have been due to public confusion over the dialing codes for some areas.[2][3] The recipient of a wrong number is usually unknown to the caller. This aspect has been used in social science experiments designed to study the willingness of people to help strangers, and the extent to which this is affected by characteristics such as race. This experimental method is known as the "wrong-number technique".[4] The Emily Post Institute recommends that the recipient of a text message to a misdialed number respond with, "Sorry, wrong number."[5] A fraud expert recommends not responding or engaging at all with messages from unknown numbers, to avoid text message scams that start with an innocent-looking message from a stranger.[6] On a landline, wrong numbers incur no toll for the recipient but do represent the annoyance of answering an unwanted call. This may be problematic for shift workers who are asleep during the day. On cellphones in countries where mobile plans charge for incoming calls, a wrong number may cost the subscriber one or more minutes. Sources of misdialled calls are similar to sources of typographical error. The use of local telephone numbers in mainstream fictional works is problematic, as the number will often belong to one or more real subscribers in various other area codes. Well-known fictional numbers like 867-5309 (Jenny's number, from a popular 1982 song) and 776-2323 (God's number in the 2003 cinema release of Bruce Almighty) continue to receive misdialled calls years later.
This is often avoided by using reserved or invalid numbers (such as 555 numbers), or by displaying a real area code and number which belongs to the publisher of the fictional work. Inadvertent calls to emergency telephone numbers are problematic: if the dispatcher is uncertain of the nature of a presumed emergency, police are rapidly dispatched to the address on an enhanced 9-1-1 or 1-1-2 call.[7] Most often, these calls tie up resources which need to be available for emergency response, especially if the caller is silent or disconnects the call without acknowledging the error. In Raleigh, North Carolina, a 2012 change which forced subscribers to dial the 919 area code on local calls caused a 20% increase in total calls to 9-1-1, a result of frequent misdials.[8] Misdialled calls are problematic for toll-free telephone number subscribers, who must pay long-distance tolls to receive the calls and pay staff to answer misdirected enquiries. A small, local business whose toll-free number differs in one digit from a large national franchise will typically receive multiple misdialled calls daily. In some cases, commercial rivals have engaged in wilful typosquatting to profit from misdial traffic. If 1-800-HOLIDAY (+1-800-465-4329) is Holiday Inn, an unscrupulous vendor could register 1-800-H0LIDAY (+1-800-405-4329, the same number with 'O' replaced by zero), resell bookings for rooms in the same or a rival hotel and collect a profitable travel agent's commission.[9] Some franchise chains have resorted to the defensive registration of complementary numbers, the commonly misdialled variants of their main number, as a defense against the typosquatters. A similar issue exists with toll-free number hoarding. One marketer can create multiple shell companies to operate as RespOrgs and register toll-free numbers as soon as their previous users disconnect service.
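The 'O' versus zero confusion exploited by typosquatters comes straight from the standard telephone keypad letter mapping; a short sketch (function name ours) makes the one-digit difference concrete:

```python
# Standard North American keypad letter groups.
KEYPAD = {"ABC": "2", "DEF": "3", "GHI": "4", "JKL": "5",
          "MNO": "6", "PQRS": "7", "TUV": "8", "WXYZ": "9"}
LETTER_TO_DIGIT = {ch: d for group, d in KEYPAD.items() for ch in group}

def phoneword_to_digits(word):
    """Translate a phoneword such as '1-800-HOLIDAY' into dialed digits."""
    digits = []
    for ch in word.upper():
        if ch.isdigit():
            digits.append(ch)                  # digits, including '0', pass through
        elif ch in LETTER_TO_DIGIT:
            digits.append(LETTER_TO_DIGIT[ch])
        # punctuation such as '-' or '+' is simply skipped
    return "".join(digits)

print(phoneword_to_digits("1-800-HOLIDAY"))  # 18004654329
print(phoneword_to_digits("1-800-H0LIDAY"))  # 18004054329: 'O' -> 6 but '0' -> 0
```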
Callers to millions of these numbers are connected not to the desired party but to an advertisement, such as the PrimeTel Communications promotions for costly phone-sex numbers.[10] By tying up millions of easily remembered numbers, these schemes force businesses to use overlay-plan area codes or numbers without mnemonic phonewords. Deployment of multiple toll-free area codes further increases the probability of misdialled calls, as a common wrong-number pattern is to call the +1-800 version of a number which is actually in another area code. In 2009, General Motors advertised +1-877-CLUNKER (1-877-258-6537) to promote a $3 billion U.S. federal "cash for clunkers" program. Judy Van Fossan of The Flower Corner in Clinton, Illinois, a tiny local shop which owned 1-800-CLUNKER (+1-800-258-6537), was inundated with more than 150 wrong numbers daily.[11] As of 2013, this florist is advertising a local number only;[12] as the vehicle scrappage program ended in 2009, the abandoned +1-800/888/877 CLUNKER phonewords are now typosquatted as PrimeTel numbers.[13] While prohibitions on hoarding, brokering and warehousing numbers exist both in the US (under FCC regulations) and Australia (under ACMA), enforcement has been non-existent except in the most egregious cases, such as registering distinctive phonewords (like +1-800-RED-CROSS) in order to attempt to sell or lease the numbers to the named organization or its local chapters.[14] In the 2020s, mobile phone spammers have used ostensibly misdirected text messages to attract pig butchering scam victims. A con artist sends out a large number of apparently innocent messages to a large number of people. When a person responds saying that the message was sent in error, the con artist apologizes, then introduces themselves, and begins a long con to gain the text recipient's confidence.
Between late 2021 and mid-2022, the FBI estimates, text message scams cost 244 victims a total of US$42.7 million in cryptocurrency fraud.[15] Some wrong-number text message scammers are forced laborers in Southeast Asia.[16] Misdialled calls serve as a plot device in the 1948 film Sorry, Wrong Number, in television episodes such as the Dad's Army episode "Sorry, Wrong Number" and the Seinfeld episode "The Pool Guy", and on the Touch-Tone Terrorists prank call CD series. They also figure in the name of Hotline Miami 2: Wrong Number, a top-down action game by Dennaton Games.
https://en.wikipedia.org/wiki/Misdialed_call#Toll-free_numbers
A spatial network (sometimes also called a geometric graph) is a graph in which the vertices or edges are spatial elements associated with geometric objects, i.e., the nodes are located in a space equipped with a certain metric.[1][2] The simplest mathematical realization of a spatial network is a lattice or a random geometric graph (see figure on the right), where nodes are distributed uniformly at random over a two-dimensional plane and a pair of nodes is connected if their Euclidean distance is smaller than a given neighborhood radius. Transportation and mobility networks, the Internet, mobile phone networks, power grids, social and contact networks and biological neural networks are all examples where the underlying space is relevant and where the graph's topology alone does not contain all the information. Characterizing and understanding the structure, resilience and evolution of spatial networks is crucial for many different fields, ranging from urbanism to epidemiology. An urban spatial network can be constructed by abstracting intersections as nodes and streets as links; this is referred to as a transportation network. One might think of the 'space map' as being the negative image of the standard map, with the open space cut out of the background buildings or walls.[3] The following aspects are some of the characteristics used to examine a spatial network:[1] In many applications, such as railways, roads, and other transportation networks, the network is assumed to be planar. Planar networks form an important subclass of spatial networks, but not all spatial networks are planar. Indeed, the airline passenger network is a non-planar example: many large airports in the world are connected through direct flights. There are also networks which seem not to be "directly" embedded in space. Social networks, for instance, connect individuals through friendship relations.
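The random geometric graph described above can be sketched in a few lines: scatter n nodes uniformly in the unit square and connect every pair closer than a given radius (function and parameter names are illustrative):

```python
import math
import random

def random_geometric_graph(n, radius, seed=0):
    """Drop n nodes uniformly in the unit square; link pairs closer than radius."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pos[i], pos[j]) < radius]
    return pos, edges

pos, edges = random_geometric_graph(100, 0.15)
print(len(edges))  # the edge count grows with the neighborhood radius
```

This is the naive O(n²) pairwise check; for large n one would bin points into a grid of cells of side `radius` so only neighboring cells are compared.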
But in this case, space intervenes in the fact that the connection probability between two individuals usually decreases with the distance between them. A spatial network can be represented by a Voronoi diagram, which is a way of dividing space into a number of regions. The dual graph of a Voronoi diagram is the Delaunay triangulation of the same set of points. Voronoi tessellations are interesting for spatial networks in the sense that they provide a natural representation model to which one can compare a real-world network. Examining the topology of the nodes and edges itself is another way to characterize networks. The degree distribution of the nodes is often considered; regarding the structure of edges, it is useful to find the minimum spanning tree, its generalization the Steiner tree, and the relative neighborhood graph. In the real world, many aspects of networks are not deterministic: randomness plays an important role. For example, new links in social networks, representing friendships, appear in a somewhat random manner. It is therefore natural to model spatial networks in terms of stochastic processes. In many cases the spatial Poisson process is used to approximate data sets of processes on spatial networks. Other stochastic aspects of interest are: Another definition of spatial network derives from the theory of space syntax. It can be notoriously difficult to decide what a spatial element should be in complex spaces involving large open areas or many interconnected paths. The originators of space syntax, Bill Hillier and Julienne Hanson, use axial lines and convex spaces as the spatial elements. Loosely, an axial line is the 'longest line of sight and access' through open space, and a convex space the 'maximal convex polygon' that can be drawn in open space. Each of these elements is defined by the geometry of the local boundary in different regions of the space map.
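As a sketch of one of the edge structures mentioned above, the Euclidean minimum spanning tree of a point set can be computed with a minimal Prim's algorithm (O(n²) on the complete geometric graph, fine for small n; names are illustrative):

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph -- O(n^2)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known distance into the growing tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
                    parent[v] = u
    return edges

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(euclidean_mst(pts))  # three edges spanning the four points
```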
Decomposition of a space map into a complete set of intersecting axial lines or overlapping convex spaces produces the axial map or overlapping convex map, respectively. Algorithmic definitions of these maps exist, which allows the mapping from an arbitrarily shaped space map to a network amenable to graph mathematics to be carried out in a relatively well-defined manner. Axial maps are used to analyse urban networks, where the system generally comprises linear segments, whereas convex maps are more often used to analyse building plans, where space patterns are often more convexly articulated; however, both convex and axial maps may be used in either situation. Currently, there is a move within the space syntax community to integrate better with geographic information systems (GIS), and much of the software they produce interlinks with commercially available GIS systems. While networks and graphs have long been the subject of many studies in mathematics, physics, mathematical sociology, and computer science, spatial networks were also studied intensively during the 1970s in quantitative geography. Objects of study in geography are, inter alia, locations, activities and flows of individuals, but also networks evolving in time and space.[4] Most of the important problems, such as the location of nodes of a network, the evolution of transportation networks and their interaction with population and activity density, are addressed in these earlier studies. On the other hand, many important points remain unclear, partly because at that time datasets of large networks and sufficient computer capabilities were lacking. Recently, spatial networks have been the subject of studies in statistics, to connect probabilities and stochastic processes with networks in the real world.[5]
https://en.wikipedia.org/wiki/Spatial_network
In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not necessarily a value one would expect to observe in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as 𝔼.[1][2][3] The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished.[4] This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution.
They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[5] In Dutch mathematicianChristiaan Huygens'book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise in 1657, (seeHuygens (1657)) "De ratiociniis in ludo aleæ" on probability theory just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of thetheory of probability. In the foreword to his treatise, Huygens wrote: It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs. 
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.[6] Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes:[7] That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2. More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:[8] ... this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope. The use of the letter E to denote "expected value" goes back to W. A. Whitworth in 1901.[9] The symbol has since become popular for English writers. In German, E stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique.[10] When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as E (upright), E (italic), or 𝔼 (in blackboard bold), while a variety of bracket notations (such as E(X), E[X], and EX) are all used. Another popular notation is μX; ⟨X⟩, ⟨X⟩av, and X̄ are commonly used in physics,[11] and M(X) is used in Russian-language literature. As discussed above, there are several context-dependent ways of defining the expected value.
The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. It is defined component by component, as E[X]i = E[Xi]. Similarly, one may define the expected value of a random matrix X with components Xij by E[X]ij = E[Xij]. Consider a random variable X with a finite list x1, ..., xk of possible outcomes, each of which (respectively) has probability p1, ..., pk of occurring. The expectation of X is defined as[12] E⁡[X]=x1p1+x2p2+⋯+xkpk.{\displaystyle \operatorname {E} [X]=x_{1}p_{1}+x_{2}p_{2}+\cdots +x_{k}p_{k}.} Since the probabilities must satisfy p1 + ⋅⋅⋅ + pk = 1, it is natural to interpret E[X] as a weighted average of the xi values, with weights given by their probabilities pi. In the special case that all possible outcomes are equiprobable (that is, p1 = ⋅⋅⋅ = pk), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others. Informally, the expectation of a random variable with a countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value.
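The finite weighted-average definition above is directly computable. A small sketch using exact rational arithmetic (the function name is illustrative) confirms that a fair six-sided die has expectation 7/2:

```python
from fractions import Fraction

def expectation(dist):
    """E[X] = x1*p1 + ... + xk*pk for a finite list of (value, probability) pairs."""
    assert sum(p for _, p in dist) == 1, "probabilities must sum to 1"
    return sum(x * p for x, p in dist)

die = [(i, Fraction(1, 6)) for i in range(1, 7)]
print(expectation(die))  # 7/2
```

Note that 7/2 is not itself a possible outcome of the die, illustrating the earlier remark that the expected value need not appear in the sample space.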
This is to say thatE⁡[X]=∑i=1∞xipi,{\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i},}wherex1,x2, ...are the possible outcomes of the random variableXandp1,p2, ...are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context.[13] However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, theRiemann series theoremofmathematical analysisillustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely. For this reason, many mathematical textbooks only consider the case that the infinite sum given aboveconverges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands.[14]In the alternative case that the infinite sum does not converge absolutely, one says the random variabledoes not have finite expectation.[14] Now consider a random variableXwhich has aprobability density functiongiven by a functionfon thereal number line. This means that the probability ofXtaking on a value in any givenopen intervalis given by theintegraloffover that interval. The expectation ofXis then given by the integral[15]E⁡[X]=∫−∞∞xf(x)dx.{\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }xf(x)\,dx.}A general and mathematically precise formulation of this definition usesmeasure theoryandLebesgue integration, and the corresponding theory ofabsolutely continuous random variablesis described in the next section. 
The density functions of many common distributions arepiecewise continuous, and as such the theory is often developed in this restricted setting.[16]For such functions, it is sufficient to only consider the standardRiemann integration. Sometimescontinuous random variablesare defined as those corresponding to this special class of densities, although the term is used differently by various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution ofXis given by theCauchy distributionCauchy(0, π), so thatf(x) = (x2+ π2)−1. It is straightforward to compute in this case that∫abxf(x)dx=∫abxx2+π2dx=12ln⁡b2+π2a2+π2.{\displaystyle \int _{a}^{b}xf(x)\,dx=\int _{a}^{b}{\frac {x}{x^{2}+\pi ^{2}}}\,dx={\frac {1}{2}}\ln {\frac {b^{2}+\pi ^{2}}{a^{2}+\pi ^{2}}}.}The limit of this expression asa→ −∞andb→ ∞does not exist: if the limits are taken so thata= −b, then the limit is zero, while if the constraint2a= −bis taken, then the limit isln(2). To avoid such ambiguities, in mathematical textbooks it is common to require that the given integralconverges absolutely, withE[X]left undefined otherwise.[17]However, measure-theoretic notions as given below can be used to give a systematic definition ofE[X]for more general random variablesX. All definitions of the expected value may be expressed in the language ofmeasure theory. In general, ifXis a real-valuedrandom variabledefined on aprobability space(Ω, Σ, P), then the expected value ofX, denoted byE[X], is defined as theLebesgue integral[18]E⁡[X]=∫ΩXdP.{\displaystyle \operatorname {E} [X]=\int _{\Omega }X\,d\operatorname {P} .}Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. 
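The order-of-limits ambiguity in the Cauchy example can be checked numerically using the closed-form antiderivative given above (this is a direct transcription of that formula, not an independent computation):

```python
import math

def tail_integral(a, b):
    # Closed form of the integral of x/(x^2 + pi^2) from a to b:
    # (1/2) * ln((b^2 + pi^2) / (a^2 + pi^2))
    return 0.5 * math.log((b * b + math.pi ** 2) / (a * a + math.pi ** 2))

# Symmetric limits a = -b give 0, while the constraint b = -2a gives ln 2.
print(tail_integral(-1e8, 1e8))   # ~ 0.0
print(tail_integral(-1e8, 2e8))   # ~ ln 2 ~ 0.6931
```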
This is because, in measure theory, the value of the Lebesgue integral ofXis defined via weighted averages ofapproximationsofXwhich take on finitely many values.[19]Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variableXis said to beabsolutely continuousif any of the following conditions are satisfied: These conditions are all equivalent, although this is nontrivial to establish.[20]In this definition,fis called theprobability density functionofX(relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration,[21]combined with thelaw of the unconscious statistician,[22]it follows thatE⁡[X]≡∫ΩXdP=∫Rxf(x)dx{\displaystyle \operatorname {E} [X]\equiv \int _{\Omega }X\,d\operatorname {P} =\int _{\mathbb {R} }xf(x)\,dx}for any absolutely continuous random variableX. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable. The expected value of any real-valued random variableX{\displaystyle X}can also be defined on the graph of itscumulative distribution functionF{\displaystyle F}by a nearby equality of areas. In fact,E⁡[X]=μ{\displaystyle \operatorname {E} [X]=\mu }with a real numberμ{\displaystyle \mu }if and only if the two surfaces in thex{\displaystyle x}-y{\displaystyle y}-plane, described byx≤μ,0≤y≤F(x)orx≥μ,F(x)≤y≤1{\displaystyle x\leq \mu ,\;\,0\leq y\leq F(x)\quad {\text{or}}\quad x\geq \mu ,\;\,F(x)\leq y\leq 1}respectively, have the same finite area, i.e. if∫−∞μF(x)dx=∫μ∞(1−F(x))dx{\displaystyle \int _{-\infty }^{\mu }F(x)\,dx=\int _{\mu }^{\infty }{\big (}1-F(x){\big )}\,dx}and bothimproper Riemann integralsconverge. 
Finally, this is equivalent to the representationE⁡[X]=∫0∞(1−F(x))dx−∫−∞0F(x)dx,{\displaystyle \operatorname {E} [X]=\int _{0}^{\infty }{\bigl (}1-F(x){\bigr )}\,dx-\int _{-\infty }^{0}F(x)\,dx,}also with convergent integrals.[23] Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of±∞. This is intuitive, for example, in the case of theSt. Petersburg paradox, in which one considers a random variable with possible outcomesxi= 2i, with associated probabilitiespi= 2−i, foriranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one hasE⁡[X]=∑i=1∞xipi=2⋅12+4⋅14+8⋅18+16⋅116+⋯=1+1+1+1+⋯.{\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i}=2\cdot {\frac {1}{2}}+4\cdot {\frac {1}{4}}+8\cdot {\frac {1}{8}}+16\cdot {\frac {1}{16}}+\cdots =1+1+1+1+\cdots .}It is natural to say that the expected value equals+∞. There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral.[19]The first fundamental observation is that, whichever of the above definitions are followed, anynonnegativerandom variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as+∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variableX, one defines thepositive and negative partsbyX+= max(X, 0)andX−= −min(X, 0). These are nonnegative random variables, and it can be directly checked thatX=X+−X−. 
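In the St. Petersburg series above, each term 2^i · 2^(−i) contributes exactly 1, so the partial sums grow without bound; a quick sketch:

```python
def st_petersburg_partial_sum(n_terms):
    """Partial sums of the series 2^i * 2^-i for i = 1..n_terms; each term is 1."""
    return sum((2 ** i) * (0.5 ** i) for i in range(1, n_terms + 1))

print(st_petersburg_partial_sum(10))   # 10.0
print(st_petersburg_partial_sum(100))  # 100.0 -- the series diverges to +infinity
```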
SinceE[X+]andE[X−]are both then defined as either nonnegative numbers or+∞, it is then natural to define:E⁡[X]={E⁡[X+]−E⁡[X−]ifE⁡[X+]<∞andE⁡[X−]<∞;+∞ifE⁡[X+]=∞andE⁡[X−]<∞;−∞ifE⁡[X+]<∞andE⁡[X−]=∞;undefinedifE⁡[X+]=∞andE⁡[X−]=∞.{\displaystyle \operatorname {E} [X]={\begin{cases}\operatorname {E} [X^{+}]-\operatorname {E} [X^{-}]&{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\+\infty &{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\-\infty &{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty ;\\{\text{undefined}}&{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty .\end{cases}}} According to this definition,E[X]exists and is finite if and only ifE[X+]andE[X−]are both finite. Due to the formula|X| =X++X−, this is the case if and only ifE|X|is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations. The following table gives the expected values of some commonly occurringprobability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references. The basic properties below (and their names in bold) replicate or follow immediately from those ofLebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. 
Basically, one says that an inequality like X≥0{\displaystyle X\geq 0} is true almost surely when the probability measure attributes zero mass to the complementary event {X<0}.{\displaystyle \left\{X<0\right\}.} Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable X and any positive number a, it states that[37] P⁡(X≥a)≤E⁡[X]a.{\displaystyle \operatorname {P} (X\geq a)\leq {\frac {\operatorname {E} [X]}{a}}.} If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable |X−E[X]|2 to obtain Chebyshev's inequality P⁡(|X−E[X]|≥a)≤Var⁡[X]a2,{\displaystyle \operatorname {P} (|X-{\text{E}}[X]|\geq a)\leq {\frac {\operatorname {Var} [X]}{a^{2}}},} where Var is the variance.[37] These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of a fair die, Chebyshev's inequality says that the odds of rolling between 1 and 6 are at least 53%; in reality, the odds are of course 100%.[38] The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables.[39] The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory. The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces.
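Markov's inequality is easy to probe empirically. The sketch below compares the empirical tail probability with the bound E[X]/a for an exponential variable (the choice of distribution and all names are illustrative, not from the text):

```python
import random

def markov_check(samples, a):
    """Compare the empirical tail P(X >= a) with the Markov bound E[X]/a."""
    n = len(samples)
    tail = sum(1 for x in samples if x >= a) / n
    bound = sum(samples) / n / a
    return tail, bound

rng = random.Random(1)
xs = [rng.expovariate(1.0) for _ in range(100_000)]  # nonnegative, mean ~ 1
tail, bound = markov_check(xs, 3.0)
print(tail, bound)  # tail ~ e^-3 ~ 0.05, bound ~ 1/3: the inequality holds, loosely
```

The gap between the two numbers illustrates the remark that Markov's bound is often far from tight when more is known about the distribution.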
In general, it is not the case thatE⁡[Xn]→E⁡[X]{\displaystyle \operatorname {E} [X_{n}]\to \operatorname {E} [X]}even ifXn→X{\displaystyle X_{n}\to X}pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, letU{\displaystyle U}be a random variable distributed uniformly on[0,1].{\displaystyle [0,1].}Forn≥1,{\displaystyle n\geq 1,}define a sequence of random variablesXn=n⋅1{U∈(0,1n)},{\displaystyle X_{n}=n\cdot \mathbf {1} \left\{U\in \left(0,{\tfrac {1}{n}}\right)\right\},}with1{A}{\displaystyle \mathbf {1} \{A\}}being the indicator function of the eventA.{\displaystyle A.}Then, it follows thatXn→0{\displaystyle X_{n}\to 0}pointwise. But,E⁡[Xn]=n⋅Pr(U∈[0,1n])=n⋅1n=1{\displaystyle \operatorname {E} [X_{n}]=n\cdot \Pr \left(U\in \left[0,{\tfrac {1}{n}}\right]\right)=n\cdot {\tfrac {1}{n}}=1}for eachn.{\displaystyle n.}Hence,limn→∞E⁡[Xn]=1≠0=E⁡[limn→∞Xn].{\displaystyle \lim _{n\to \infty }\operatorname {E} [X_{n}]=1\neq 0=\operatorname {E} \left[\lim _{n\to \infty }X_{n}\right].} Analogously, for general sequence of random variables{Yn:n≥0},{\displaystyle \{Y_{n}:n\geq 0\},}the expected value operator is notσ{\displaystyle \sigma }-additive, i.e.E⁡[∑n=0∞Yn]≠∑n=0∞E⁡[Yn].{\displaystyle \operatorname {E} \left[\sum _{n=0}^{\infty }Y_{n}\right]\neq \sum _{n=0}^{\infty }\operatorname {E} [Y_{n}].} An example is easily obtained by settingY0=X1{\displaystyle Y_{0}=X_{1}}andYn=Xn+1−Xn{\displaystyle Y_{n}=X_{n+1}-X_{n}}forn≥1,{\displaystyle n\geq 1,}whereXn{\displaystyle X_{n}}is as in the previous example. A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below. 
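The counterexample above can be simulated directly: each X_n is almost always 0 yet has expectation 1, so the limit of the expectations is 1 while the expectation of the pointwise limit is 0 (a Monte Carlo sketch; names are illustrative):

```python
import random

def mean_xn(n, trials=200_000, seed=0):
    """Monte Carlo estimate of E[X_n] for X_n = n * 1{U in (0, 1/n)}, U uniform."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if 0 < rng.random() < 1 / n)
    return n * hits / trials

for n in (1, 10, 100):
    print(n, mean_xn(n))  # each estimate is near 1, although X_n -> 0 pointwise
```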
The probability density functionfX{\displaystyle f_{X}}of a scalar random variableX{\displaystyle X}is related to itscharacteristic functionφX{\displaystyle \varphi _{X}}by the inversion formula:fX(x)=12π∫Re−itxφX(t)dt.{\displaystyle f_{X}(x)={\frac {1}{2\pi }}\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt.} For the expected value ofg(X){\displaystyle g(X)}(whereg:R→R{\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }}is aBorel function), we can use this inversion formula to obtainE⁡[g(X)]=12π∫Rg(x)[∫Re−itxφX(t)dt]dx.{\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }g(x)\left[\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt\right]dx.} IfE⁡[g(X)]{\displaystyle \operatorname {E} [g(X)]}is finite, changing the order of integration, we get, in accordance withFubini–Tonelli theorem,E⁡[g(X)]=12π∫RG(t)φX(t)dt,{\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }G(t)\varphi _{X}(t)\,dt,}whereG(t)=∫Rg(x)e−itxdx{\displaystyle G(t)=\int _{\mathbb {R} }g(x)e^{-itx}\,dx}is theFourier transformofg(x).{\displaystyle g(x).}The expression forE⁡[g(X)]{\displaystyle \operatorname {E} [g(X)]}also follows directly from thePlancherel theorem. The expectation of a random variable plays an important role in a variety of contexts. Instatistics, where one seeksestimatesfor unknownparametersbased on available data gained fromsamples, thesample meanserves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in beingunbiased; that is, the expected value of the estimate is equal to thetrue valueof the underlying parameter. For a different example, indecision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of theirutility function. 
It is possible to construct an expected value equal to the probability of an event by taking the expectation of anindicator functionthat is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using thelaw of large numbersto justify estimating probabilities byfrequencies. The expected values of the powers ofXare called themomentsofX; themoments about the meanofXare expected values of powers ofX− E[X]. The moments of some random variables can be used to specify their distributions, via theirmoment generating functions. To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes thearithmetic meanof the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of theresiduals(the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as thesizeof the sample gets larger, thevarianceof this estimate gets smaller. This property is often exploited in a wide variety of applications, including general problems ofstatistical estimationandmachine learning, to estimate (probabilistic) quantities of interest viaMonte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g.P⁡(X∈A)=E⁡[1A],{\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [{\mathbf {1} }_{\mathcal {A}}],}where1A{\displaystyle {\mathbf {1} }_{\mathcal {A}}}is the indicator function of the setA.{\displaystyle {\mathcal {A}}.} Inclassical mechanics, thecenter of massis an analogous concept to expectation. 
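The identity P(X ∈ A) = E[1_A] is the basis of the simplest Monte Carlo estimator: average the indicator over independent samples. A minimal sketch (names are illustrative):

```python
import random

def estimate_prob(indicator, trials=100_000, seed=0):
    """Estimate P(X in A) = E[1_A(X)] as the sample mean of an indicator."""
    rng = random.Random(seed)
    return sum(indicator(rng.random()) for _ in range(trials)) / trials

# P(U > 0.75) for U uniform on [0, 1] is exactly 0.25; the estimate is close.
print(estimate_prob(lambda u: u > 0.75))
```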
For example, suppose X is a discrete random variable with values $x_i$ and corresponding probabilities $p_i$. Now consider a weightless rod on which are placed weights, at locations $x_i$ along the rod and having masses $p_i$ (whose sum is one). The point at which the rod balances is E[X].
Expected values can also be used to compute the variance, by means of the computational formula for the variance
$$\operatorname{Var}(X) = \operatorname{E}[X^2] - (\operatorname{E}[X])^2.$$
A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator $\hat{A}$ operating on a quantum state vector $|\psi\rangle$ is written as $\langle\hat{A}\rangle = \langle\psi|\hat{A}|\psi\rangle$. The uncertainty in $\hat{A}$ can be calculated by the formula $(\Delta A)^2 = \langle\hat{A}^2\rangle - \langle\hat{A}\rangle^2$.
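The rod-balancing analogy and the computational formula for the variance can be verified in a few lines. The weights below are made up for illustration:

```python
# Discrete sketch: weights p_i at positions x_i (p_i sum to one).
# The balance point of the rod is E[X]; the variance follows from the
# shortcut Var(X) = E[X²] − (E[X])².
xs = [1.0, 2.0, 6.0]          # positions along the rod (illustrative)
ps = [0.5, 0.3, 0.2]          # masses / probabilities, summing to one

e_x  = sum(p * x for p, x in zip(ps, xs))        # E[X], the balance point
e_x2 = sum(p * x * x for p, x in zip(ps, xs))    # E[X²]
var  = e_x2 - e_x ** 2                           # Var(X) = E[X²] − (E[X])²
```

Here E[X] = 2.3 and Var(X) = 8.9 − 2.3² = 3.61, matching the direct definition $\sum_i p_i (x_i - \operatorname{E}[X])^2$.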
https://en.wikipedia.org/wiki/Expected_value
In algebraic topology, a presheaf of spectra on a topological space X is a contravariant functor from the category of open subsets of X, where morphisms are inclusions, to the good category of commutative ring spectra. A theorem of Jardine says that such presheaves form a simplicial model category, where F → G is a weak equivalence if the induced map of homotopy sheaves $\pi_* F \to \pi_* G$ is an isomorphism. A sheaf of spectra is then a fibrant/cofibrant object in that category.
The notion is used to define, for example, a derived scheme in algebraic geometry.
https://en.wikipedia.org/wiki/Sheaf_of_spectra
In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.
Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.
A language L is in P if and only if there exists a deterministic Turing machine M that runs in polynomial time on all inputs and that accepts an input x if and only if x is in L.
P can also be viewed as a uniform family of Boolean circuits. A language L is in P if and only if there exists a polynomial-time uniform family of Boolean circuits $\{C_n : n\in\mathbb{N}\}$ such that, for every input x of length n, x is in L if and only if $C_n(x) = 1$. The circuit definition can be weakened to use only a logspace uniform family without changing the complexity class.
P is known to contain many natural problems, including the decision versions of linear programming and of finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P.[1] The related class of function problems is FP.
Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs.[2] The article on P-complete problems lists further relevant problems in P.
A generalization of P is NP, which is the class of decision problems decidable by a non-deterministic Turing machine that runs in polynomial time. Equivalently, it is the class of decision problems where each "yes" instance has a polynomial size certificate, and certificates can be checked by a polynomial time deterministic Turing machine. The class of problems for which this is true for the "no" instances is called co-NP.
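As a concrete example of a problem in P, the sketch below decides plain directed reachability by breadth-first search in O(V + E) time (note the P-complete variant mentioned above concerns alternating graphs; ordinary directed reachability is also in P, though it is better known as an NL-complete problem):

```python
from collections import deque

# st-connectivity decided in polynomial time: breadth-first search
# visits each vertex and edge at most once, so it runs in O(V + E).
def reachable(adj: dict, s, t) -> bool:
    """Return True iff t is reachable from s in the directed graph adj."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```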
P is trivially a subset of NP and of co-NP; most experts believe it is a proper subset,[3] although this belief (the $\mathsf{P}\subsetneq\mathsf{NP}$ hypothesis) remains unproven. Another open problem is whether NP = co-NP; since P = co-P,[4] a negative answer would imply $\mathsf{P}\subsetneq\mathsf{NP}$.
P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using $O(\log n)$ space cannot use more than $2^{O(\log n)} = n^{O(1)}$ time, because this is the total number of possible configurations; thus, L is a subset of P. Another important problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines.
P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. PSPACE is equivalent to NPSPACE by Savitch's theorem. Again, whether P = PSPACE is an open problem. To summarize:
$$\mathsf{L} \subseteq \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXPTIME}.$$
Here, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known: P is strictly contained in EXPTIME (by the time hierarchy theorem), and L is strictly contained in PSPACE (by the space hierarchy theorem). The most difficult problems in P are P-complete problems.
Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical problems, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem. In 1999, Jin-Yi Cai and D.
Sivakumar, building on work byMitsunori Ogihara, showed that if there exists asparse languagethat is P-complete, then L = P.[5] P is contained inBQP; it is unknown whether this containment is strict. Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function that is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P islowfor itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such asrandom access, that can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine. Languages in P are also closed under reversal,intersection,union,concatenation,Kleene closure, inversehomomorphism, andcomplementation.[6] Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, theRobertson–Seymour theoremguarantees that there is a finite list offorbidden minorsthat characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an O(n3) algorithm for determining whether a graph has a given graph as a minor. This yields anonconstructive proofthat there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem. Indescriptive complexity, P can be described as the problems expressible inFO(LFP), thefirst-order logicwith aleast fixed pointoperator added to it, on ordered structures. 
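Closure under composition and under Boolean operations such as intersection can be illustrated abstractly: running two polynomial-time deciders in sequence is itself polynomial time, since the sum of two polynomials is a polynomial. A minimal sketch with made-up example languages:

```python
# Closure of P under intersection: combine two (assumed polynomial-time)
# deciders into a decider for the intersection language. The combined
# running time is the sum of the two, which is still polynomial.
def decider_intersection(decide_a, decide_b):
    """Build a decider for L_A ∩ L_B from deciders for L_A and L_B."""
    return lambda x: decide_a(x) and decide_b(x)

# Two trivially polynomial-time example languages (illustrative only):
is_even_length = lambda s: len(s) % 2 == 0
starts_with_a  = lambda s: s.startswith("a")
in_both = decider_intersection(is_even_length, starts_with_a)
```

The same pattern gives closure under union (`or`) and complementation (`not`), as stated above.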
In Immerman's 1999 textbook on descriptive complexity,[7]Immerman ascribes this result to Vardi[8]and to Immerman.[9] It was published in 2001 that PTIME corresponds to (positive)range concatenation grammars.[10] P can also be defined as an algorithmic complexity class for problems that are not decision problems[11](even though, for example, finding the solution to a2-satisfiabilityinstance in polynomial time automatically gives a polynomial algorithm for the corresponding decision problem). In that case P is not a subset of NP, but P∩DEC is, where DEC is the class of decision problems. Kozen[12]states thatCobhamandEdmondsare "generally credited with the invention of the notion of polynomial time," thoughRabinalso invented the notion independently and around the same time (Rabin's paper[13]was in a 1967 proceedings of a 1966 conference, while Cobham's[14]was in a 1965 proceedings of a 1964 conference and Edmonds's[15]was published in a journal in 1965, though Rabin makes no mention of either and was apparently unaware of them). Cobham invented the class as a robust way of characterizing efficient algorithms, leading toCobham's thesis. However,H. C. Pocklington, in a 1910 paper,[16][17]analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" and contrasted this with one that took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time versus one that ran in (moderately) exponential time.
https://en.wikipedia.org/wiki/P_(complexity)
Ininformation systems, atagis akeyword or termassigned to a piece of information (such as anInternet bookmark,multimedia, databaserecord, orcomputer file). This kind ofmetadatahelps describe an item and allows it to be found again by browsing or searching.[1]Tags are generally chosen informally and personally by the item's creator or by its viewer, depending on the system, although they may also be chosen from acontrolled vocabulary.[2]: 68 Tagging was popularized bywebsitesassociated withWeb 2.0and is an important feature of many Web 2.0 services.[2][3]It is now also part of otherdatabase systems,desktop applications, andoperating systems.[4] People use tags to aidclassification, mark ownership, noteboundaries, and indicateonline identity. Tags may take the form of words, images, or other identifying marks. An analogous example of tags in the physical world ismuseumobject tagging. People were using textualkeywordstoclassify informationand objects long before computers. Computer basedsearch algorithmsmade the use of such keywords a rapid way of exploring records. Tagging gained popularity due to the growth ofsocial bookmarking,image sharing, andsocial networkingwebsites.[2]These sites allow users to create and manage labels (or "tags") that categorize content using simple keywords. Websites that include tags often display collections of tags astag clouds,[a]as do some desktop applications.[b]On websites that aggregate the tags of all users, an individual user's tags can be useful both to them and to the larger community of the website's users. 
Tagging systems have sometimes been classified into two kinds: top-down and bottom-up.[3]: 142[4]: 24 Top-down taxonomies are created by an authorized group of designers (sometimes in the form of a controlled vocabulary), whereas bottom-up taxonomies (called folksonomies) are created by all users.[3]: 142 This definition of "top down" and "bottom up" should not be confused with the distinction between a single hierarchical tree structure (in which there is one correct way to classify each item) versus multiple non-hierarchical sets (in which there are multiple ways to classify an item); the structure of both top-down and bottom-up taxonomies may be either hierarchical, non-hierarchical, or a combination of both.[3]: 142–143 Some researchers and applications have experimented with combining hierarchical and non-hierarchical tagging to aid in information retrieval.[7][8][9] Others are combining top-down and bottom-up tagging,[10] including in some large library catalogs (OPACs) such as WorldCat.[11][12]: 74[13][14]
When tags or other taxonomies have further properties (or semantics) such as relationships and attributes, they constitute an ontology.[3]: 56–62
In a folder system, a file cannot exist in two or more folders, so a tag system is often considered more convenient. But transitioning to a tag system requires awareness of the differences between the two systems. In a folder system, the classification information is stored outside the file, and a file can be reclassified at once by moving it to another folder. In a tag system, the classification information is stored inside the file, so changing its tag means changing the file itself, which must be saved again and takes time.
Metadata tags as described in this article should not be confused with the use of the word "tag" in some software to refer to an automatically generatedcross-reference; examples of the latter aretags tablesinEmacs[15]andsmart tagsinMicrosoft Office.[16] The use of keywords as part of an identification and classification system long predates computers.Paper data storagedevices, notablyedge-notched cards, that permitted classification and sorting by multiple criteria were already in use prior to the twentieth century, andfaceted classificationhas been used by libraries since the 1930s. In the late 1970s and early 1980s,Emacs, the text editor forUnixsystems, offered a companion software program calledTagsthat could automatically build a table of cross-references called atags tablethat Emacs could use to jump between afunction calland that function's definition.[17]This use of the word "tag" did not refer to metadata tags, but was an early use of the word "tag" in software to refer to aword index. Online databasesand early websites deployed keyword tags as a way for publishers to help users find content. In the early days of theWorld Wide Web, thekeywordsmeta elementwas used byweb designersto tellweb search engineswhat the web page was about, but these keywords were only visible in a web page'ssource codeand were not modifiable by users. In 1997, the collaborative portal "A Description of the Equator and Some ØtherLands" produced bydocumentaX, Germany, used thefolksonomictermTagfor its co-authors and guest authors on its Upload page.[18]In "The Equator" the termTagfor user-input was described as anabstract literal or keywordto aid the user. However, users defined singularTags, and did not shareTagsat that point. 
In 2003, thesocial bookmarkingwebsiteDeliciousprovided a way for its users to add "tags" to their bookmarks (as a way to help find them later);[2]: 162Delicious also provided browseable aggregated views of the bookmarks of all users featuring a particular tag.[19]Within a couple of years, thephoto sharingwebsiteFlickrallowed its users to add their own text tags to each of their pictures, constructing flexible and easy metadata that made the pictures highly searchable.[20]The success of Flickr and the influence of Delicious popularized the concept,[21]and othersocial softwarewebsites—such asYouTube,Technorati, andLast.fm—also implemented tagging.[22]In 2005, theAtomweb syndication standard provided a "category" element for inserting subject categories intoweb feeds, and in 2007Tim Brayproposed a "tag"URN.[23] Many systems (and other webcontent management systems) allow authors to add free-form tags to a post, along with (or instead of) placing the post into a predetermined category.[a]For example, a post may display that it has been tagged withbaseballandtickets. Each of those tags is usually aweb linkleading to an index page listing all of the posts associated with that tag. The blog may have a sidebar listing all the tags in use on that blog, with each tag leading to an index page. To reclassify a post, an author edits its list of tags. All connections between posts are automatically tracked and updated by the blog software; there is no need to relocate the page within a complex hierarchy of categories. Somedesktop applicationsandweb applicationsfeature their own tagging systems, such as email tagging inGmailandMozilla Thunderbird,[12]: 73bookmark tagging inFirefox,[24]audio tagging iniTunesorWinamp, and photo tagging in various applications.[25]Some of these applications display collections of tags astag clouds.[b] There are various systems for applying tags to the files in a computer'sfile system. 
InApple'sMacSystem 7, released in 1991, users could assign one ofseven editable colored labels(with editable names such as "Essential", "Hot", and "In Progress") to each file and folder.[26]In later iterations of the Mac operating system ever sinceOS X 10.9was released in 2013, users could assign multiple arbitrary tags asextended file attributesto any file or folder,[27]and before that time theopen-sourceOpenMeta standard provided similar tagging functionality forMac OS X.[28] Severalsemantic file systemsthat implement tags are available for theLinux kernel, includingTagsistant.[29] Microsoft Windowsallows users to set tags only onMicrosoft Officedocuments and some kinds of picture files.[30] Cross-platformfile tagging standards includeExtensible Metadata Platform(XMP), anISO standardfor embedding metadata into popular image, video and document file formats, such asJPEGandPDF, without breaking their readability by applications that do not support XMP.[31]XMP largely supersedes the earlierIPTC Information Interchange Model.Exifis a standard that specifies the image and audiofile formatsused bydigital cameras, including some metadata tags.[32]TagSpacesis an open-source cross-platform application for tagging files; it inserts tags into thefilename.[33] Anofficial tagis a keyword adopted by events and conferences for participants to use in their web publications, such as blog entries, photos of the event, and presentation slides.[34]Search engines can then index them to make relevant materials related to the event searchable in a uniform way. In this case, the tag is part of acontrolled vocabulary. A researcher may work with a large collection of items (e.g. press quotes, a bibliography, images) in digital form. If he/she wishes to associate each with a small number of themes (e.g. 
to chapters of a book, or to sub-themes of the overall subject), then a group of tags for these themes can be attached to each of the items in the larger collection.[35]In this way, freeformclassificationallows the author to manage what would otherwise be unwieldy amounts of information.[36] Atriple tagormachine taguses a specialsyntaxto define extrasemanticinformation about the tag, making it easier or more meaningful for interpretation by a computer program.[37]Triple tags comprise three parts: anamespace, apredicate, and a value. For example,geo:long=50.123456is a tag for the geographicallongitudecoordinate whose value is 50.123456. This triple structure is similar to theResource Description Frameworkmodel for information. The triple tag format was first devised for geolicious in November 2004,[38]to mapDeliciousbookmarks, and gained wider acceptance after its adoption by Mappr and GeoBloggers to mapFlickrphotos.[39]In January 2007, Aaron Straup Cope at Flickr introduced the termmachine tagas an alternative name for the triple tag, adding some questions and answers on purpose, syntax, and use.[40] Specialized metadata for geographical identification is known asgeotagging; machine tags are also used for other purposes, such as identifying photos taken at a specific event or naming species usingbinomial nomenclature.[41] A hashtag is a kind of metadata tag marked by the prefix#, sometimes known as a "hash" symbol. This form of tagging is used onmicrobloggingandsocial networking servicessuch asTwitter,Facebook,Google+,VKandInstagram. The hash is used to distinguish tag text, as distinct, from other text in the post. 
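The namespace:predicate=value syntax of triple tags described above lends itself to simple parsing. A minimal sketch (the regular expression is an illustrative approximation, not a formal grammar for machine tags):

```python
import re

# Parse a triple tag ("machine tag") of the form namespace:predicate=value,
# e.g. geo:long=50.123456 → ("geo", "long", "50.123456").
TRIPLE_TAG = re.compile(r"^(?P<namespace>\w+):(?P<predicate>\w+)=(?P<value>.+)$")

def parse_triple_tag(tag: str):
    """Return (namespace, predicate, value), or None if not a triple tag."""
    m = TRIPLE_TAG.match(tag)
    return m.group("namespace", "predicate", "value") if m else None
```

An ordinary tag such as `baseball` does not match the pattern and is left to the plain-keyword handling path.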
Aknowledge tagis a type ofmeta-informationthat describes or defines some aspect of a piece of information (such as adocument,digital image,database table, orweb page).[42]Knowledge tags are more than traditional non-hierarchicalkeywords or terms; they are a type ofmetadatathat captures knowledge in the form of descriptions, categorizations, classifications,semantics, comments, notes, annotations,hyperdata,hyperlinks, or references that are collected in tag profiles (a kind ofontology).[42]These tag profiles reference an information resource that resides in a distributed, and often heterogeneous, storage repository.[42] Knowledge tags are part of aknowledge managementdiscipline that leveragesEnterprise 2.0methodologies for users to capture insights, expertise, attributes, dependencies, or relationships associated with a data resource.[3]: 251[43]Different kinds of knowledge can be captured in knowledge tags, including factual knowledge (that found in books and data), conceptual knowledge (found in perspectives and concepts), expectational knowledge (needed to make judgments and hypothesis), and methodological knowledge (derived from reasoning and strategies).[43]These forms ofknowledgeoften exist outside the data itself and are derived from personal experience, insight, or expertise. Knowledge tags are considered an expansion of the information itself that adds additional value, context, and meaning to the information. 
Knowledge tags are valuable for preserving organizational intelligence that is often lost due toturnover, for sharing knowledge stored in the minds of individuals that is typically isolated and unharnessed by the organization, and for connecting knowledge that is often lost or disconnected from an information resource.[44] In a typical tagging system, there is no explicit information about the meaning orsemanticsof each tag, and a user can apply new tags to an item as easily as applying older tags.[2]Hierarchical classification systems can be slow to change, and are rooted in the culture and era that created them; in contrast, the flexibility of tagging allows users to classify their collections of items in the ways that they find useful, but the personalized variety of terms can present challenges when searching and browsing. When users can freely choose tags (creating afolksonomy, as opposed to selecting terms from acontrolled vocabulary), the resulting metadata can includehomonyms(the same tags used with different meanings) andsynonyms(multiple tags for the same concept), which may lead to inappropriate connections between items and inefficient searches for information about a subject.[45]For example, the tag "orange" may refer to thefruitor thecolor, and items related to a version of theLinux kernelmay be tagged "Linux", "kernel", "Penguin", "software", or a variety of other terms. Users can also choose tags that are differentinflectionsof words (such as singular and plural),[46]which can contribute to navigation difficulties if the system does not includestemmingof tags when searching or browsing. Larger-scale folksonomies address some of the problems of tagging, in that users of tagging systems tend to notice the current use of "tag terms" within these systems, and thus use existing tags in order to easily form connections to related items. In this way, folksonomies may collectively develop a partial set of tagging conventions. 
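The inflection problem described above (singular vs. plural, differing capitalization) is often mitigated by normalizing tags before matching. The sketch below is a deliberately naive stand-in for a real stemming algorithm:

```python
# Naive tag normalization (not a real stemmer): lowercase and strip a
# trailing plural "s" so that "Tags", "tag", and "TAG" collide, easing
# the singular/plural navigation problem.
def normalize_tag(tag: str) -> str:
    t = tag.strip().lower()
    # Crude plural stripping; keep short tags and "-ss" endings intact.
    if t.endswith("s") and len(t) > 3 and not t.endswith("ss"):
        t = t[:-1]
    return t
```

A production system would use a proper stemmer (e.g. Porter stemming) rather than this rule of thumb, which mishandles irregular plurals and words that merely end in "s".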
Despite the apparent lack of control, research has shown that a simple form of shared vocabulary emerges in social bookmarking systems. Collaborative tagging exhibits a form ofcomplex systemsdynamics (orself-organizingdynamics).[47]Thus, even if no central controlled vocabulary constrains the actions of individual users, the distribution of tags converges over time to stablepower lawdistributions.[47]Once such stable distributions form, simplefolksonomicvocabularies can be extracted by examining thecorrelationsthat form between different tags. In addition, research has suggested that it is easier formachine learningalgorithms to learn tag semantics when users tag "verbosely"—when they annotate resources with a wealth of freely associated, descriptive keywords.[48] Tagging systems open to the public are also open to tag spam, in which people apply an excessive number of tags or unrelated tags to an item (such as aYouTubevideo) in order to attract viewers. This abuse can be mitigated using human or statistical identification of spam items.[49]The number of tags allowed may also be limited to reduce spam. Some tagging systems provide a singletext boxto enter tags, so to be able totokenizethe string, aseparatormust be used. Two popular separators are thespace characterand thecomma. To enable the use of separators in the tags, a system may allow for higher-level separators (such asquotation marks) orescape characters. Systems can avoid the use of separators by allowing only one tag to be added to each inputwidgetat a time, although this makes adding multiple tags more time-consuming. A syntax for use withinHTMLis to use therel-tagmicroformatwhich uses therelattributewith value "tag" (i.e.,rel="tag") to indicate that the linked-to page acts as a tag for the current context.[50]
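The separator handling described above (spaces as the separator, quotation marks as the higher-level separator for multi-word tags) can be sketched with the standard library's shell-style tokenizer; this is one plausible implementation, not the scheme of any particular tagging system:

```python
import shlex

# Split a raw tag string on spaces, letting quotation marks group a
# multi-word tag into a single token.
def tokenize_tags(raw: str) -> list:
    """'baseball tickets "new york"' → ['baseball', 'tickets', 'new york']"""
    return shlex.split(raw)
```

Comma-separated systems would instead split on `,` and trim whitespace; either way, an escaping or quoting convention is needed once the separator may appear inside a tag.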
https://en.wikipedia.org/wiki/Knowledge_tagging
pretty Easy privacy (p≡p or pEp) was a pluggable data encryption and verification system that provided automatic cryptographic key management through a set of libraries for written digital communications. It existed as a plugin for Microsoft Outlook[1] and Mozilla Thunderbird[2] as well as a mobile app for Android[3][4] and iOS.[5] p≡p also worked under Microsoft Windows, Unix-like and Mac OS X operating systems. Its cryptographic functionality was handled by an open-source p≡p engine relying on already existing cryptographic implementations in software like GnuPG, a modified version of netpgp (used only in iOS), and (as of p≡p v2.0) GNUnet. pretty Easy privacy was first released in 2016.[6] It was free and open-source software.
p≡p was advertised as being easy to install, use, and understand. p≡p did not depend on any specific platform, message transport system (SMS, email, XMPP, etc.), or centrally provided client–server or "cloud" infrastructures; p≡p was fully peer-to-peer by design.[7] Keys were exchanged opportunistically by transferring via email.[8]
Enigmail announced its support for the new "pretty Easy privacy" (p≡p) encryption in a joint Thunderbird extension to be released in December 2015.[9] Patrick Brunschwig, the head of Enigmail, announced that p≡p core functionality was implemented in Enigmail in October 2016, ready for the Mozilla Festival then taking place in London.[10] In July 2020, Thunderbird 78 dropped support for the Enigmail Add-On.[11] Thunderbird 78 includes OpenPGP functionality and no longer requires the installation of external software.[12]
The Internet Society Switzerland Chapter (ISOC-CH) and the Swiss p≡p foundation teamed up[13] to implement privacy-enhancing standards at the basic level of internet protocols, and document them in the work of the Internet Engineering Task Force (IETF). In March 2021, reports surfaced that p≡p had paid for fake reviews for their apps.[14] As of January 2024, the company overseeing p≡p is not operational.
Its website no longer functions, and development of the system has ceased.
https://en.wikipedia.org/wiki/Pretty_Easy_privacy
IBM'sAutomatic Language Translatorwas amachine translationsystem that convertedRussiandocuments intoEnglish. It used anoptical discthat stored 170,000 word-for-word and statement-for-statement translations and a custom computer to look them up at high speed. Built for the US Air Force's Foreign Technology Division, theAN/GSQ-16(orXW-2), as it was known to the Air Force, was primarily used to convert Soviet technical documents for distribution to western scientists. The translator was installed in 1959, dramatically upgraded in 1964, and was eventually replaced by amainframerunningSYSTRANin 1970. The translator began in a June 1953 contract from the US Navy to theInternational Telemeter Corporation(ITC) of Los Angeles. This was not for a translation system, but a pure research and development contract for a high-performance photographic online storage medium consisting of small black rectangles embedded in a plastic disk. When the initial contract ran out, what was then theRome Air Development Center(RADC) took up further funding in 1954 and onwards.[1] The system was developed by Gilbert King, chief of engineering at ITC, along with a team that includedLouis Ridenour. It evolved into a 16-inch plastic disk with data recorded as a series of microscopic black rectangles or clear spots. Only the outermost 4 inches of the disk were used for storage, which increased the linear speed of the portion being accessed. When the disk spun at 2,400 RPM it had an access speed of about 1 Mbit/sec. In total, the system stored 30 Mbits, making it the highest density online system of its era.[1][a] In 1954 IBM gave an influential demonstration of machine translation, known today as the "Georgetown–IBM experiment". Run on anIBM 704mainframe, the translation system knew only 250 words of Russian limited to the field of organic chemistry, and only 6 grammar rules for combining them. 
Nevertheless, the results were extremely promising, and widely reported in the press.[2] At the time, most researchers in the nascent machine translation field felt that the major challenge to providing reasonable translations was building a large library, as storage devices of the era were both too small and too slow to be useful in this role.[3]King felt that the photoscopic store was a natural solution to the problem, and pitched the idea of an automated translation system based on the photostore to the Air Force. RADC proved interested, and provided a research grant in May 1956. At the time, the Air Force also provided a grant to researchers at theUniversity of Washingtonwho were working on the problem of producing an optimal translation dictionary for the project. King advocated a simple word-for-word approach to translations. He thought that the natural redundancies in language would allow even a poor translation to be understood, and that local context was alone enough to provide reasonable guesses when faced with ambiguous terms. He stated that "the success of the human in achieving a probability of .50 in anticipating the words in a sentence is largely due to his experience and the real meanings of the words already discovered."[4]In other words, simply translating the words alone would allow a human to effectively read a document, because they would be able to reason out the proper meaning from the context provided by earlier words. In 1958 King moved to IBM'sThomas J. Watson Research Center, and continued development of the photostore-based translator. 
Over time, King changed the approach from a pure word-for-word translator to one that stored "stems and endings", which broke words into parts that could be combined back together to form complete words again.[4] The first machine, "Mark I", was demonstrated in July 1959 and consisted of a 65,000 word dictionary and a custom tube-based computer to do the lookups.[3]Texts were hand-copied ontopunched cardsusing custom Cyrillic terminals, and then input into the machine for translation. The results were less than impressive, but were enough to suggest that a larger and faster machine would be a reasonable development. In the meantime, the Mark I was applied to translations of the Soviet newspaper,Pravda. The results continued to be questionable, but King declared it a success, stating inScientific Americanthat the system was "...found, in an operational evaluation, to be quite useful by the Government."[3] On 4 October 1957 theUSSRlaunchedSputnik 1, the first artificial satellite. This caused a wave of concern in the US, whose ownProject Vanguardwas caught flat-footed and then proved to repeatedly fail in spectacular fashion. This embarrassing turn of events led to a huge investment in US science and technology, including the formation ofDARPA,NASAand a variety of intelligence efforts that would attempt to avoid being surprised in this fashion again. After a short period, the intelligence efforts centralized at theWright-Patterson Air Force Baseas the Foreign Technology Division (FTD, now known as theNational Air and Space Intelligence Center), run by the Air Force with input from theDIAand other organizations. FTD was tasked with the translation of Soviet and otherWarsaw Bloctechnical and scientific journals so researchers in the "west" could keep up to date on developments behind theIron Curtain. Most of these documents were publicly available, but FTD also made a number of one-off translations of other materials upon request. 
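The stems-and-endings refinement can be illustrated with a toy lookup. The vocabulary below is invented for illustration and is in no way the AN/GSQ-16 dictionary; the point is only the mechanism of falling back to progressively shorter stems:

```python
# Toy word-for-word translator in the spirit of King's approach:
# exact lookup first, then a crude stem fallback, with unknown words
# passed through so a human reader can still reason from context.
LEXICON = {"kniga": "book", "khorosh": "good"}   # hypothetical stems

def translate_word(word: str) -> str:
    if word in LEXICON:
        return LEXICON[word]
    for cut in range(len(word) - 1, 2, -1):      # try shorter and shorter stems
        if word[:cut] in LEXICON:
            return LEXICON[word[:cut]]
    return word                                   # unknown: pass through

def translate(sentence: str) -> str:
    return " ".join(translate_word(w) for w in sentence.split())
```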
Assuming there was a shortage of qualified translators, the FTD became extremely interested in King's efforts at IBM. Funding for an upgraded machine was soon forthcoming, and work began on a "Mark II" system based around a transistorized computer with a faster and higher-capacity 10-inch glass-based optical disc spinning at 2,400 RPM. Another addition was an optical character reader provided by a third party, which they hoped would eliminate the time-consuming process of copying the Russian text onto machine-readable cards.[3] In 1960 the Washington team also joined IBM, bringing their dictionary efforts with them. The dictionary continued to expand as additional storage was made available, reaching 170,000 words and terms by the time it was installed at the FTD. A major software update was also incorporated in the Mark II, which King referred to as "dictionary stuffing". Stuffing was an attempt to deal with the problems of ambiguous words by "stuffing" prefixes onto them from earlier words in the text.[3] These modified words would match with similarly stuffed words in the dictionary, reducing the number of false positives. In 1962 King left IBM for Itek, a military contractor in the process of rapidly acquiring new technologies. Development at IBM continued, and the system went fully operational at FTD in February 1964. The system was demonstrated at the 1964 New York World's Fair. The version at the Fair included a 150,000-word dictionary, with about one third of the words in phrases. About 3,500 of these were stored in core memory to improve performance, and an average speed of 20 words per minute was claimed. The results on the carefully selected input text were quite impressive.[5] After its return to the FTD, it was used continually until 1970, when it was replaced by a machine running SYSTRAN.[6] In 1964 the United States Department of Defense commissioned the United States National Academy of Sciences (NAS) to prepare a report on the state of machine translation.
The NAS formed the "Automatic Language Processing Advisory Committee", or ALPAC, and published its findings in 1966. The report, Language and Machines: Computers in Translation and Linguistics, was highly critical of the existing efforts, demonstrating that the systems were no faster than human translations, and that the supposed shortage of translators was in fact a surplus; as a result of supply and demand, human translation was relatively inexpensive – about $6 per 1,000 words. Worse, the FTD was slower as well; tests using physics papers as input demonstrated that the translator was "10 percent less accurate, 21 percent slower, and had a comprehension level 29 percent lower than when he used human translation."[7] The ALPAC report was as influential as the Georgetown experiment had been a decade earlier; in the immediate aftermath of its publication, the US government suspended almost all funding for machine translation research.[8] Ongoing work at IBM and Itek had ended by 1966, leaving the field to the Europeans, who continued development of systems like SYSTRAN and Logos.
https://en.wikipedia.org/wiki/Automatic_Language_Translator
In engineering, debugging is the process of finding the root cause, workarounds, and possible fixes for bugs. For software, debugging tactics can involve interactive debugging, control flow analysis, log file analysis, monitoring at the application or system level, memory dumps, and profiling. Many programming languages and software development tools also offer programs to aid in debugging, known as debuggers. The term bug, in the sense of defect, dates back at least to 1878, when Thomas Edison referred to "little faults and difficulties" in his inventions as "Bugs". A popular story from the 1940s concerns Admiral Grace Hopper.[1] While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay that impeded operation and wrote in a log book "First actual case of a bug being found". Although probably a joke, conflating the two meanings of bug (biological and defect), the story indicates that the term was used in the computer field at that time. Similarly, the term debugging was used in aeronautics before entering the world of computers. J. Robert Oppenheimer, director of the WWII atomic bomb Manhattan Project at Los Alamos, used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944,[2] regarding the recruitment of additional technical staff. The Oxford English Dictionary entry for debug uses the term debugging in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society. An article in "Airforce" (June 1945, p. 50) refers to debugging aircraft cameras. The seminal article by Gill[3] in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term bug or debugging. In the ACM's digital library, the term debugging is first used in three papers from the 1952 ACM National Meetings.[4][5][6] Two of the three use the term in quotation marks.
By 1963, debugging was a common enough term to be mentioned in passing, without explanation, on page 1 of the CTSS manual.[7] As software and electronic systems have become generally more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system. The words "anomaly" and "discrepancy" can be used, as more neutral terms, to avoid the words "error" and "defect" or "bug" where there might be an implication that all so-called errors, defects, or bugs must be fixed (at all costs). Instead, an impact assessment can be made to determine whether changes to remove an anomaly (or discrepancy) would be cost-effective for the system, or whether a scheduled new release might render the change(s) unnecessary. Not all issues are safety-critical or mission-critical in a system. Also, it is important to avoid the situation where a change might be more upsetting to users, long-term, than living with the known problem(s) (where the "cure would be worse than the disease"). Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. Considering the collateral issues, such as the cost-versus-benefit impact assessment, broader debugging techniques aim to determine the frequency of anomalies (how often the same "bugs" occur) to help assess their impact on the overall system. Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection, analysis, and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools, such as debuggers.
Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory. The term debugger can also refer to the person who is doing the debugging. Generally, high-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem happened. In those cases, memory debugger tools may be needed. In certain situations, general-purpose software tools that are language-specific in nature can be very useful. These take the form of static code analysis tools. These tools look for a very specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics (e.g. data flow) than on the syntax, as compilers and interpreters do. Both commercial and free tools exist for various languages; some claim to be able to detect hundreds of different problems. These tools can be extremely useful when checking very large source trees, where it is impractical to do code walk-throughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value. As another example, some such tools perform strong type checking when the language does not require it. Thus, they are better at locating likely errors in code that is syntactically correct. But these tools have a reputation for false positives, where correct code is flagged as dubious. The old Unix lint program is an early example. For debugging electronic hardware (e.g., computer hardware) as well as low-level software (e.g., BIOSes, device drivers) and firmware, instruments such as oscilloscopes, logic analyzers, or in-circuit emulators (ICEs) are often used, alone or in combination.
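The use-before-assignment problem mentioned above can be illustrated with a short, purely hypothetical Python function; static checkers such as Pyright report the variable as possibly unbound without ever running the code:

```python
def describe(temperature):
    """Classify a temperature; contains a classic static-analysis finding."""
    if temperature > 30:
        label = "hot"
    elif temperature < 10:
        label = "cold"
    # A static analyzer flags `label` as possibly unbound: when neither
    # branch runs (e.g. temperature == 15), the variable is used before
    # it has been assigned, and the call raises UnboundLocalError.
    return label
```

The code is syntactically valid and works for most inputs, which is exactly why data-flow analysis, rather than syntax checking, is needed to catch it.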
An ICE may perform many of the typical software debugger's tasks on low-level software and firmware. The debugging process normally begins with identifying the steps to reproduce the problem. This can be a non-trivial task, particularly with parallel processes and some Heisenbugs, for example. The specific user environment and usage history can also make it difficult to reproduce the problem. After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing a large source file. However, after simplification of the test case, only a few lines from the original source file can be sufficient to reproduce the same crash. Simplification may be done manually using a divide-and-conquer approach, in which the programmer attempts to remove some parts of the original test case and then checks if the problem still occurs. When debugging in a GUI, the programmer can try skipping some user interaction from the original problem description to check if the remaining actions are sufficient for causing the bug to occur. After the test case is sufficiently simplified, a programmer can use a debugger tool to examine program states (values of variables, plus the call stack) and track down the origin of the problem(s). Alternatively, tracing can be used. In simple cases, tracing is just a few print statements which output the values of variables at particular points during the execution of the program.[citation needed] In contrast to the general-purpose computer software design environment, a primary characteristic of embedded environments is the sheer number of different platforms available to the developers (CPU architectures, vendors, operating systems, and their variants). Embedded systems are, by definition, not general-purpose designs: they are typically developed for a single task (or small range of tasks), and the platform is chosen specifically to optimize that application.
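The divide-and-conquer simplification of a test case described above can be automated. The sketch below is a minimal, hypothetical greedy reducer in the spirit of delta debugging: `still_fails` stands in for whatever check reproduces the bug (e.g. "the compiler crashes on this input"), and the reducer repeatedly tries deleting chunks of the input while the failure persists:

```python
def simplify(test_case, still_fails):
    """Greedy divide-and-conquer reduction: repeatedly try removing
    chunks of the failing input while the failure still reproduces."""
    chunk = max(1, len(test_case) // 2)
    while chunk >= 1:
        i = 0
        while i < len(test_case):
            candidate = test_case[:i] + test_case[i + chunk:]
            if candidate and still_fails(candidate):
                test_case = candidate   # keep the smaller failing input
            else:
                i += chunk              # this chunk is needed; move on
        chunk //= 2
    return test_case

# Hypothetical "bug": the failure triggers whenever the input contains
# both the characters 'a' and 'b'.
fails = lambda s: "a" in s and "b" in s
print(simplify("xxaxxbxx", fails))  # reduces to the minimal input "ab"
```

Real reducers (delta debugging, C-Reduce and similar tools) refine this idea, but the core loop of removing pieces and re-checking the failure is the same.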
Not only does this fact make life tough for embedded system developers, it also makes debugging and testing of these systems harder, since different debugging tools are needed for different platforms. Despite the challenge of heterogeneity mentioned above, some debuggers have been developed commercially as well as research prototypes. Examples of commercial solutions come from Green Hills Software,[19] Lauterbach GmbH[20] and Microchip's MPLAB-ICD (for in-circuit debugger). Two examples of research prototype tools are Aveksha[21] and Flocklab.[22] They all leverage a functionality available on low-cost embedded processors, an On-Chip Debug Module (OCDM), whose signals are exposed through a standard JTAG interface. They are benchmarked based on how much change to the application is needed and the rate of events that they can keep up with. In addition to the typical task of identifying bugs in the system, embedded system debugging also seeks to collect information about the operating states of the system that may then be used to analyze the system: to find ways to boost its performance or to optimize other important characteristics (e.g. energy consumption, reliability, real-time response, etc.). Anti-debugging is "the implementation of one or more techniques within computer code that hinders attempts at reverse engineering or debugging a target process".[23] It is actively used by recognized publishers in copy-protection schemes, but is also used by malware to complicate its detection and elimination.[24] Anti-debugging employs a variety of techniques to detect or interfere with debuggers. An early example of anti-debugging existed in early versions of Microsoft Word which, if a debugger was detected, produced a message that said, "The tree of evil bears bitter fruit. Now trashing program disk.", after which it caused the floppy disk drive to emit alarming noises with the intent of scaring the user away from attempting it again.[25][26]
https://en.wikipedia.org/wiki/Debugging
Invasion percolation is a mathematical model, in percolation theory, of realistic fluid distributions for slow immiscible fluid invasion in porous media. It "explicitly takes into account the transport process taking place". A wetting fluid such as water takes over from a non-wetting fluid such as oil, and capillary forces are taken into account. It was introduced by Wilkinson and Willemsen (1983).[1] Invasion percolation proceeds in avalanches or bursts, and thus exhibits a form of intermittency. This avalanche behavior has been likened to self-organized criticality.[2][3]
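The model above can be sketched in a few lines. In this minimal, illustrative version (function names and parameters are ours, not from the literature), each lattice site gets a random capillary threshold, and at every step the invading fluid enters the accessible site with the smallest threshold:

```python
import heapq
import random

def invasion_percolation(n, steps, seed=0):
    """Minimal invasion percolation on an n x n site lattice: invade the
    frontier site with the smallest random capillary threshold."""
    rng = random.Random(seed)
    r = [[rng.random() for _ in range(n)] for _ in range(n)]
    start = (n // 2, n // 2)
    invaded = {start}
    frontier = []  # min-heap of (threshold, site) along the invasion front

    def push_neighbors(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in invaded:
                heapq.heappush(frontier, (r[nx][ny], (nx, ny)))

    push_neighbors(*start)
    while frontier and len(invaded) < steps:
        _, site = heapq.heappop(frontier)
        if site in invaded:
            continue  # a site may be queued from two neighbors
        invaded.add(site)
        push_neighbors(*site)
    return invaded

cluster = invasion_percolation(21, 100)
```

The always-take-the-easiest-site rule is what produces the characteristic ramified cluster and the burst-like growth dynamics; the trapping variant of the model additionally forbids invading regions of defending fluid that have become fully surrounded.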
https://en.wikipedia.org/wiki/Invasion_percolation
A 4G-LTE filter is a low-pass filter or notch filter, to be used in terrestrial television (over-the-air/OTA) TV antennas (both collective and individual), to prevent cellular transmissions from interfering with television reception. These filters are usually used for existing installations, because antennas and amplifiers sold after the new standard was applied may already be configured to receive, with good signal gain, only TV channels 14 to 51 of the UHF band, the higher channels (former TV channels 52 to 83) being attenuated. 4G LTE is the fourth-generation mobile phone standard. In urban areas, 4G uses a frequency band located between 1800 MHz and 2600 MHz, and is therefore too far from the TV band to cause any interference problems. In rural areas, however, the major operators asked to use part of the UHF band. Since the UHF frequency band is not expandable, it was agreed that television broadcasting should limit its number of channels. Thus, the frequency band dedicated to TV became 470 MHz to 700 MHz (channels 14–52), whilst 4G LTE uses the frequency bands between 700 and 900 MHz (former TV channels 52 to 83), leaving an interval of only about 1 MHz separating the two bands (DTT and 4G), so that there is a risk of interference in the areas close to the 4G-LTE transmitting towers.[1] In practice, these bands used 698 MHz to 960 MHz (depending on the carrier). This re-allocation of TV bandwidth to 4G is called the digital dividend.[2] The digital dividend of frequencies 698 to 806 MHz (TV channels 61 to 69) was assigned by the plan for the new UHF frequency band distribution agreed at the World Radio Congress (WRC-07), which identified 108 MHz of digital dividend spectrum from 698 to 806 MHz for ITU-R Regions 2-1 and nine countries in Regions 3-2, including China, India, Japan and Rep. of Korea.[3] This digital dividend is used to improve the coverage of the new 4G-LTE standard in rural areas, and therefore requires the redistribution of the UHF frequency band. Starting from January 2015 (in some countries), the main mobile operators began to deploy their very high bandwidth "True 4G" (LTE) networks using the frequencies previously attributed to TV channels 61 to 69, known as the "digital dividend".[3]
https://en.wikipedia.org/wiki/4G-LTE_filter
A frame check sequence (FCS) is an error-detecting code added to a frame in a communication protocol. Frames are used to send payload data from a source to a destination. All frames, and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of sources. The FCS field contains a number that is calculated by the source node based on the data in the frame. This number is added to the end of the frame that is sent. When the destination node receives the frame, the FCS number is recalculated and compared with the FCS number included in the frame. If the two numbers are different, an error is assumed and the frame is discarded. The FCS provides error detection only. Error recovery must be performed through separate means. Ethernet, for example, specifies that a damaged frame should be discarded, and does not specify any action to cause the frame to be retransmitted. Other protocols, notably the Transmission Control Protocol (TCP), can notice the data loss and initiate retransmission and error recovery.[2] The FCS is often transmitted in such a way that the receiver can compute a running sum over the entire frame, together with the trailing FCS, expecting to see a fixed result (such as zero) when it is correct. For Ethernet and other IEEE 802 protocols, the standard states that data is sent least significant bit first, while the FCS is sent most significant bit (bit 31) first. An alternative approach is to generate the bit reversal of the FCS so that the reversed FCS can also be sent least significant bit (bit 0) first. Refer to Ethernet frame § Frame check sequence for more information. By far the most popular FCS algorithm is a cyclic redundancy check (CRC), used in Ethernet and other IEEE 802 protocols with 32 bits, in X.25 with 16 or 32 bits, in HDLC with 16 or 32 bits, in Frame Relay with 16 bits,[3] in Point-to-Point Protocol (PPP) with 16 or 32 bits, and in other data link layer protocols. Protocols of the Internet protocol suite tend to use checksums.[4]
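The append-then-recompute scheme described above can be sketched with a 32-bit CRC. This is a simplified illustration, not any particular protocol's exact framing: real standards additionally fix the bit ordering, initial value, and complementing of the CRC (Ethernet, for instance, transmits a complemented CRC), but the detection logic is the same. Python's `zlib.crc32` uses the same generator polynomial as Ethernet's CRC-32:

```python
import zlib

def add_fcs(payload: bytes) -> bytes:
    """Compute a 32-bit CRC over the payload and append it as a trailing FCS."""
    fcs = zlib.crc32(payload)
    return payload + fcs.to_bytes(4, "little")

def check_fcs(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare it with the trailing FCS."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(fcs, "little")

frame = add_fcs(b"hello")
assert check_fcs(frame)                  # intact frame passes the check
corrupted = b"jello" + frame[5:]         # flip bits in the payload
assert not check_fcs(corrupted)          # the mismatch marks the frame bad
```

As the article notes, a failed check only tells the receiver to discard the frame; any retransmission is left to higher-layer protocols such as TCP.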
https://en.wikipedia.org/wiki/Frame_check_sequence
Google Brain was a deep learning artificial intelligence research team that served as the sole AI branch of Google before being incorporated under the newer umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources.[1] It created tools such as TensorFlow, which allow neural networks to be used by the public, and multiple internal AI research projects,[2] and aimed to create research opportunities in machine learning and natural language processing.[2] It was merged into former Google sister company DeepMind to form Google DeepMind in April 2023. The Google Brain project began in 2011 as a part-time research collaboration between Google fellow Jeff Dean and Google researcher Greg Corrado.[3] Google Brain started as a Google X project and became so successful that it was graduated back to Google: Astro Teller has said that Google Brain paid for the entire cost of Google X.[4] In June 2012, the New York Times reported that a cluster of 16,000 processors in 1,000 computers dedicated to mimicking some aspects of human brain activity had successfully trained itself to recognize a cat based on 10 million digital images taken from YouTube videos.[3] The story was also covered by National Public Radio.[5] In March 2013, Google hired Geoffrey Hinton, a leading researcher in the deep learning field, and acquired the company DNNResearch Inc. headed by Hinton. Hinton said that he would be dividing his future time between his university research and his work at Google.[6] In April 2023, Google Brain merged with Google sister company DeepMind to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI.[7] Google Brain was initially established by Google Fellow Jeff Dean and visiting Stanford professor Andrew Ng. In 2014, the team included Jeff Dean, Quoc Le, Ilya Sutskever, Alex Krizhevsky, Samy Bengio, and Vincent Vanhoucke.
In 2017, team members included Anelia Angelova, Samy Bengio, Greg Corrado, George Dahl, Michael Isard, Anjuli Kannan, Hugo Larochelle, Chris Olah, Salih Edneer, Benoit Steiner, Vincent Vanhoucke, Vijay Vasudevan, and Fernanda Viegas.[8] Chris Lattner, who created Apple's programming language Swift and then ran Tesla's autonomy team for six months, joined Google Brain's team in August 2017.[9] Lattner left the team in January 2020 and joined SiFive.[10] As of 2021, Google Brain was led by Jeff Dean, Geoffrey Hinton, and Zoubin Ghahramani. Other members included Katherine Heller, Pi-Chuan Chang, Ian Simon, Jean-Philippe Vert, Nevena Lazic, Anelia Angelova, Lukasz Kaiser, Carrie Jun Cai, Eric Breck, Ruoming Pang, Carlos Riquelme, Hugo Larochelle, and David Ha.[8] Samy Bengio left the team in April 2021,[11] and Zoubin Ghahramani took on his responsibilities. Google Research includes Google Brain and is based in Mountain View, California. It also has satellite groups in Accra, Amsterdam, Atlanta, Beijing, Berlin, Cambridge (Massachusetts), Israel, Los Angeles, London, Montreal, Munich, New York City, Paris, Pittsburgh, Princeton, San Francisco, Seattle, Tokyo, Toronto, and Zürich.[12] In October 2016, Google Brain designed an experiment to determine whether neural networks are capable of learning secure symmetric encryption.[13] In this experiment, three neural networks were created: Alice, Bob and Eve.[14] Adhering to the idea of a generative adversarial network (GAN), the goal of the experiment was for Alice to send an encrypted message to Bob that Bob could decrypt, but the adversary, Eve, could not.[14] Alice and Bob maintained an advantage over Eve, in that they shared a key used for encryption and decryption.[13] In doing so, Google Brain demonstrated the capability of neural networks to learn secure encryption.[13] In February 2017, Google Brain determined a probabilistic method for converting pictures with 8x8 resolution to a resolution of 32x32.[15][16] The method built upon an already existing probabilistic model
called PixelCNN to generate pixel translations.[17][18] The proposed software utilizes two neural networks to make approximations for the pixel makeup of translated images.[16][19] The first network, known as the "conditioning network," downsizes high-resolution images to 8x8 and attempts to create mappings from the original 8x8 image to these higher-resolution ones.[16] The other network, known as the "prior network," uses the mappings from the previous network to add more detail to the original image.[16] The resulting translated image is not the same image in higher resolution, but rather a 32x32 resolution estimation based on other existing high-resolution images.[16] Google Brain's results indicate the possibility for neural networks to enhance images.[20] The Google Brain team contributed to the Google Translate project by employing a new deep learning system that combines artificial neural networks with vast databases of multilingual texts.[21] In September 2016, Google Neural Machine Translation (GNMT) was launched, an end-to-end learning framework, able to learn from a large number of examples.[21] Previously, Google Translate's Phrase-Based Machine Translation (PBMT) approach would statistically analyze word by word and try to match corresponding words in other languages without considering the surrounding phrases in the sentence.[22] But rather than choosing a replacement for each individual word in the desired language, GNMT evaluates word segments in the context of the rest of the sentence to choose more accurate replacements.[2] Compared to older PBMT models, the GNMT model scored a 24% improvement in similarity to human translation, with a 60% reduction in errors.[2][21] The GNMT has also shown significant improvement for notoriously difficult translations, like Chinese to English.[21] While the introduction of the GNMT has increased the quality of Google Translate's translations for the pilot languages, it was very difficult to create such improvements for all of its 103
languages. Addressing this problem, the Google Brain team was able to develop a Multilingual GNMT system, which extended the previous one by enabling translations between multiple languages. Furthermore, it allows for zero-shot translations, which are translations between two languages that the system has never explicitly seen before.[23] Google announced that Google Translate can now also translate without transcribing, using neural networks. This means that it is possible to translate speech in one language directly into text in another language, without first transcribing it to text. According to the researchers at Google Brain, this intermediate step can be avoided using neural networks. In order for the system to learn this, they exposed it to many hours of Spanish audio together with the corresponding English text. The different layers of neural networks, replicating the human brain, were able to link the corresponding parts and subsequently manipulate the audio waveform until it was transformed to English text.[24] Another drawback of the GNMT model is that it causes the time of translation to increase exponentially with the number of words in the sentence.[2] This caused the Google Brain team to add 2,000 more processors to ensure the new translation process would still be fast and reliable.[22] Aiming to improve traditional robotics control algorithms where new skills of a robot need to be hand-programmed, robotics researchers at Google Brain developed machine learning techniques to allow robots to learn new skills on their own.[25] They also attempted to develop ways for information sharing between robots so that robots can learn from each other during their learning process, also known as cloud robotics.[26] As a result, Google launched the Google Cloud Robotics Platform for developers in 2019, an effort to combine robotics, AI, and the cloud to enable efficient robotic automation through cloud-connected collaborative robots.[26] Robotics research at Google Brain
has focused mostly on improving and applying deep learning algorithms to enable robots to complete tasks by learning from experience, simulation, human demonstrations, and/or visual representations.[27][28][29][30] For example, Google Brain researchers showed that robots can learn to pick and throw rigid objects into selected boxes by experimenting in an environment without being pre-programmed to do so.[27] In another research project, researchers trained robots to learn behaviors such as pouring liquid from a cup; robots learned from videos of human demonstrations recorded from multiple viewpoints.[29] Google Brain researchers have collaborated with other companies and academic institutions on robotics research. In 2016, the Google Brain team collaborated with researchers at X on learning hand-eye coordination for robotic grasping.[31] Their method allowed real-time robot control for grasping novel objects with self-correction.[31] In 2020, researchers from Google Brain, Intel AI Lab, and UC Berkeley created an AI model for robots to learn surgery-related tasks such as suturing from training with surgery videos.[30] In 2020, the Google Brain team and the University of Lille presented a model for automatic speaker recognition which they called Interactive Speaker Recognition.
The ISR module recognizes a speaker from a given list of speakers only by requesting a few user-specific words.[32] The model can be altered to choose speech segments in the context of text-to-speech training.[32] It can also prevent malicious voice generators from accessing the data.[32] TensorFlow is an open source software library powered by Google Brain that allows anyone to utilize machine learning by providing the tools to train one's own neural network.[2] The tool has been used to develop software using deep learning models that farmers use to reduce the amount of manual labor required to sort their yield, by training it with a data set of human-sorted images.[2] Magenta is a project that uses Google Brain to create new information in the form of art and music rather than classify and sort existing data.[2] TensorFlow was updated with a suite of tools for users to guide the neural network to create images and music.[2] However, the team from Valdosta State University found that the AI struggles to perfectly replicate human intention in artistry, similar to the issues faced in translation.[2] The image sorting capabilities of Google Brain have been used to help detect certain medical conditions by seeking out patterns that human doctors may not notice to provide an earlier diagnosis.[2] During screening for breast cancer, this method was found to have one quarter the false positive rate of human pathologists, who require more time to look over each photo and cannot spend their entire focus on this one task.[2] Due to the neural network's very specific training for a single task, it cannot identify other afflictions present in a photo that a human could easily spot.[2] The transformer deep learning architecture was invented by Google Brain researchers in 2017, and explained in the scientific paper Attention Is All You Need.[33] Google owns a patent on this widely used architecture, but hasn't enforced it.[34][35] Google Brain announced in 2022 that it created two different types
of text-to-image models called Imagen and Parti that compete with OpenAI's DALL-E.[36][37] Later in 2022, the project was extended to text-to-video.[38] Imagen development was transferred to Google DeepMind after the merger with DeepMind.[39] The Google Brain projects' technology is currently used in various other Google products such as the Android operating system's speech recognition system, photo search for Google Photos, smart reply in Gmail, and video recommendations in YouTube.[40][41][42] Google Brain has received coverage in Wired,[43][44][45] NPR,[5] and Big Think.[46] These articles have contained interviews with key team members Ray Kurzweil and Andrew Ng, and focus on explanations of the project's goals and applications.[43][5][46] In December 2020, AI ethicist Timnit Gebru left Google.[47] While the exact nature of her quitting or being fired is disputed, the cause of the departure was her refusal to retract a paper entitled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"
and a related ultimatum she made, setting conditions to be met otherwise she would leave.[47] This paper explored potential risks of the growth of AI such as Google Brain, including environmental impact, biases in training data, and the ability to deceive the public.[47][48] The request to retract the paper was made by Megan Kacholia, vice president of Google Brain.[49] As of April 2021, nearly 7,000 current or former Google employees and industry supporters had signed an open letter accusing Google of "research censorship" and condemning Gebru's treatment at the company.[50] In February 2021, Google fired one of the leaders of the company's AI ethics team, Margaret Mitchell.[49] The company's statement alleged that Mitchell had broken company policy by using automated tools to find support for Gebru.[49] In the same month, engineers outside the ethics team began to quit, citing the termination of Gebru as their reason for leaving.[51] In April 2021, Google Brain co-founder Samy Bengio announced his resignation from the company.[11] Despite being Gebru's manager, Bengio was not notified before her termination, and he posted online in support of both her and Mitchell.[11] While Bengio's announcement focused on personal growth as his reason for leaving, anonymous sources indicated to Reuters that the turmoil within the AI ethics team played a role in his considerations.[11] In March 2022, Google fired AI researcher Satrajit Chatterjee after he questioned the findings of a paper published in Nature by Google's AI team members, Anna Goldie and Azalia Mirhoseini.[52][53] This paper reported good results from the use of AI techniques (in particular reinforcement learning) for the placement problem for integrated circuits.[54] However, this result is quite controversial,[55][56][57] as the paper does not contain head-to-head comparisons to existing placers, and is difficult to replicate due to proprietary content.
At least one initially favorable commentary has been retracted upon further review,[58] and the paper is under investigation by Nature.[59]
https://en.wikipedia.org/wiki/Google_Brain
The law of total variance is a fundamental result in probability theory that expresses the variance of a random variable Y in terms of its conditional variances and conditional means given another random variable X. Informally, it states that the overall variability of Y can be split into an "unexplained" component (the average of within-group variances) and an "explained" component (the variance of group means).

Formally, if X and Y are random variables on the same probability space, and Y has finite variance, then:

    Var(Y) = E[Var(Y | X)] + Var(E[Y | X]).

This identity is also known as the variance decomposition formula, the conditional variance formula, the law of iterated variances, or colloquially as Eve's law,[1] in parallel to the "Adam's law" naming for the law of total expectation. In actuarial science (particularly in credibility theory), the two terms E[Var(Y | X)] and Var(E[Y | X]) are called the expected value of the process variance (EVPV) and the variance of the hypothetical means (VHM) respectively.[2]

Let Y be a random variable and X another random variable on the same probability space. The law of total variance can be understood by noting: Adding these components yields the total variance Var(Y), mirroring how analysis of variance partitions variation.

Suppose five students take an exam scored 0–100. Let Y = student's score and X indicate whether the student is international or domestic: Both groups share the same mean (50), so the explained variance Var(E[Y | X]) is 0, and the total variance equals the average of the within-group variances (weighted by group size), i.e. 800.
Let X be a coin flip taking value Heads with probability h and Tails with probability 1 − h. Given Heads, Y ~ Normal(μ_h, σ_h²); given Tails, Y ~ Normal(μ_t, σ_t²). Then

    E[Var(Y | X)] = h σ_h² + (1 − h) σ_t²,
    Var(E[Y | X]) = h (1 − h) (μ_h − μ_t)²,

so

    Var(Y) = h σ_h² + (1 − h) σ_t² + h (1 − h) (μ_h − μ_t)².

Consider a two-stage experiment: Then E[Y | X = i] = p_i and Var(Y | X = i) = p_i(1 − p_i). The overall variance of Y becomes

    Var(Y) = E[p_X(1 − p_X)] + Var(p_X),

with p_X uniform on {p_1, …, p_6}.

Let (X_i, Y_i), i = 1, …, n, be observed pairs.
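The dice-and-coin decomposition above can be checked numerically. The text leaves p_1, …, p_6 unspecified, so the values below are hypothetical, chosen only for illustration; a minimal sketch, assuming X is a fair die and Y | X = i is Bernoulli(p_i):

```python
# Numerical check of the law of total variance for the two-stage
# experiment: roll a fair die X, then draw Y ~ Bernoulli(p_X).
# The p_i values are hypothetical placeholders for illustration.
p = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # assumed p_1..p_6

# Direct computation: Y is Bernoulli with mean E[p_X], and Y^2 = Y.
mean_p = sum(p) / 6
var_Y_direct = mean_p - mean_p**2             # E[Y^2] - E[Y]^2

# Decomposition: E[Var(Y|X)] + Var(E[Y|X]).
evpv = sum(pi * (1 - pi) for pi in p) / 6     # expected process variance
vhm = sum(pi**2 for pi in p) / 6 - mean_p**2  # variance of the means
var_Y_decomposed = evpv + vhm

assert abs(var_Y_direct - var_Y_decomposed) < 1e-12
print(var_Y_direct, var_Y_decomposed)
```

Both routes give the same value, illustrating that the identity is exact rather than an approximation.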
Define Ȳ = E[Y]. Then

    Var(Y) = (1/n) Σᵢ (Yᵢ − Ȳ)² = (1/n) Σᵢ [(Yᵢ − Ȳ_{Xᵢ}) + (Ȳ_{Xᵢ} − Ȳ)]²,

where Ȳ_{Xᵢ} = E[Y | X = Xᵢ]. Expanding the square and noting the cross term cancels in summation yields:

    Var(Y) = E[Var(Y | X)] + Var(E[Y | X]).

Using Var(Y) = E[Y²] − E[Y]² and the law of total expectation:

    E[Y²] = E[E(Y² | X)] = E[Var(Y | X) + E[Y | X]²].

Subtract E[Y]² = (E[E(Y | X)])² and regroup to arrive at

    Var(Y) = E[Var(Y | X)] + Var(E[Y | X]).
The F-test examines whether the explained component is sufficiently large to indicate that X has a significant effect on Y.[3]

In linear regression and related models, if Ŷ = E[Y | X], the fraction of variance explained is

    R² = Var(Ŷ) / Var(Y) = Var(E[Y | X]) / Var(Y) = 1 − E[Var(Y | X)] / Var(Y).

In the simple linear case (one predictor), R² also equals the square of the Pearson correlation coefficient between X and Y.

In many Bayesian and ensemble methods, one decomposes prediction uncertainty via the law of total variance. For a Bayesian neural network with random parameters θ:

    Var(Y) = E[Var(Y | θ)] + Var(E[Y | θ]),

often referred to as "aleatoric" (within-model) vs.
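The two expressions for R² above can be verified on a small dataset with a discrete predictor, where E[Y | X] is simply the group mean. The data below are hypothetical, chosen for illustration only:

```python
# Two equivalent ways to compute R^2 via the variance decomposition,
# using population (ddof=0) variances throughout. Data are hypothetical.
import numpy as np

x = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])   # discrete predictor (groups)
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])

# E[Y | X]: replace each observation by its group mean.
group_means = {g: y[x == g].mean() for g in np.unique(x)}
y_hat = np.array([group_means[g] for g in x])

var_y = y.var()                          # Var(Y)
explained = y_hat.var()                  # Var(E[Y | X])
unexplained = np.mean((y - y_hat) ** 2)  # E[Var(Y | X)]

r2_a = explained / var_y
r2_b = 1 - unexplained / var_y

assert abs(var_y - (explained + unexplained)) < 1e-12  # Eve's law
assert abs(r2_a - r2_b) < 1e-12
print(r2_a)  # fraction of variance explained by the grouping
```

With these numbers the explained variance is 6, the unexplained variance is 2/3, and R² works out to 0.9 by either formula.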
"epistemic" (between-model) uncertainty.[4]

Credibility theory uses the same partitioning: the expected value of process variance (EVPV), E[Var(Y | X)], and the variance of hypothetical means (VHM), Var(E[Y | X]). The ratio of explained to total variance determines how much "credibility" to give to individual risk classifications.[2]

For jointly Gaussian (X, Y), the fraction Var(E[Y | X]) / Var(Y) relates directly to the mutual information I(Y; X).[5] In non-Gaussian settings, a high explained-variance ratio still indicates significant information about Y contained in X.

The law of total variance generalizes to multiple or nested conditionings. For example, with two conditioning variables X₁ and X₂:

    Var(Y) = E[Var(Y | X₁, X₂)] + E[Var(E[Y | X₁, X₂] | X₁)] + Var(E[Y | X₁]).

More generally, the law of total cumulance extends this approach to higher moments.
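The three-term nested decomposition can be checked exactly by enumerating a small discrete joint distribution. The joint pmf below is hypothetical, chosen only to exercise the identity; a sketch:

```python
# Exact check of the nested law of total variance
#   Var(Y) = E[Var(Y|X1,X2)] + E[Var(E[Y|X1,X2] | X1)] + Var(E[Y|X1])
# on a small hypothetical joint pmf over (x1, x2, y).
from collections import defaultdict

pmf = {  # hypothetical joint pmf, probabilities sum to 1
    (0, 0, 1.0): 0.10, (0, 0, 3.0): 0.15,
    (0, 1, 2.0): 0.20, (0, 1, 5.0): 0.05,
    (1, 0, 0.0): 0.25, (1, 0, 4.0): 0.10,
    (1, 1, 1.0): 0.05, (1, 1, 6.0): 0.10,
}

# Unconditional mean and variance of Y.
ey = sum(p * y for (_, _, y), p in pmf.items())
ey2 = sum(p * y * y for (_, _, y), p in pmf.items())
var_y = ey2 - ey**2

def cond_moments(keyfunc):
    """Weight, conditional mean, conditional second moment per key."""
    w, m1, m2 = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x1, x2, y), p in pmf.items():
        k = keyfunc(x1, x2)
        w[k] += p; m1[k] += p * y; m2[k] += p * y * y
    return {k: (w[k], m1[k] / w[k], m2[k] / w[k]) for k in w}

both = cond_moments(lambda a, b: (a, b))  # condition on (X1, X2)
one = cond_moments(lambda a, b: a)        # condition on X1 only

# Term 1: E[Var(Y | X1, X2)].
term1 = sum(w * (m2 - m1**2) for w, m1, m2 in both.values())

# Term 2: E[Var(E[Y | X1, X2] | X1)] -- variance of the cell means
# within each X1 slice, averaged over X1.
term2 = 0.0
for x1 in one:
    cells = {k: v for k, v in both.items() if k[0] == x1}
    wsum = sum(w for w, _, _ in cells.values())
    mean_in = sum(w * m for w, m, _ in cells.values()) / wsum
    var_in = sum(w * m * m for w, m, _ in cells.values()) / wsum - mean_in**2
    term2 += wsum * var_in

# Term 3: Var(E[Y | X1]).
term3 = sum(w * m1**2 for w, m1, _ in one.values()) - ey**2

assert abs(var_y - (term1 + term2 + term3)) < 1e-10
```

Because every quantity is computed by exact enumeration rather than simulation, the three terms sum to Var(Y) up to floating-point rounding.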
https://en.wikipedia.org/wiki/Law_of_total_variance
In mathematics and computer science, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs, links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.

Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.

In one restricted but very common sense of the term,[1][2] a graph is an ordered pair G = (V, E) comprising: To avoid ambiguity, this type of object may be called an undirected simple graph.

In the edge {x, y}, the vertices x and y are called the endpoints of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. Under this definition, multiple edges, in which two or more edges connect the same vertices, are not allowed.

In one more general sense of the term allowing multiple edges,[3][4] a graph is an ordered triple G = (V, E, φ) comprising: To avoid ambiguity, this type of object may be called an undirected multigraph.

A loop is an edge that joins a vertex to itself. Graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (for an undirected simple graph) or is incident on (for an undirected multigraph) {x, x} = {x}, which is not in {{x, y} | x, y ∈ V and x ≠ y}. To allow loops, the definitions must be expanded.
For undirected simple graphs, the definition of E should be modified to E ⊆ {{x, y} | x, y ∈ V}. For undirected multigraphs, the definition of φ should be modified to φ : E → {{x, y} | x, y ∈ V}. To avoid ambiguity, these types of objects may be called undirected simple graph permitting loops and undirected multigraph permitting loops (sometimes also undirected pseudograph), respectively.

V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. Moreover, V is often assumed to be non-empty, but E is allowed to be the empty set. The order of a graph is |V|, its number of vertices. The size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that are incident to it, where a loop is counted twice. The degree of a graph is the maximum of the degrees of its vertices. In an undirected simple graph of order n, the maximum degree of each vertex is n − 1 and the maximum size of the graph is n(n − 1)/2.

The edges of an undirected simple graph permitting loops G induce a symmetric homogeneous relation ∼ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ∼ y.

A directed graph or digraph is a graph in which edges have orientations. In one restricted but very common sense of the term,[5] a directed graph is an ordered pair G = (V, E) comprising: To avoid ambiguity, this type of object may be called a directed simple graph.
In set theory and graph theory, Vⁿ denotes the set of n-tuples of elements of V, that is, ordered sequences of n elements that are not necessarily distinct. In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head.

In one more general sense of the term allowing multiple edges,[5] a directed graph is an ordered triple G = (V, E, φ) comprising: To avoid ambiguity, this type of object may be called a directed multigraph.

A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph) (x, x), which is not in {(x, y) | (x, y) ∈ V² and x ≠ y}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E should be modified to E ⊆ {(x, y) | (x, y) ∈ V²}. For directed multigraphs, the definition of φ should be modified to φ : E → {(x, y) | (x, y) ∈ V²}.
To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively.

The edges of a directed simple graph permitting loops G induce a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.

Graphs can be used to model many types of relations and processes in physical, biological,[7][8] social and information systems.[9] Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges, and the subject that expresses and understands real-world systems as a network is called network science.

Within computer science, 'causal' and 'non-causal' linked structures are graphs that are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media,[10] travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases,[11][12] and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems.
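The directed simple graph definition above translates directly into code: V is a set of vertices and E a set of ordered (tail, head) pairs with no loops. A minimal sketch, with illustrative example vertices and edges:

```python
# A directed simple graph G = (V, E) with E a set of ordered pairs,
# following the definition above. The example data are illustrative.
V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (3, 1), (1, 4)}   # (tail, head) pairs, no loops

# Validity: endpoints lie in V and no edge joins a vertex to itself.
assert all(x in V and y in V and x != y for x, y in E)

def out_degree(v):
    """Number of edges with tail v."""
    return sum(1 for x, _ in E if x == v)

def in_degree(v):
    """Number of edges with head v."""
    return sum(1 for _, y in E if y == v)

def adjacent(x, y):
    """Adjacency relation: some edge joins x and y (either direction)."""
    return (x, y) in E or (y, x) in E

assert out_degree(1) == 2 and in_degree(1) == 1
assert adjacent(1, 2) and not adjacent(2, 4)
```

Replacing the set of pairs E with a multiset (e.g. a list) and a separate incidence map φ would give the directed multigraph variant.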
Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.

Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph. More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs. Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still, other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has borne organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others.

Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. Also, "the Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand."[13] In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds.
This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. Similarly, in computational neuroscience graphs can be used to represent functional connections between brain areas that interact to give rise to various cognitive processes, where the vertices represent different areas of the brain and the edges represent the connections between those areas. Graph theory plays an important role in electrical modeling of electrical networks; here, weights are associated with resistance of the wire segments to obtain electrical properties of network structures.[14] Graphs are also used to represent the micro-scale channels of porous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores. Chemical graph theory uses the molecular graph as a means to model molecules. Graphs and networks are excellent models to study and understand phase transitions and critical phenomena. Removal of nodes or edges leads to a critical transition where the network breaks into small clusters, which is studied as a phase transition. This breakdown is studied via percolation theory.[15]

Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore rumor spreading, notably through the use of social network analysis software. Under the umbrella of social networks are many different types of graphs.[17] Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together.
Likewise, graph theory is useful in biology and conservation efforts, where a vertex can represent regions where certain species exist (or inhabit) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites, or how changes to the movement can affect other species.

Graphs are also commonly used in molecular biology and genomics to model and analyse datasets with complex relationships. For example, graph-based methods are often used to 'cluster' cells together into cell-types in single-cell transcriptome analysis. Another use is to model genes or proteins in a pathway and study the relationships between them, such as metabolic pathways and gene regulatory networks.[18] Evolutionary trees, ecological networks, and hierarchical clustering of gene expression patterns are also represented as graph structures. Graph theory is also used in connectomics;[19] nervous systems can be seen as a graph, where the nodes are neurons and the edges are the connections between them.

In mathematics, graphs are useful in geometry and certain parts of topology such as knot theory. Algebraic graph theory has close links with group theory. Algebraic graph theory has been applied to many areas, including dynamic systems and complexity.

A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. There may be several weights associated with each edge, including distance (as in the previous example), travel time, or monetary cost. Such weighted graphs are commonly used to program GPS's, and travel-planning search engines that compare flight times and costs.
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory.[20] This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy[21] and L'Huilier,[22] and represents the beginning of the branch of mathematics known as topology.

More than one century after Euler's paper on the bridges of Königsberg, and while Listing was introducing the concept of topology, Cayley was led by an interest in particular analytical forms arising from differential calculus to study a particular class of graphs, the trees.[23] This study had many implications for theoretical chemistry. The techniques he used mainly concern the enumeration of graphs with particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937. These were generalized by De Bruijn in 1959. Cayley linked his results on trees with contemporary studies of chemical composition.[24] The fusion of ideas from mathematics with those from chemistry began what has become part of the standard terminology of graph theory. In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:[25]

The first textbook on graph theory was written by Dénes Kőnig, and published in 1936.[26] Another book by Frank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject",[27] and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other.
Harary donated all of the royalties to fund the Pólya Prize.[28]

One of the most famous and stimulating problems in graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852, and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations, and more specially the results obtained by Turán in 1941, were at the origin of another branch of graph theory, extremal graph theory.

The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers.[29] A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch.[30][31] The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.[32]

The autonomous development of topology from 1860 to 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra.
The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits. The introduction of probabilistic methods in graph theory, especially in the study by Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results.

A graph is an abstraction of relationships that emerge in nature; hence, it cannot be coupled to a certain representation. The way it is represented depends on the degree of convenience such representation provides for a certain application. The most common representations are the visual, in which, usually, vertices are drawn and connected by edges, and the tabular, in which rows of a table provide information about the relationships between the vertices within the graph.

Graphs are usually represented visually by drawing a point or circle for every vertex, and drawing a line between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. If the graph is weighted, the weight is added on the arrow.

A graph drawing should not be confused with the graph itself (the abstract, non-visual structure), as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice, it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain, some layouts may be better suited and easier to understand than others. The pioneering work of W. T. Tutte was very influential on the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings.
Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied. There are other techniques to visualize a graph away from vertices and edges, including circle packings, intersection graphs, and other visualizations of the adjacency matrix.

The tabular representation lends itself well to computational applications. There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures, but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory. Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation.[33]

List structures include the edge list, an array of pairs of vertices, and the adjacency list, which separately lists the neighbors of each vertex: much like the edge list, each vertex has a list of which vertices it is adjacent to.

Matrix structures include the incidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The degree matrix indicates the degree of vertices.
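The list and matrix structures described above can be built from the same edge list. A minimal sketch for a small illustrative undirected graph:

```python
# Building an adjacency list and an adjacency matrix from an edge list
# for a small undirected graph. The example graph is illustrative.
edge_list = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4  # number of vertices

# Adjacency list: each vertex maps to the list of its neighbors.
adj_list = {v: [] for v in range(n)}
for u, v in edge_list:
    adj_list[u].append(v)
    adj_list[v].append(u)

# Adjacency matrix: rows and columns indexed by vertices,
# 1 for adjacent vertices, 0 for non-adjacent.
adj_matrix = [[0] * n for _ in range(n)]
for u, v in edge_list:
    adj_matrix[u][v] = adj_matrix[v][u] = 1

# The degree of a vertex can be read off either structure.
assert len(adj_list[2]) == sum(adj_matrix[2]) == 3
print(adj_list)  # {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```

The list form stores 2|E| entries and suits sparse graphs, while the matrix form stores n² entries but answers "are u and v adjacent?" in constant time, matching the trade-off described above.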
The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff's theorem on the number of spanning trees of a graph. The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices.

There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).

A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too. Finding maximal subgraphs of a certain kind is often an NP-complete problem. For example: One special case of subgraph isomorphism is the graph isomorphism problem. It asks whether two graphs are isomorphic. It is not known whether this problem is NP-complete, nor whether it can be solved in polynomial time.

A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example: Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too.
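As a sketch of Kirchhoff's theorem in use: the number of spanning trees equals any cofactor of the Laplacian matrix L = D − A. For the complete graph K4 this should give 4^(4−2) = 16, matching Cayley's formula:

```python
# Counting spanning trees of K4 via Kirchhoff's theorem: build the
# Laplacian L = D - A, delete one row and column, take the determinant.
import numpy as np

n = 4
A = np.ones((n, n)) - np.eye(n)      # adjacency matrix of K4
D = np.diag(A.sum(axis=1))           # degree matrix (all degrees = 3)
L = D - A                            # Laplacian matrix

cofactor = np.linalg.det(L[1:, 1:])  # delete row 0 and column 0
assert round(cofactor) == 16         # Cayley's formula: n^(n-2) = 4^2
```

The same three lines of matrix setup work for any undirected graph given its adjacency matrix; only the determinant step is specific to the spanning-tree count.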
For example, Wagner's Theorem states: A similar problem, the subdivision containment problem, is to find a fixed graph as a subdivision of a given graph. A subdivision or homeomorphism of a graph is any graph obtained by subdividing some (or no) edges. Subdivision containment is related to graph properties such as planarity. For example, Kuratowski's Theorem states: Another problem in subdivision containment is the Kelmans–Seymour conjecture: Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their point-deleted subgraphs.

Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations. Among the famous results and conjectures concerning graph coloring are the following:

Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs, which are more specific and thus contain a greater amount of information, are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known. For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure.
There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example: Covering problems in graphs may refer to various set cover problems on subsets of vertices/subgraphs. Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), has a wide variety of questions. Often, the problem is to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graph Kn into n − 1 specified trees having, respectively, 1, 2, 3, ..., n − 1 edges. Some specific decomposition problems and similar problems that have been studied include: Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:
https://en.wikipedia.org/wiki/Graph_theory
A software development kit (SDK) is a collection of software development tools in one installable package. SDKs facilitate the creation of applications by including a compiler, debugger and sometimes a software framework. They are normally specific to a hardware platform and operating system combination. To create applications with advanced functionalities such as advertisements, push notifications, etc., most application software developers use specific software development kits. Some SDKs are required for developing a platform-specific app. For example, the development of an Android app on the Java platform requires a Java Development Kit. For iOS applications (apps) the iOS SDK is required. For the Universal Windows Platform the .NET Framework SDK might be used. There are also SDKs that add additional features and can be installed in apps to provide analytics, data about application activity, and monetization options. Some prominent creators of these types of SDKs include Google, Smaato, InMobi, and Facebook. An SDK can take the form of application programming interfaces[1] in the form of on-device libraries of reusable functions used to interface to a particular programming language, or it may be as complex as hardware-specific tools that can communicate with a particular embedded system.[2] Common tools include debugging facilities and other utilities, often presented in an integrated development environment.[3] SDKs may include sample software and/or technical notes along with documentation, and tutorials to help clarify points made by the primary reference material.[4][5] SDKs often include licenses that make them unsuitable for building software intended to be developed under an incompatible license.
For example, a proprietary SDK is generally incompatible with free software development, while a GNU General Public License'd SDK could be incompatible with proprietary software development, for legal reasons.[6][7] However, SDKs built under the GNU Lesser General Public License are typically usable for proprietary development.[8][9] In cases where the underlying technology is new, SDKs may include hardware. For example, AirTag's 2012 near-field communication SDK included both the paying and the reading halves of the necessary hardware stack.[10] The average Android mobile app implements 15.6 separate SDKs, with gaming apps implementing on average 17.5 different SDKs.[11][12] The most popular SDK categories for Android mobile apps are analytics and advertising.[12] SDKs can be unsafe (because they are implemented within apps yet run separate code). Malicious SDKs (with honest intentions or not) can violate users' data privacy, damage app performance, or even cause apps to be banned from Google Play or the App Store.[13] New technologies allow app developers to control and monitor client SDKs in real time. Providers of SDKs for specific systems or subsystems sometimes substitute a more specific term instead of software. For instance, both Microsoft[14] and Citrix[15] provide a driver development kit for developing device drivers. Examples of software development kits for various platforms include:
https://en.wikipedia.org/wiki/Software_development_kit
Minimal mappings are the result of an advanced technique of semantic matching, a technique used in computer science to identify information which is semantically related.[1] Semantic matching has been proposed as a valid solution to the semantic heterogeneity problem, namely, supporting diversity in knowledge.[2] Given any two graph-like structures, e.g. classifications, databases, or XML schemas and ontologies, matching is an operator which identifies those nodes in the two structures that semantically correspond to one another. For example, applied to file systems, it can identify that a folder labeled "car" is semantically equivalent to another folder "automobile" because they are synonyms in English. The proposed technique works on lightweight ontologies, namely, tree structures where each node is labeled by a natural language sentence, for example in English.[3] These sentences are translated into a formal logical formula (according to an unambiguous, artificial language). The formula codifies the meaning of the node, accounting for its position in the graph. For example, in case the folder "car" is under another folder "red", the meaning of the folder "car" becomes "red car", which is translated into the logical formula "red AND car". The output of matching is a mapping, namely a set of semantic correspondences between the two graphs. Each mapping element is attached with a semantic relation, for example equivalence. Among all possible mappings, the minimal mapping is such that all other mapping elements can be computed from the minimal set in an amount of time proportional to the size of the input graphs (linear time) and none of the elements in the minimal set can be dropped without preventing such a computation. The main advantage of minimal mappings is that they minimize the number of nodes for subsequent processing.
Notice that this is a rather important feature because the number of possible mappings can reach n × m, with n and m the sizes of the two input ontologies. In particular, minimal mappings become crucial with large ontologies, e.g. DMOZ, where even relatively small (non-minimal) subsets of the number of possible mapping elements, potentially millions of them, are unmanageable. Minimal mappings also provide usability advantages. Many systems and corresponding interfaces, mostly graphical, have been provided for the management of mappings, but all of them scale poorly with the number of nodes. Visualizations of large graphs are rather messy.[4] Maintenance of smaller mappings is much easier, faster and less error-prone.
https://en.wikipedia.org/wiki/Minimal_mappings
A small-world network is a graph characterized by a high clustering coefficient and low distances. In the example of a social network, high clustering implies a high probability that two friends of one person are friends themselves. The low distances, on the other hand, mean that there is a short chain of social connections between any two people (this effect is known as six degrees of separation).[1] Specifically, a small-world network is defined to be a network where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network, that is, L ∝ log N,[2] while the global clustering coefficient is not small. In the context of a social network, this results in the small world phenomenon of strangers being linked by a short chain of acquaintances. Many empirical graphs show the small-world effect, including social networks, wikis such as Wikipedia, gene networks, and even the underlying architecture of the Internet. It is the inspiration for many network-on-chip architectures in contemporary computer hardware.[3] A certain category of small-world networks were identified as a class of random graphs by Duncan Watts and Steven Strogatz in 1998.[4] They noted that graphs could be classified according to two independent structural features, namely the clustering coefficient and the average node-to-node distance (also known as the average shortest path length). Purely random graphs, built according to the Erdős–Rényi (ER) model, exhibit a small average shortest path length (varying typically as the logarithm of the number of nodes) along with a small clustering coefficient. Watts and Strogatz measured that in fact many real-world networks have a small average shortest path length, but also a clustering coefficient significantly higher than expected by random chance.
Watts and Strogatz then proposed a novel graph model, currently named the Watts–Strogatz model, with (i) a small average shortest path length, and (ii) a large clustering coefficient. The crossover in the Watts–Strogatz model between a "large world" (such as a lattice) and a small world was first described by Barthelemy and Amaral in 1999.[5] This work was followed by many studies, including exact results (Barrat and Weigt, 1999; Dorogovtsev and Mendes; Barmpoutis and Murray, 2010). Small-world networks tend to contain cliques, and near-cliques, meaning sub-networks which have connections between almost any two nodes within them. This follows from the defining property of a high clustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path. This follows from the defining property that the mean shortest-path length be small. Several other properties are often associated with small-world networks. Typically there is an over-abundance of hubs – nodes in the network with a high number of connections (known as high-degree nodes). These hubs serve as the common connections mediating the short path lengths between other edges. By analogy, the small-world network of airline flights has a small mean path length (i.e. between any two cities you are likely to have to take three or fewer flights) because many flights are routed through hub cities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them (the degree distribution of the network). Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values. This is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above.
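The Watts–Strogatz construction can be sketched in a few lines: start from a ring lattice (high clustering, long paths) and rewire each edge with probability p, which shortcuts the long paths while mostly preserving clustering for small p. This is a minimal illustrative implementation; the function name and the adjacency-set representation are my own choices, not reference code from the original paper.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Sketch of the Watts-Strogatz small-world model.

    Build a ring lattice of n nodes where each node connects to its k
    nearest neighbours (k even), then rewire each original edge with
    probability p to a uniformly chosen non-neighbour. p = 0 keeps the
    high-clustering lattice, p = 1 approaches a random graph, and small
    p > 0 yields the small-world regime.
    """
    rng = random.Random(seed)
    # Ring lattice: connect each node i to i+1, ..., i+k/2 (mod n).
    edges = {(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)}
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Rewire each original edge with probability p.
    for u, v in sorted(edges):
        if rng.random() < p:
            choices = [w for w in range(n) if w != u and w not in adj[u]]
            if choices:
                w = rng.choice(choices)
                adj[u].discard(v)
                adj[v].discard(u)
                adj[u].add(w)
                adj[w].add(u)
    return adj

g = watts_strogatz(20, 4, 0.1, seed=42)
print(sum(len(nbrs) for nbrs in g.values()) // 2)  # 40 -- rewiring preserves edge count
```

Because each rewiring step removes exactly one existing edge and adds exactly one new one, the total number of edges (n·k/2) is invariant, which is what makes the model a controlled interpolation between lattice and random graph.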
Network small-worldness has been quantified by a small-world coefficient, σ, calculated by comparing the clustering and path length of a given network to an Erdős–Rényi model with the same average degree:[6][7] σ = (C/Cr)/(L/Lr), where C and L are the clustering coefficient and characteristic path length of the network being tested, and Cr and Lr are the corresponding values for an equivalent random network. Another method for quantifying network small-worldness utilizes the original definition of the small-world network, comparing the clustering of a given network to an equivalent lattice network and its path length to an equivalent random network. The small-world measure ω is defined as[8] ω = Lr/L − C/Cℓ, where the characteristic path length L and clustering coefficient C are calculated from the network being tested, Cℓ is the clustering coefficient for an equivalent lattice network and Lr is the characteristic path length for an equivalent random network. Still another method for quantifying small-worldness normalizes both the network's clustering and path length relative to these characteristics in equivalent lattice and random networks. The Small World Index (SWI) is defined as[9] SWI = ((L − Lℓ)/(Lr − Lℓ)) × ((C − Cr)/(Cℓ − Cr)). Both ω′ and SWI range between 0 and 1, and have been shown to capture aspects of small-worldness. However, they adopt slightly different conceptions of ideal small-worldness. For a given set of constraints (e.g. size, density, degree distribution), there exists a network for which ω′ = 1, and thus ω′ aims to capture the extent to which a network with given constraints is as small-worldly as possible. In contrast, there may not exist a network for which SWI = 1, thus SWI aims to capture the extent to which a network with given constraints approaches the theoretical small-world ideal of a network where C ≈ Cℓ and L ≈ Lr.[9] Small-world properties are found in many real-world phenomena, including websites with navigation menus, food webs, electric power grids, metabolite processing networks, networks of brain neurons, voter networks, telephone call graphs, and airport networks.[10] Cultural networks[11] and word co-occurrence networks[12] have also been shown to be small-world networks.
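The small-world measure ω, which compares a network's path length to a random reference and its clustering to a lattice reference, is a one-line computation once those four quantities are in hand. The helper below is an illustrative sketch; the function name and the sample values are hypothetical.

```python
def omega(L, C, L_rand, C_latt):
    """Small-world measure: omega = L_rand/L - C/C_latt.

    omega near 0 indicates a small-world network; omega approaches +1
    for random-like graphs (short paths, low clustering) and -1 for
    lattice-like graphs (high clustering, long paths).
    """
    return L_rand / L - C / C_latt

# A lattice-like network: path length far above random, clustering near lattice.
print(omega(L=10.0, C=0.5, L_rand=2.5, C_latt=0.5))  # -0.75
# A small-world network: nearly random path length, nearly lattice clustering.
print(omega(L=2.8, C=0.45, L_rand=2.5, C_latt=0.5))
```

In practice L and C would be measured on the network under study, while L_rand and C_latt come from ensembles of degree-matched random and lattice networks.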
Networks of connected proteins have small world properties such as power-law obeying degree distributions.[13] Similarly, transcriptional networks, in which the nodes are genes, linked if one gene has an up- or down-regulatory genetic influence on the other, have small world network properties.[14] In another example, the famous theory of "six degrees of separation" between people tacitly presumes that the domain of discourse is the set of people alive at any one time. The number of degrees of separation between Albert Einstein and Alexander the Great is almost certainly greater than 30,[15] and this network does not have small-world properties. A similarly constrained network would be the "went to school with" network: if two people went to the same college ten years apart from one another, it is unlikely that they have acquaintances in common amongst the student body. Similarly, the number of relay stations through which a message must pass was not always small. In the days when the post was carried by hand or on horseback, the number of times a letter changed hands between its source and destination would have been much greater than it is today. The number of times a message changed hands in the days of the visual telegraph (circa 1800–1850) was determined by the requirement that two stations be connected by line-of-sight. Tacit assumptions, if not examined, can cause a bias in the literature on graphs in favor of finding small-world networks (an example of the file drawer effect resulting from publication bias). It is hypothesized by some researchers, such as Albert-László Barabási, that the prevalence of small world networks in biological systems may reflect an evolutionary advantage of such an architecture. One possibility is that small-world networks are more robust to perturbations than other network architectures. If this were the case, it would provide an advantage to biological systems that are subject to damage by mutation or viral infection.
In a small world network with a degree distribution following a power law, deletion of a random node rarely causes a dramatic increase in mean shortest-path length (or a dramatic decrease in the clustering coefficient). This follows from the fact that most shortest paths between nodes flow through hubs, and if a peripheral node is deleted it is unlikely to interfere with passage between other peripheral nodes. As the fraction of peripheral nodes in a small world network is much higher than the fraction of hubs, the probability of deleting an important node is very low. For example, if the small airport in Sun Valley, Idaho were shut down, it would not increase the average number of flights that other passengers traveling in the United States would have to take to arrive at their respective destinations. However, if random deletion of a node hits a hub by chance, the average path length can increase dramatically. This can be observed annually when northern hub airports, such as Chicago's O'Hare airport, are shut down because of snow; many people have to take additional flights. By contrast, in a random network, in which all nodes have roughly the same number of connections, deleting a random node is likely to increase the mean shortest-path length slightly but significantly for almost any node deleted. In this sense, random networks are vulnerable to random perturbations, whereas small-world networks are robust. However, small-world networks are vulnerable to targeted attack of hubs, whereas random networks cannot be targeted for catastrophic failure. The main mechanism to construct small-world networks is the Watts–Strogatz mechanism.
Small-world networks can also be introduced with time-delay,[16] which will not only produce fractals but also chaos[17] under the right conditions, or transition to chaos in dynamic networks.[18] Soon after the publication of the Watts–Strogatz mechanism, approaches were developed by Mashaghi and co-workers to generate network models that exhibit high degree correlations, while preserving the desired degree distribution and small-world properties. These approaches are based on an edge-dual transformation and can be used to generate analytically solvable small-world network models for research into these systems.[19] Degree–diameter graphs are constructed such that the number of neighbors each vertex in the network has is bounded, while the distance from any given vertex in the network to any other vertex (the diameter of the network) is minimized. Constructing such small-world networks is done as part of the effort to find graphs of order close to the Moore bound. Another way to construct a small world network from scratch is given in Barmpoutis et al.,[20] where a network with very small average distance and very large average clustering is constructed. A fast algorithm of constant complexity is given, along with measurements of the robustness of the resulting graphs. Depending on the application of each network, one can start with one such "ultra small-world" network, and then rewire some edges, or use several small such networks as subgraphs to a larger graph. Small-world properties can arise naturally in social networks and other real-world systems via the process of dual-phase evolution. This is particularly common where time or spatial constraints limit the addition of connections between vertices. The mechanism generally involves periodic shifts between phases, with connections being added during a "global" phase and being reinforced or removed during a "local" phase.
Small-world networks can change from the scale-free class to the broad-scale class, whose connectivity distribution has a sharp cutoff following a power-law regime, due to constraints limiting the addition of new links.[21] For strong enough constraints, scale-free networks can even become single-scale networks, whose connectivity distribution is characterized as fast decaying.[21] It was also shown analytically that scale-free networks are ultra-small, meaning that the distance scales according to L ∝ log log N.[22] The advantages of small world networking for social movement groups are their resistance to change due to the filtering apparatus of using highly connected nodes, and their better effectiveness in relaying information while keeping the number of links required to connect a network to a minimum.[23] The small world network model is directly applicable to affinity group theory represented in sociological arguments by William Finnegan. Affinity groups are social movement groups that are small and semi-independent, pledged to a larger goal or function. Though largely unaffiliated at the node level, a few members of high connectivity function as connectivity nodes, linking the different groups through networking. This small world model has proven an extremely effective protest organization tactic against police action.[24] Clay Shirky argues that the larger the social network created through small world networking, the more valuable the nodes of high connectivity within the network.[23] The same can be said for the affinity group model, where the few people within each group connected to outside groups allowed for a large amount of mobilization and adaptation. A practical example of this is small world networking through affinity groups that William Finnegan outlines in reference to the 1999 Seattle WTO protests. Many networks studied in geology and geophysics have been shown to have characteristics of small-world networks.
Networks defined in fracture systems and porous substances have demonstrated these characteristics.[25] The seismic network in the Southern California region may be a small-world network.[26] The examples above occur on very different spatial scales, demonstrating the scale invariance of the phenomenon in the earth sciences. Small-world networks have been used to estimate the usability of information stored in large databases. The measure is termed the Small World Data Transformation Measure.[27][28] The more closely the database links align to a small-world network, the more likely a user is going to be able to extract information in the future. This usability typically comes at the cost of the amount of information that can be stored in the same repository. The Freenet peer-to-peer network has been shown to form a small-world network in simulation,[29] allowing information to be stored and retrieved in a manner that scales efficiently as the network grows. Nearest neighbor search solutions like HNSW use small-world networks to efficiently find information in large item corpuses.[30][31] Both anatomical connections in the brain[32] and the synchronization networks of cortical neurons[33] exhibit small-world topology.
Structural and functional connectivity in the brain has also been found to reflect the small-world topology of short path length and high clustering.[34] The network structure has been found in the mammalian cortex across species as well as in large-scale imaging studies in humans.[35] Advances in connectomics and network neuroscience have found the small-worldness of neural networks to be associated with efficient communication.[36] In neural networks, short path length between nodes and high clustering at network hubs supports efficient communication between brain regions at the lowest energetic cost.[36] The brain is constantly processing and adapting to new information, and the small-world network model supports the intense communication demands of neural networks.[37] High clustering of nodes forms local networks which are often functionally related. Short path length between these hubs supports efficient global communication.[38] This balance enables the efficiency of the global network while simultaneously equipping the brain to handle disruptions and maintain homeostasis, due to local subsystems being isolated from the global network.[39] Loss of small-world network structure has been found to indicate changes in cognition and increased risk of psychological disorders.[9] In addition to characterizing whole-brain functional and structural connectivity, specific neural systems, such as the visual system, exhibit small-world network properties.[6] A small-world network of neurons can exhibit short-term memory. A computer model developed by Sara Solla[40][41] had two stable states, a property (called bistability) thought to be important in memory storage. An activating pulse generated self-sustaining loops of communication activity among the neurons. A second pulse ended this activity. The pulses switched the system between stable states: flow (recording a "memory"), and stasis (holding it). Small world neuronal networks have also been used as models to understand seizures.[42]
https://en.wikipedia.org/wiki/Small_world_networks
Host Controller Interface or Host controller interface may refer to:
https://en.wikipedia.org/wiki/Host_controller_interface_(disambiguation)
The k-server problem is a problem of theoretical computer science in the category of online algorithms, one of two abstract problems on metric spaces that are central to the theory of competitive analysis (the other being metrical task systems). In this problem, an online algorithm must control the movement of a set of k servers, represented as points in a metric space, and handle requests that are also in the form of points in the space. As each request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance all servers move small, relative to the total distance the servers could have moved by an optimal adversary who knows in advance the entire sequence of requests. The problem was first posed by Mark Manasse, Lyle A. McGeoch and Daniel Sleator (1988).[1] The most prominent open question concerning the k-server problem is the so-called k-server conjecture, also posed by Manasse et al. This conjecture states that there is an algorithm for solving the k-server problem in an arbitrary metric space and for any number k of servers that has competitive ratio exactly k. Manasse et al. were able to prove their conjecture when k = 2, and for more general values of k for some metric spaces restricted to have exactly k + 1 points. Chrobak and Larmore (1991) proved the conjecture for tree metrics. The special case of metrics in which all distances are equal is called the paging problem because it models the problem of page replacement algorithms in memory caches, and was also already known to have a k-competitive algorithm (Sleator and Tarjan 1985). Fiat et al. (1990) first proved that there exists an algorithm with finite competitive ratio for any constant k and any metric space, and finally Koutsoupias and Papadimitriou (1995) proved that the Work Function Algorithm (WFA) has competitive ratio 2k − 1.
However, despite the efforts of many other researchers, reducing the competitive ratio to k or providing an improved lower bound remains open as of 2014. The most commonly believed scenario is that the Work Function Algorithm is k-competitive. In this direction, in 2000 Bartal and Koutsoupias showed that this is true for some special cases (if the metric space is a line, a weighted star or any metric of k + 2 points). The k-server conjecture also has a version for randomized algorithms, which asks whether there exists a randomized algorithm with competitive ratio O(log k) in any arbitrary metric space (with at least k + 1 points).[2] In 2011, a randomized algorithm with competitive bound Õ(log²k · log³n) was found.[3][4] In 2017, a randomized algorithm with competitive bound O(log⁶k) was announced,[5] but was later retracted.[6] In 2022 it was shown that the randomized version of the conjecture is false.[2][7][8] To make the problem more concrete, imagine sending customer support technicians to customers when they have trouble with their equipment. In our example problem there are two technicians, Mary and Noah, serving three customers, in San Francisco, California; Washington, DC; and Baltimore, Maryland. As a k-server problem, the servers are the technicians, so k = 2 and this is a 2-server problem. Washington and Baltimore are 35 miles (56 km) apart, while San Francisco is 3,000 miles (4,800 km) away from both, and initially Mary and Noah are both in San Francisco. Consider an algorithm for assigning servers to requests that always assigns the closest server to the request, and suppose that each weekday morning the customer in Washington needs assistance while each weekday afternoon the customer in Baltimore needs assistance, and that the customer in San Francisco never needs assistance. Then, our algorithm will assign one of the servers (say Mary) to the Washington area, after which she will always be the closest server and always be assigned to all customer requests.
Thus, every day our algorithm incurs the cost of traveling between Washington and Baltimore and back, 70 miles (110 km). After a year of this request pattern, the algorithm will have incurred 20,500 miles (33,000 km) of travel: 3,000 to send Mary to the East Coast, and 17,500 for the trips between Washington and Baltimore. On the other hand, an optimal adversary who knows the future request schedule could have sent both Mary and Noah to Washington and Baltimore respectively, paying 6,000 miles (9,700 km) of travel once but then avoiding any future travel costs. The competitive ratio of our algorithm on this input is 20,500/6,000, or approximately 3.4, and by adjusting the parameters of this example the competitive ratio of this algorithm can be made arbitrarily large. Thus we see that always assigning the closest server can be far from optimal. On the other hand, it seems foolish for an algorithm that does not know future requests to send both of its technicians away from San Francisco, as the next request could be in that city and it would have to send someone back immediately. So it seems that it is difficult or impossible for a k-server algorithm to perform well relative to its adversary. However, for the 2-server problem, there exists an algorithm that always has a total travel distance of at most twice the adversary's distance. The k-server conjecture states that similar solutions exist for problems with any larger number of technicians.
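The greedy "closest server" strategy from the example can be simulated directly. The sketch below uses hypothetical one-dimensional coordinates chosen so that the pairwise distances match the example (35 miles between Washington and Baltimore, roughly 3,000 miles to San Francisco) and assumes 250 weekdays in a year; the exact totals therefore differ slightly from the article's rounded figures.

```python
def greedy_cost(servers, requests, dist):
    """Total distance moved by the 'always move the closest server' rule."""
    servers = list(servers)
    total = 0
    for r in requests:
        # Pick the server currently closest to the request and move it there.
        i = min(range(len(servers)), key=lambda j: dist(servers[j], r))
        total += dist(servers[i], r)
        servers[i] = r
    return total

# Points on a line (miles): SF = 0, Washington = 3000, Baltimore = 3035.
SF, WAS, BAL = 0, 3000, 3035
dist = lambda a, b: abs(a - b)

# 250 weekdays: Washington each morning, Baltimore each afternoon.
requests = [WAS, BAL] * 250
g = greedy_cost([SF, SF], requests, dist)

# Offline optimum: move one server to each East Coast city, once.
opt = dist(SF, WAS) + dist(SF, BAL)
print(g, opt, round(g / opt, 2))  # 20465 6035 3.39
```

The ratio of roughly 3.4 matches the article, and lengthening the request sequence or increasing the SF distance drives it arbitrarily high, which is exactly why greedy is not competitive.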
https://en.wikipedia.org/wiki/K-server_problem
Squared deviations from the mean (SDM) result from squaring deviations. In probability theory and statistics, the definition of variance is either the expected value of the SDM (when considering a theoretical distribution) or its average value (for actual experimental data). Computations for analysis of variance involve the partitioning of a sum of SDM. An understanding of the computations involved is greatly enhanced by a study of the statistical value E(X²). For a random variable X with mean μ and variance σ², E(X²) = σ² + μ² (this follows directly from the definition of variance, σ² = E(X²) − μ²). Therefore, E(ΣX²) = nσ² + nμ². From the above, the following can be derived: E((ΣX)²) = nσ² + n²μ². The sum of squared deviations needed to calculate sample variance (before deciding whether to divide by n or n − 1) is most easily calculated as S = ΣX² − (ΣX)²/n. From the two derived expectations above, the expected value of this sum is E(S) = nσ² + nμ² − (nσ² + n²μ²)/n, which implies E(S) = (n − 1)σ². This effectively proves the use of the divisor n − 1 in the calculation of an unbiased sample estimate of σ². In the situation where data is available for k different treatment groups having size nᵢ where i varies from 1 to k, it is assumed that the expected mean of each group is μ + Tᵢ and the variance of each treatment group is unchanged from the population variance σ². Under the null hypothesis that the treatments have no effect, each of the Tᵢ will be zero. It is now possible to calculate three sums of squares: the individual sum I = Σx², the treatment sum T = Σᵢ Gᵢ²/nᵢ (where Gᵢ is the total of group i), and the combination C = G²/n (where G is the grand total over all n observations). Under the null hypothesis that the treatments cause no differences and all the Tᵢ are zero, the expectations simplify to E(I) = nσ² + nμ², E(T) = kσ² + nμ², and E(C) = σ² + nμ². Under the null hypothesis, the difference of any pair of I, T, and C does not contain any dependency on μ, only σ²: E(I − C) = (n − 1)σ², E(T − C) = (k − 1)σ², and E(I − T) = (n − k)σ². The constants (n − 1), (k − 1), and (n − k) are normally referred to as the number of degrees of freedom. In a very simple example, 5 observations arise from two treatments. The first treatment gives three values 1, 2, and 3, and the second treatment gives two values 4, and 6.
Giving I = 1² + 2² + 3² + 4² + 6² = 66, T = (1 + 2 + 3)²/3 + (4 + 6)²/2 = 12 + 50 = 62, and C = (1 + 2 + 3 + 4 + 6)²/5 = 256/5 = 51.2. The sum of squares for treatments is therefore T − C = 10.8 with k − 1 = 1 degree of freedom, the residual sum of squares is I − T = 4.0 with n − k = 3 degrees of freedom, and the total sum of squares is I − C = 14.8 with n − 1 = 4 degrees of freedom.
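The three sums used in this partition — the individual sum I = Σx², the treatment sum T = Σ(group total)²/nᵢ, and the combination C = (grand total)²/n — can be checked with a short sketch; the function name is an illustrative choice.

```python
def anova_sums(groups):
    """Partition the sum of squared deviations for one-way ANOVA.

    groups: list of lists of observations, one list per treatment group.
    Returns (I, T, C), from which the sums of squares follow as
    total = I - C, treatment = T - C, and residual = I - T.
    """
    all_x = [x for g in groups for x in g]
    n = len(all_x)
    I = sum(x * x for x in all_x)                      # individual sum of squares
    T = sum(sum(g) ** 2 / len(g) for g in groups)      # treatment sum
    C = sum(all_x) ** 2 / n                            # combination (grand total) term
    return I, T, C

# The article's example: treatment one gives 1, 2, 3; treatment two gives 4, 6.
I, T, C = anova_sums([[1, 2, 3], [4, 6]])
print(round(I - C, 1), round(T - C, 1), round(I - T, 1))  # 14.8 10.8 4.0
```

The printed values are exactly the total, treatment, and residual sums of squares derived above, confirming the partition I − C = (T − C) + (I − T).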
https://en.wikipedia.org/wiki/Squared_deviations
The Multicore Association was founded in 2005. The Multicore Association is a member-funded, non-profit, industry consortium focused on the creation of open standard APIs, specifications, and guidelines that allow system developers and programmers to more readily adopt multicore technology into their applications. The consortium provides a neutral forum for vendors and developers who are interested in, working with, and/or proliferating multicore-related products, including processors, infrastructure, devices, software, and applications. Its members represent vendors of processors, operating systems, compilers, development tools, debuggers, ESL/EDA tools, and simulators, as well as application and system developers. In 2008, the Multicore Communications API working group released the consortium's first specification, referred to as MCAPI. MCAPI is a message-passing API that captures the basic elements of communication and synchronization that are required for closely distributed (multiple cores on a chip and/or chips on a circuit board) embedded systems. The target systems for MCAPI span multiple dimensions of heterogeneity (e.g., core heterogeneity, interconnect fabric heterogeneity, memory heterogeneity, operating system heterogeneity, software toolchain heterogeneity, and programming language heterogeneity). In 2011, the MCAPI working group released MCAPI 2.0. The enhanced version adds new features, such as domains for routing purposes. MCAPI 2.0 adds a level of hierarchy into that network of nodes through the introduction of "domains". Domains can be used in a variety of implementation-specific ways, such as for representing all the cores on a given chip or for dividing a topology into public and secure areas. MCAPI 2.0 also adds three new types of initialization parameters (node attributes, implementation-specific configurations, and implementation information such as the initial network topology or the MCAPI version being executed). The MCAPI working group is chaired by Sven Brehmer.
In 2011, the Multicore Resource Management API working group released its first specification, referred to as MRAPI. MRAPI is an industry-standard API that specifies essential application-level resource management capabilities. Multicore applications require this API to allow coordinated concurrent access to system resources in situations where: (1) there are not enough resources to dedicate to individual tasks or processors, and/or (2) the runtime system does not provide a uniformly accessible mechanism for coordinating resource sharing. This API is applicable to both SMP and AMP embedded multicore implementations (where AMP refers to systems heterogeneous in terms of both software and hardware). MRAPI (in conjunction with other Multicore Association APIs) can serve as a valuable tool for implementing applications, as well as for implementing full-featured resource managers and other types of layered services. The MRAPI WG was chaired by Jim Holt. In 2013, the Multicore Task Management API (MTAPI) working group released its first specification. MTAPI is a standard specification for an application program interface (API) that supports the coordination of tasks on embedded parallel systems with homogeneous and heterogeneous cores. Core features of MTAPI are runtime scheduling and mapping of tasks to processor cores. Due to its dynamic behavior, MTAPI is intended for optimizing throughput on multicore systems, allowing the software developer to tune the task scheduling strategy for latency and fairness. This working group was chaired by Urs Gleim of Siemens. In 2013, the Multicore Programming Practices (MPP) working group delivered a multicore software programming guide for the industry that aids in improving consistency and understanding of multicore programming issues. The MPP guide provides best practices leveraging the C and C++ languages, producing a guide of genuine value to engineers who are approaching multicore programming.
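The resource-coordination problem MRAPI targets can be illustrated with a small sketch: more tasks than instances of a scarce resource (say, two DMA channels), with access serialized by a counting semaphore. The names below are hypothetical; the real MRAPI is a C API providing primitives such as mutexes, semaphores, and shared memory:

```python
import threading

# Hypothetical sketch of coordinated concurrent access to a scarce resource.
class ScarceResource:
    def __init__(self, name: str, instances: int):
        self.name = name
        self._sem = threading.Semaphore(instances)

    def __enter__(self):
        self._sem.acquire()   # blocks while all instances are in use
        return self

    def __exit__(self, *exc):
        self._sem.release()

def run_tasks(resource: ScarceResource, n_tasks: int) -> list[int]:
    done = []
    def task(i: int):
        with resource:        # at most `instances` tasks inside at once
            done.append(i)
    threads = [threading.Thread(target=task, args=(i,)) for i in range(n_tasks)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sorted(done)

print(run_tasks(ScarceResource("dma_channel", instances=2), 4))  # [0, 1, 2, 3]
```

All four tasks complete even though only two may hold the resource concurrently, which is the coordination guarantee a resource-management layer provides.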
The MPP working group was chaired by Rob Oshana of NXP Semiconductors and David Stewart of CriticalBlue. In 2015, the Software/Hardware Interface for Multicore/Manycore (SHIM) working group delivered a specification defining an architecture description standard useful for software design. Among the architectural features that SHIM describes are the hardware topology, including processor cores, accelerators, caches, and inter-core communication channels, with selected details of each element, as well as instruction, memory, and communication performance information. This working group was chaired by Masaki Gondo of eSOL.[1] The OpenAMP Multicore Framework is an open source framework for developing application software for asymmetric multi-processing (AMP) systems,[1] similar to OpenMP for symmetric multi-processing systems.[2] There are several implementations of the OpenAMP Multicore Framework, each one intended to interoperate with all the others over the OpenAMP API. One implementation of the Multicore Framework, originally developed for the Xilinx Zynq, has been open-sourced under the OpenAMP open source project.[3][4] Mentor Embedded Multicore Framework (MEMF) is a proprietary implementation of the OpenAMP standard.[4] The OpenAMP API standard is managed under the umbrella of the Multicore Association.[4]
https://en.wikipedia.org/wiki/Multicore_Association
Pre-boot authentication (PBA) or power-on authentication (POA)[1] serves as an extension of the BIOS, UEFI or boot firmware and guarantees a secure, tamper-proof environment external to the operating system as a trusted authentication layer. PBA prevents anything, including the operating system, from being read from the hard disk until the user has presented the correct password or other credentials, including multi-factor authentication.[2] That trusted layer eliminates the possibility that one of the millions of lines of OS code can compromise the privacy of personal or company data.[2] Pre-boot authentication can be performed by an add-on of the operating system, such as the Linux initial ramdisk or Microsoft's boot software for the system partition (or boot partition), or by any of a variety of full disk encryption (FDE) products that can be installed separately from the operating system. Legacy FDE systems tended to rely upon PBA as their primary control. These systems have been replaced by systems using hardware-based two-factor mechanisms such as TPM chips or other proven cryptographic approaches. However, without some form of user authentication (as opposed to fully transparent loading of hidden keys), encryption provides little protection from advanced attackers, because such authentication-less encryption relies entirely on post-boot authentication, such as Active Directory authentication at the GINA step of Windows. Microsoft has released BitLocker countermeasures[3] defining protection schemes for Windows.
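To illustrate how a pre-boot credential can gate disk decryption, here is a deliberately simplified sketch in which the passphrase derives a key-encryption key (KEK) that unwraps the stored disk key. Real FDE products use authenticated key wrapping (e.g. AES key wrap), often with key material sealed in a TPM; the XOR "wrap" below is for illustration only:

```python
import hashlib
import secrets

# Derive a KEK from the pre-boot passphrase with a slow KDF (PBKDF2 here).
def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=32)

# Toy "wrap": XOR the disk key with the KEK. XOR is its own inverse, so the
# same function unwraps. Real systems use authenticated key wrapping.
def xor_wrap(key: bytes, kek: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(key, kek))

# Provisioning: generate a random disk key, persist only the wrapped form.
salt = secrets.token_bytes(16)
disk_key = secrets.token_bytes(32)
stored = xor_wrap(disk_key, derive_kek(b"correct passphrase", salt))

# Pre-boot: only the right passphrase recovers the disk key.
assert xor_wrap(stored, derive_kek(b"correct passphrase", salt)) == disk_key
assert xor_wrap(stored, derive_kek(b"wrong passphrase", salt)) != disk_key
```

The point of the sketch is the layering: the disk key never depends on the passphrase directly, so the passphrase can be changed by re-wrapping the same disk key without re-encrypting the disk.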
For mobile devices that can be stolen, giving attackers permanent physical access (see the paragraph "Attacker with skill and lengthy physical access"), Microsoft advises the use of pre-boot authentication and disabling standby power management. Pre-boot authentication can be performed with a TPM with a PIN protector or with a third-party FDE product. The best security is offered by offloading the cryptographic encryption keys from the protected client and supplying key material externally within the user authentication process. This method eliminates attacks on any built-in authentication method that is weaker than a brute-force attack against the symmetric AES keys used for full disk encryption. Without the cryptographic protection of a hardware-supported (TPM) secure boot environment, PBA is easily defeated by evil maid attacks. However, with modern hardware (including a TPM or cryptographic multi-factor authentication), most FDE solutions are able to ensure that removal of hardware for brute-force attacks is no longer possible. The standard complement of authentication methods exists for pre-boot authentication.
https://en.wikipedia.org/wiki/Pre-boot_authentication
Habitat conservationis a management practice that seeks toconserve, protect and restorehabitatsand prevent speciesextinction,fragmentationor reduction inrange.[1]It is a priority of many groups that cannot be easily characterized in terms of any oneideology. For much of human history,naturewas seen as aresourcethat could be controlled by the government and used for personal andeconomic gain. The idea was that plants only existed to feed animals and animals only existed to feed humans.[2]The value of land was limited only to the resources it provided such asfertile soil,timber, andminerals. Throughout the 18th and 19th centuries, social views started to change and conservation principles were first practically applied to theforestsofBritish India. The conservation ethic that began to evolve included three core principles: 1) human activities damage theenvironment, 2) there was acivic dutyto maintain the environment for future generations, and 3) scientific, empirically-based methods should be applied to ensure this duty was carried out. SirJames Ranald Martinwas prominent in promoting this ideology, publishing numerous medico-topographical reports that demonstrated the damage from large-scaledeforestationanddesiccation, and lobbying extensively for the institutionalization offorest conservationactivities in British India through the establishment ofForest Departments.[3] TheMadrasBoard of Revenue started local conservation efforts in 1842, headed byAlexander Gibson, a professionalbotanistwho systematically adopted a forest conservation program based on scientific principles. 
This was the first case of state conservation management of forests in the world.[4] Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in 1855, a model that soon spread to other colonies, as well as to the United States,[5][6][7] where Yellowstone National Park was opened in 1872 as the world's first national park.[8] Rather than focusing on the economic or material benefits of nature, humans began to appreciate the value of nature itself and the need to protect it.[9] By the mid-20th century, countries such as the United States, Canada, and Britain had enacted laws and legislation to ensure that the most fragile and beautiful environments would be protected for posterity. Today, with the help of NGOs and governments worldwide, a strong movement is mobilizing with the goal of protecting habitats and preserving biodiversity on a global scale. The commitments and actions of small volunteer associations in villages and towns, which endeavour to emulate the work of well-known conservation organisations, are paramount in ensuring that the generations that follow understand the importance of natural resource conservation. Natural habitats can provide ecosystem services to humans, which are "any positive benefit that wildlife or ecosystems provide to people."[10] The natural environment is a source of a wide range of resources that can be exploited for economic profit; for example, timber is harvested from forests and clean water is obtained from natural streams. However, land development driven by anthropogenic economic growth often causes a decline in the ecological integrity of nearby natural habitat. For instance, this was an issue in the northern Rocky Mountains of the US.[11] However, there is also economic value in conserving natural habitats.
Financial profit can be made from tourist revenue, for example in the tropics where species diversity is high, or from recreational sports that take place in natural environments, such as hiking and mountain biking. The cost of repairing damaged ecosystems is considered to be much higher than the cost of conserving natural ecosystems.[12] Measuring the worth of conserving different habitat areas is often criticized as being too utilitarian from a philosophical point of view.[13] Habitat conservation is important in maintaining biodiversity, which refers to the variability in populations, organisms, and gene pools, as well as habitats and ecosystems.[14] Biodiversity is also an essential part of global food security. There is evidence to support a trend of accelerating erosion of the genetic resources of agricultural plants and animals.[15] An increase in the genetic similarity of agricultural plants and animals means an increased risk of food loss from major epidemics. Wild relatives of agricultural plants have been found to be more resistant to disease; for example, the wild corn species teosinte is resistant to four corn diseases that affect cultivated crops.[16] A combination of seed banking and habitat conservation has been proposed to maintain plant diversity for food-security purposes.[17] It has been shown that focusing conservation efforts on ecosystems "within multiple trophic levels" can lead to a better-functioning ecosystem with more biomass.[18] Pearce and Moran outlined a method for classifying environmental uses.[19] Habitat loss and destruction can occur both naturally and through anthropogenic causes. Events leading to natural habitat loss include climate change, catastrophic events such as volcanic eruptions, and the interactions of invasive and non-invasive species. Natural climate-change events have previously been the cause of many widespread and large-scale losses of habitat.
For example, some of the mass extinction events generally referred to as the "Big Five" have coincided with large-scale climate changes, such as the Earth entering an ice age or undergoing warming events.[20] Other events among the Big Five also have their roots in natural causes, such as volcanic eruptions and meteor collisions.[21][22] The Chicxulub impact is one such example, which caused widespread losses of habitat as the Earth either received less sunlight or grew colder, causing certain fauna and flora to flourish whilst others perished. Previously warm areas in the tropics, the most sensitive habitats on Earth, grew colder, and areas such as Australia developed radically different flora and fauna from those seen today. The Big Five mass extinction events have also been linked to sea-level changes, indicating that large-scale marine species loss was strongly influenced by the loss of marine habitats, particularly shelf habitats.[23] Methane-driven oceanic eruptions have also been shown to have caused smaller mass extinction events.[24] Humans have been the cause of many species' extinctions. Because humans change and modify their environment, the habitats of other species often become altered or destroyed as a result of human actions.[25] The altering of habitats causes habitat fragmentation, reducing a species' habitat and decreasing its dispersal range. This increases species isolation, which then causes populations to decline.[25] Even before the modern industrial era, humans were having widespread and major effects on the environment. A good example of this is found in Aboriginal Australians and Australian megafauna.[26] Aboriginal hunting practices, which included burning large sections of forest at a time, eventually altered and changed Australia's vegetation so much that many herbivorous megafauna species were left with no habitat and were driven into extinction.
Once herbivorous megafauna species became extinct, carnivorous megafauna species soon followed. In the recent past, humans have been responsible for causing more extinctions within a given period of time than ever before.Deforestation,pollution, anthropogenicclimate changeand human settlements have all been driving forces in altering or destroying habitats.[27]The destruction of ecosystems such as rainforests has resulted in countless habitats being destroyed. Thesebiodiversity hotspotsare home to millions of habitat specialists, which do not exist beyond a tiny area.[28]Once their habitat is destroyed, they cease to exist. This destruction has a follow-on effect, as species which coexist or depend upon the existence of other species also become extinct, eventually resulting in the collapse of an entire ecosystem.[29][30]These time-delayed extinctions are referred to as the extinction debt, which is the result of destroying and fragmenting habitats. As a result of anthropogenic modification of the environment, the extinction rate has climbed to the point where the Earth is now within asixth mass extinctionevent, as commonly agreed by biologists.[31]This has been particularly evident, for example, in the rapid decline in the number ofamphibianspecies worldwide.[32] Adaptive management addresses the challenge of scientific uncertainty in habitat conservation plans by systematically gathering and applying reliable information to enhance conservation strategies over time. This approach allows for adjustments in management practices based on new insights, making conservation efforts more effective.[33]Determining the size, type and location of habitat to conserve is a complex area of conservation biology. 
Although difficult to measure and predict, the conservation value of a habitat is often a reflection of its quality (e.g. species abundance and diversity), the endangerment of the encompassing ecosystems, and the spatial distribution of that habitat.[34] Habitat restoration is a subset of habitat conservation, and its goals include improving the habitat and resources of anywhere from one species to several species.[35] The Society for Ecological Restoration's International Science and Policy Working Group defines restoration as "the process of assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed."[36] The scale of habitat restoration efforts can range from small to large areas of land, depending on the goal of the project.[37] Elements of habitat restoration include developing a plan, embedding goals within that plan, and monitoring and evaluating species.[38] Considerations such as the species type, environment, and context are aspects of planning a habitat restoration project.[37] Efforts to restore habitats that have been altered by anthropogenic activities have become a global endeavor and are used to counteract the effects of habitat destruction by humans.[39][40] Miller and Hobbs identify three constraints on restoration: "ecological, economic, and social" constraints.[37] Habitat restoration projects include the Marine Debris Mitigation project for Navassa Island National Wildlife Refuge in Haiti and the Lemon Bay Preserve Habitat Restoration in Florida.[41] Habitat conservation is vital for protecting species and ecological processes, and it is important to conserve and protect the space or area that a species occupies.[42] Therefore, areas classified as 'biodiversity hotspots', or those inhabited by a flagship, umbrella, or endangered species, are often the habitats given precedence over others.
Species that possess an elevated risk of extinction are given the highest priority; as a result of conserving their habitat, other species in that community are protected as well, serving as an element of gap analysis. In the United States of America, a Habitat Conservation Plan (HCP) is often developed to conserve the environment in which a specific species lives. Under the U.S. Endangered Species Act (ESA), the habitat that requires protection in an HCP is referred to as the 'critical habitat'. Multiple-species HCPs are becoming more favourable than single-species HCPs, as they can potentially protect an array of species before they warrant listing under the ESA, as well as conserve broad ecosystem components and processes. As of January 2007, 484 HCPs were permitted across the United States, 40 of which covered 10 or more species. The San Diego Multiple Species Conservation Plan (MSCP) encompasses 85 species in a total area of 26,000 km². Its aim is to protect the habitats of multiple species and overall biodiversity by minimizing development in sensitive areas. HCPs require clearly defined goals and objectives, efficient monitoring programs, and successful communication and collaboration with stakeholders and landowners in the area. Reserve design is also important and requires a high level of planning and management in order to achieve the goals of the HCP. Successful reserve design often takes the form of a hierarchical system, with the most valued habitats requiring high protection surrounded by buffer habitats that have a lower protection status. Although hierarchical reserve design, like an HCP, is most often used to protect a single species, it also maintains habitat corridors, reduces edge effects, and protects a broader suite of species.
A range of methods and models currently exists for determining how much habitat must be conserved to sustain a viable population, including Resource Selection Function and Step Selection models. Modelling tools often rely on the spatial scale of the area as an indicator of conservation value. There has been an increasing emphasis on conserving a few large areas of habitat rather than many small ones. This idea is often referred to as the "single large or several small" (SLOSS) debate, and it is a highly controversial area among conservation biologists and ecologists. The reasons behind the argument that "larger is better" include the reduction in the negative impacts of patch edge effects, the general tendency of species richness to increase with habitat area, and the ability of larger habitats to support greater populations with lower extinction probabilities. Noss & Cooperrider support the "larger is better" claim and developed a model that implies areas of habitat less than 1,000 ha are "tiny" and of low conservation value.[43] However, Shwartz suggests that although "larger is better", this does not imply that "small is bad". Shwartz argues that human-induced habitat loss leaves no alternative to conserving small areas. Furthermore, he suggests that many endangered species of high conservation value may be restricted to small isolated patches of habitat and thus would be overlooked if larger areas were always given higher priority. The shift toward conserving larger areas is somewhat justified in society by the greater value placed on larger vertebrate species, which naturally have larger habitat requirements. Since its formation in 1951, The Nature Conservancy has slowly developed into one of the world's largest conservation organizations.
Currently operating in over 30 countries across five continents, The Nature Conservancy aims to protect nature and its assets for future generations.[44] The organization purchases land or accepts land donations with the intention of conserving its natural resources. In 1955, The Nature Conservancy purchased its first 60-acre plot near the New York/Connecticut border in the United States of America. Today the Conservancy has expanded to protect over 119 million acres of land and 5,000 river miles, as well as participating in over 1,000 marine protection programs across the globe. Since its beginnings, The Nature Conservancy has understood the benefit of taking a scientific approach to habitat conservation. For the last decade the organization has been using a collaborative, scientific method known as "Conservation by Design". By collecting and analyzing scientific data, the Conservancy is able to approach the protection of various ecosystems holistically. This process determines the habitats that need protection and the specific elements that should be conserved, and it monitors progress so that more efficient practices can be developed for the future.[45] The Nature Conservancy currently has a large number of diverse projects in operation. They work with countries around the world to protect forests, river systems, oceans, deserts and grasslands. In all cases the aim is to provide a sustainable environment for both the plant and animal life forms that depend on them, as well as for all future generations to come.[46] The World Wildlife Fund (WWF) was first formed in 1961, after a group of passionate conservationists signed what is now referred to as the Morges Manifesto.[47] WWF currently operates in over 100 countries across five continents, with over 5 million supporters.
One of the first projects of WWF was assisting in the creation of the Charles Darwin Research Foundation, which aided in the protection of the diverse range of unique species on the Galápagos Islands, Ecuador. A WWF grant also helped with the formation of the College of African Wildlife Management in Tanzania, which today teaches a wide range of protected-area management skills in areas such as ecology, range management and law enforcement.[48] The WWF has since gone on to aid in the protection of land in Spain, creating the Coto Doñana National Park in order to conserve migratory birds, and in the Democratic Republic of the Congo, home to the world's largest protected wetlands. The WWF also initiated a debt-for-nature concept, which allows a country to put funds normally allocated to paying off national debt into conservation programs that protect its natural landscapes. Participating countries include Madagascar, the first to take part, which since 1989 has generated over US$50 million for preservation, as well as Bolivia, Costa Rica, Ecuador, Gabon, the Philippines and Zambia. Rare has been in operation since 1973, with current global partners in over 50 countries and offices in the United States of America, Mexico, the Philippines, China and Indonesia. Rare focuses on the human activity that threatens biodiversity and habitats, such as overfishing and unsustainable agriculture. By engaging local communities and changing behaviour, Rare has been able to launch campaigns to protect areas most in need of conservation.[49] The key aspect of Rare's methodology is its "Pride Campaigns". For example, in the Andes in South America, Rare has created incentives for communities to develop watershed protection practices.
In Southeast Asia's "coral triangle", Rare is training fishers in local communities to better manage the areas around coral reefs in order to lessen human impact.[50] Such programs last for three years, with the aim of changing community attitudes so as to conserve fragile habitats and provide ecological protection for years to come. WWF Netherlands, together with ARK Nature, Wild Wonders of Europe, and Conservation Capital, has started the Rewilding Europe project, which intends to rewild several areas in Europe.[51]
https://en.wikipedia.org/wiki/Habitat_conservation
Radioteletype(RTTY)[a]is atelecommunicationssystem consisting originally of two or moreelectromechanicalteleprintersin different locations connected byradiorather than a wired link. Radioteletype evolved from earlier landline teleprinter operations that began in the mid-1800s.[1]TheUS Navy Departmentsuccessfully tested printing telegraphy between an airplane and ground radio station in 1922. Later that year, the Radio Corporation of America successfully tested printing telegraphy via theirChatham, Massachusetts, radio station to theRMSMajestic. Commercial RTTY systems were in active service betweenSan FranciscoandHonoluluas early as April 1932 and between San Francisco andNew York Cityby 1934. TheUS militaryused radioteletype in the 1930s and expanded this usage duringWorld War II. From the 1980s, teleprinters were replaced bypersonal computers(PCs) runningsoftware to emulate teleprinters. The term radioteletype is used to describe both the original radioteletype system, sometimes described as "Baudot", as well as the entire family of systems connecting two or more teleprinters or PCs using software to emulate teleprinters, over radio, regardless of alphabet, link system or modulation. In some applications, notably military and government, radioteletype is known by the acronym RATT (Radio Automatic Teletype).[2] Landline teleprinter operations began in 1849 when a circuit was put in service betweenPhiladelphiaand New York City.[3]Émile Baudotdesigned a system using a five unit code in 1874 that is still in use today. Teleprinter system design was gradually improved until, at the beginning of World War II, it represented the principal distribution method used by the news services. Radioteletype evolved from these earlier landline teleprinter operations. 
The US Department of the Navy successfully tested printing telegraphy between an airplane and ground radio station in August 1922.[4][5][6]Later that year, the Radio Corporation of America successfully tested printing telegraphy via their Chatham, MA radio station to the RMSMajestic.[7]An early implementation of the Radioteletype was the Watsongraph,[8]named afterDetroitinventor Glenn Watson in March 1931.[9]Commercial RTTY systems were in active service between San Francisco and Honolulu as early as April 1932[10][11]and between San Francisco and New York City by 1934.[12]The US Military used radioteletype in the 1930s and expanded this usage during World War II.[13]The Navy called radioteletypeRATT(Radio Automatic Teletype) and the Army Signal Corps called radioteletypeSCRT, an abbreviation of Single-Channel Radio Teletype. The military usedfrequency shift keying(FSK) technology and this technology proved very reliable even over long distances. A radioteletype station consists of three distinct parts: the Teletype or teleprinter, themodemand theradio. The Teletype or teleprinter is anelectromechanicalorelectronicdevice. The wordTeletypewas a trademark of the Teletype Corporation, so the terms "TTY", "RTTY", "RATT" and "teleprinter" are usually used to describe a generic device without reference to a particular manufacturer. Electromechanical teleprinters are heavy, complex and noisy, and have largely been replaced with electronic units. The teleprinter includes a keyboard, which is the main means of entering text, and a printer orvisual display unit(VDU). An alternative input device is aperforated tapereader and, more recently, computerstorage media(such as floppy disks). Alternative output devices are tape perforators and computer storage media. The line output of a teleprinter can be at eitherdigital logiclevels (+5 V signifies a logical "1" ormarkand 0 V signifies a logical "0" orspace) orline levels(−80 V signifies a "1" and +80 V a "0"). 
When no traffic is passed, the line idles at the "mark" state. When a key of the teleprinter keyboard is pressed, a5-bit characteris generated. The teleprinter converts it toserial formatand transmits a sequence of astart bit(a logical 0 or space), then one after the other the 5 data bits, finishing with astop bit(a logical 1 or mark, lasting 1, 1.5 or 2 bits). When a sequence of start bit, 5 data bits and stop bit arrives at the input of the teleprinter, it is converted to a 5-bit word and passed to the printer or VDU. With electromechanical teleprinters, these functions required complicated electromechanical devices, but they are easily implemented with standard digital electronics usingshift registers. Specialintegrated circuitshave been developed for this function, for example theIntersil6402 and 6403.[14]These are stand-aloneUARTdevices, similar to computer serial port peripherals. The 5 data bits allow for only 32 different codes, which cannot accommodate the 26 letters, 10 figures, space, a fewpunctuationmarks and the requiredcontrol codes, such as carriage return, new line, bell, etc. To overcome this limitation, the teleprinter has twostates, theunshiftedorlettersstate and theshiftedornumbersorfiguresstate. The change from one state to the other takes place when the special control codesLETTERSandFIGURESare sent from the keyboard or received from the line. In thelettersstate the teleprinter prints the letters and space while in the shifted state it prints the numerals and punctuation marks. Teleprinters for languages using otheralphabetsalso use an additionalthird shiftstate, in which they print letters in the alternative alphabet. The modem is sometimes called the terminal unit and is an electronic device which is connected between the teleprinter and the radiotransceiver. 
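The start-stop character framing described above can be sketched in a few lines. Bit values here are logic levels (1 = mark, 0 = space), sent least-significant bit first, with two stop bits for illustration:

```python
# Frame: start bit (space, 0) + five data bits (LSB first) + stop bits (mark, 1).
def frame(char_code: int, stop_bits: int = 2) -> list[int]:
    data = [(char_code >> i) & 1 for i in range(5)]  # LSB first
    return [0] + data + [1] * stop_bits

# Deframe: check the start and stop bits, then reassemble the 5-bit code.
def deframe(bits: list[int]) -> int:
    if bits[0] != 0 or bits[6] != 1:
        raise ValueError("framing error: bad start/stop bit")
    return sum(bits[1 + i] << i for i in range(5))

code = 0b01010  # the ITA-2 code for the letter R
assert deframe(frame(code)) == code
print(frame(code))  # [0, 0, 1, 0, 1, 0, 1, 1]
```

This is exactly the job the UART chips mentioned above (e.g. the Intersil 6402/6403) perform in hardware: serializing on transmit and recovering the 5-bit word on receive.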
The transmitting part of the modem converts the digital signal transmitted by the teleprinter or tape reader to one or the other of a pair of audio frequency tones, traditionally 2295/2125 Hz (US) or 2125/1955 Hz (Europe). One of the tones corresponds to the mark condition and the other to the space condition. These audio tones then modulate an SSB transmitter to produce the final audio-frequency shift keying (AFSK) radio frequency signal. Some transmitters are capable of direct frequency-shift keying (FSK), as they can directly accept the digital signal and change their transmitting frequency according to the mark or space input state. In this case the transmitting part of the modem is bypassed. On reception, the FSK signal is converted to the original tones by mixing the FSK signal with a local oscillator called the BFO, or beat frequency oscillator. These tones are fed to the demodulator part of the modem, which processes them through a series of filters and detectors to recreate the original digital signal. The FSK signals are audible on a communications radio receiver equipped with a BFO, and have a distinctive "beedle-eeeedle-eedle-eee" sound, usually starting and ending on one of the two tones ("idle on mark"). The transmission speed is a characteristic of the teleprinter, while the shift (the difference between the tones representing mark and space) is a characteristic of the modem. These two parameters are therefore independent, provided the shift satisfies the minimum required for a given transmission speed. Electronic teleprinters can readily operate at a variety of speeds, but mechanical teleprinters require a change of gears in order to operate at different speeds. Today, both functions can be performed by modern computers equipped with digital signal processors or sound cards. The sound card performs the functions of the modem and the CPU performs the processing of the digital bits.
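A minimal sketch of the transmit side of the modem: each bit becomes a burst of one of two audio tones (the traditional US pair, 2125 Hz mark and 2295 Hz space), with phase kept continuous across bit boundaries to avoid clicks. The sample rate and scaling are illustrative choices:

```python
import math

SAMPLE_RATE = 8000                # samples per second (illustrative)
BAUD = 45.45                      # the classic 60 wpm RTTY speed
MARK_HZ, SPACE_HZ = 2125.0, 2295.0

def afsk(bits):
    """Generate continuous-phase AFSK samples for a sequence of logic bits."""
    samples, phase = [], 0.0
    samples_per_bit = int(SAMPLE_RATE / BAUD)
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for _ in range(samples_per_bit):
            phase += 2 * math.pi * freq / SAMPLE_RATE
            samples.append(math.sin(phase))
    return samples

audio = afsk([1, 0, 1, 1, 0])     # mark, space, mark, mark, space
print(len(audio))                  # 5 bits x 176 samples per bit = 880
```

Feeding such samples to a sound card, as described above, is essentially what sound-card RTTY software does on transmit; the receive side reverses the process with filtering and detection.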
This approach is very common in amateur radio, using specialized computer programs like fldigi, MMTTY or MixW. Before the computer mass storage era, most RTTY stations stored text on paper tape using paper tape punchers and readers. The operator would type the message on the TTY keyboard and punch the code onto the tape. The tape could then be transmitted at a steady, high rate, without typing errors. A tape could be reused, and in some cases, especially for use with ASCII on NC machines, might be made of plastic or even very thin metal material in order to be reused many times. The most common test signal is a series of "RYRYRY" characters, as these form an alternating tone pattern exercising all bits and are easily recognized. Pangrams are also transmitted on RTTY circuits as test messages, the most common one being "The quick brown fox jumps over the lazy dog", and in French circuits, "Voyez le brick géant que j'examine près du wharf". The original (or "Baudot") radioteletype system is based almost invariably on the Baudot code or ITA2 5-bit alphabet. The link is based on character asynchronous transmission with 1 start bit and 1, 1.5 or 2 stop bits. Transmitter modulation is normally FSK (F1B). Occasionally, an AFSK signal modulating an RF carrier (A2B, F2B) is used on VHF or UHF frequencies. Standard transmission speeds are 45.45, 50, 75, 100, 150 and 300 baud. Common carrier shifts are 85 Hz (used on LF and VLF frequencies), 170 Hz, 425 Hz, 450 Hz and 850 Hz, although some stations use non-standard shifts. There are variations of the standard Baudot alphabet to cover languages written in Cyrillic, Arabic, Greek etc., using special techniques.[15][16] Some combinations of speed and shift are standardized for specific services using the original radioteletype system: After World War II, amateur radio operators in the U.S.
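The reason "RYRYRY" exercises all bits is visible from the ITA2 codes themselves: R (01010) and Y (10101) are bitwise complements, so their data bits strictly alternate between space and mark. A short check (framing bits are ignored here):

```python
R, Y = 0b01010, 0b10101  # ITA2 codes for the letters R and Y

def data_bits(code):
    """The five data bits of one character, in transmission order."""
    return [(code >> i) & 1 for i in range(5)]

# "RYRY" yields a strictly alternating 0/1 pattern, which is why it is
# the traditional RTTY test signal: every bit position toggles.
pattern = []
for ch in (R, Y, R, Y):
    pattern += data_bits(ch)
```

Both codes happen to be palindromes, so the alternation holds regardless of whether bits are sent least- or most-significant first.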
started to receive obsolete but usable Teletype Model 26 equipment from commercial operators with the understanding that this equipment would not be used for or returned to commercial service. "The Amateur Radioteletype and VHF Society" was founded in 1946 in Woodside, NY. This organization soon changed its name to "The VHF Teletype Society" and started US amateur radio operations on 2 meters using audio frequency shift keying (AFSK). The first two-way amateur radio teletype contact (QSO) of record took place in May 1946 between Dave Winters, W2AUF, Brooklyn, NY, and W2BFD, John Evans Williams, Woodside, Long Island, NY.[21] On the west coast, amateur RTTY also started on 2 meters. Operation on 80 meters, 40 meters and the other High Frequency (HF) amateur radio bands was initially accomplished using make and break keying, since frequency shift keying (FSK) was not yet authorized. In early 1949, the first American transcontinental two-way RTTY contact was accomplished on 11 meters using AFSK between Tom McMullen (W1QVF) operating at W1AW and Johnny Agalsoff, W6PSW.[22] The stations effected partial contact on January 30, 1949, and repeated more successfully on January 31. On February 1, 1949, the stations exchanged solid print congratulatory message traffic and rag-chewed. Earlier, on January 23, 1949, William T. Knott, W2QGH, Larchmont, NY, had been able to make rough copy of W6PSW's test transmissions. While contacts could be accomplished, it was quickly realized that FSK was technically superior to make and break keying.
Due to the efforts of Merrill Swan, W6AEE, of "The RTTY Society of Southern California", publisher of RTTY, and Wayne Green, W2NSD, of CQ Magazine, amateur radio operators successfully petitioned the U.S. Federal Communications Commission (FCC) to amend Part 12 of the Regulations, effective February 20, 1953.[23] The amended Regulations permitted FSK in the non-voice parts of the 80, 40, and 20 meter bands and also specified the use of single channel 60 words-per-minute five-unit code corresponding to ITA2. A shift of 850 ± 50 Hz was specified. Amateur radio operators also had to identify their station callsign at the beginning and the end of each transmission and at ten-minute intervals using International Morse code. Use of this wide shift proved to be a problem for amateur radio operations. Commercial operators had already discovered that narrow shift worked best on the HF bands. After investigation and a petition to the FCC, Part 12 was amended, in March 1956, to allow amateur radio operators to use any shift that was 900 Hz or less. The FCC Notice of Proposed Rule Making (NPRM) that resulted in the authorization of FSK in the amateur high frequency (HF) bands responded to petitions by the American Radio Relay League (ARRL), the National Amateur Radio Council, and a Mr. Robert Weinstein. The NPRM specifically states this, and this information may be found in its entirety in the December 1951 issue of QST Magazine. While The New RTTY Handbook[23] gives ARRL no credit, it was published by CQ Magazine and its author was a CQ columnist (CQ was generally hostile to the ARRL at that time). The first RTTY Contest was held by the RTTY Society of Southern California from October 31 to November 1, 1953.[24] Named the RTTY Sweepstakes Contest, it saw twenty-nine participants exchange messages that contained a serial number, originating station call, check or RST report of two or three numbers, ARRL section of originator, local time (0000-2400 preferred) and date.
Example: NR 23 W0BP CK MINN 1325 FEB 15. By the late 1950s, the contest exchange was expanded to include the band used. Example: NR 23 W0BP CK MINN 1325 FEB 15 FORTY METERS. The contest was scored as follows: one point for each message sent and received entirely by RTTY and one point for each message received and acknowledged by RTTY. The final score was computed by multiplying the total number of message points by the number of ARRL sections worked. Two stations could exchange messages again on a different band for added points, but the section multiplier did not increase when the same section was reworked on a different band. Each DXCC entity was counted as an additional ARRL section for RTTY multiplier credit. A new magazine named RTTY, later renamed RTTY Journal, published the first listing of stations interested in RTTY in 1956, mostly located in the continental US.[25] Amateur radio operators used this callbook information to contact other operators both inside and outside the United States. For example, the first recorded USA to New Zealand two-way RTTY contact took place in 1956 between W0BP and ZL1WB. By the late 1950s, new organizations focused on amateur radioteletype started to appear. The "British Amateur Radio Teletype Group", BARTG, now known as the "British Amateur Radio Teledata Group",[26] was formed in June 1959. The Florida RTTY Society was formed in September 1959.[27] Amateur radio operators outside of Canada and the U.S. began to acquire surplus teleprinter equipment and receive permission to get on the air. The first recorded RTTY contact in the U.K. occurred in September 1959 between G2UK and G3CQE. A few weeks later, G3CQE had the first G/VE RTTY QSO with VE7KX.[28] This was quickly followed up by G3CQE QSOs with VK3KF and ZL3HJ.[29] Information on how to acquire surplus teleprinter equipment continued to spread and before long it was possible to work all continents on RTTY.
Amateur radio operators used various equipment designs to get on the air using RTTY in the 1950s and 1960s. Amateurs used their existing receivers for RTTY operation but needed to add a terminal unit, sometimes called a demodulator, to convert the received audio signals to DC signals for the teleprinter. Most of the terminal unit equipment used for receiving RTTY signals was home built, using designs published in amateur radio publications. These original designs can be divided into two classes of terminal units: audio-type and intermediate frequency converters. The audio-type converters proved to be more popular with amateur radio operators. The Twin City, W2JAV and W2PAT designs were examples of typical terminal units that were used into the middle 1960s. The late 1960s and early 1970s saw the emergence of terminal units designed by W6FFC, such as the TT/L, ST-3, ST-5, and ST-6. These designs were first published in RTTY Journal starting in September 1967 and ending in 1970. An adaptation of the W6FFC TT/L terminal unit was developed by Keith Petersen, W8SDZ, and it was first published in the RTTY Journal in September 1967. The drafting of the schematic in the article was done by Ralph Leland, W8DLT. Amateur radio operators needed to modify their transmitters to allow for HF RTTY operation. This was accomplished by adding a frequency shift keyer that used a diode to switch a capacitor in and out of the circuit, shifting the transmitter's frequency in synchronism with the teleprinter signal changing from mark to space to mark. A very stable transmitter was required for RTTY. The typical frequency multiplication type transmitter that was popular in the 1950s and 1960s would be relatively stable on 80 meters but become progressively less stable on 40 meters, 20 meters, and 15 meters.
By the middle 1960s, transmitter designs were updated, mixing a crystal-controlled high frequency oscillator with a variable low frequency oscillator, resulting in better frequency stability across all amateur radio HF bands. During the early days of amateur RTTY, the RTTY Worked All Continents Award was conceived by the RTTY Society of Southern California and issued by RTTY Journal.[30] The first amateur radio station to achieve this WAC RTTY Award was VE7KX.[31] The first stations recognized as having achieved single-band WAC RTTY were W1MX (3.5 MHz); DL0TD (7.0 MHz); K3SWZ (14.0 MHz); W0MT (21.0 MHz) and FG7XT (28.0 MHz).[32] The ARRL began issuing WAC RTTY certificates in 1969. By the early 1970s, amateur radio RTTY had spread around the world and it was finally possible to work more than 100 countries via RTTY. FG7XT was the first amateur radio station to claim to achieve this honor. However, Jean did not submit his QSL cards for independent review. ON4BX, in 1971, was the first amateur radio station to submit his cards to the DX editor of RTTY Journal and to achieve this honor.[33] The ARRL began issuing DXCC RTTY Awards on November 1, 1976.[34] Prior to that date, an award for working 100 countries on RTTY was only available via RTTY Journal. In the 1950s through the 1970s, "RTTY art" was a popular on-air activity. This consisted of (sometimes very elaborate and artistic) pictures sent over RTTY through the use of lengthy punched tape transmissions and then printed by the receiving station on paper. On January 7, 1972, the FCC amended Part 97 to allow faster RTTY speeds. Four standard RTTY speeds were authorized, namely, 60 words per minute (WPM) (45 baud), 67 WPM (50 baud), 75 WPM (56.25 baud), and 100 WPM (75 baud). Many amateur radio operators had equipment that was capable of being upgraded to 75 and 100 words per minute by changing teleprinter gears. While there was an initial interest in 100 WPM operation, many amateur radio operators moved back to 60 WPM.
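The WPM figures above pair with baud rates via the traditional accounting of 7.42 signal units per character (1 start bit, 5 data bits, 1.42 stop-bit units) and 6 characters per word (5 letters plus a space). A sketch of the conversion; the nominal labels (60, 67, 75, 100 WPM) are roundings of the values this formula gives:

```python
UNITS_PER_CHAR = 7.42  # 1 start + 5 data + 1.42 stop units (traditional figure)
CHARS_PER_WORD = 6     # 5 letters plus a space, the usual WPM convention

def wpm(baud):
    """Approximate words per minute for a given RTTY symbol rate."""
    return baud * 60 / (UNITS_PER_CHAR * CHARS_PER_WORD)

# wpm(45.45) is about 61, wpm(50) about 67, wpm(75) about 101 --
# close to the nominal 60/67/100 WPM labels in the regulations.
```

The constants are conventions, not physics: a different stop-bit length or word-length convention shifts the result by a few percent, which is why the nominal labels are only approximate.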
Some of the reasons for the failure of 100 WPM HF RTTY included poor operation of improperly maintained mechanical teleprinters, narrow-bandwidth terminal units, continued use of 170 Hz shift at 100 WPM, and excessive error rates due to multipath distortion and the nature of ionospheric propagation. The FCC approved the use of ASCII by amateur radio stations on March 17, 1980, with speeds up to 300 baud from 3.5 MHz to 21.25 MHz and 1200 baud between 28 MHz and 225 MHz. Speeds up to 19.2 kilobaud were authorized on amateur frequencies above 420 MHz.[35] These symbol rates were later modified:[36] The requirement for amateur radio operators in the U.S. to identify their station callsign at the beginning and the end of each digital transmission, and at ten-minute intervals using International Morse code, was finally lifted by the FCC on June 15, 1983. RTTY has a typical baud rate for amateur operation of 45.45 baud (approximately 60 words per minute). It remains popular as a "keyboard to keyboard" mode in amateur radio.[37] RTTY has declined in commercial popularity as faster, more reliable alternative data modes have become available, using satellite or other connections. For its transmission speed, RTTY has low spectral efficiency. The typical RTTY signal with 170 Hz shift at 45.45 baud requires around 250 Hz receiver bandwidth, more than double that required by PSK31. In theory, at this baud rate, the shift size can be decreased to 22.725 Hz, reducing the overall band footprint substantially. Because RTTY, using either AFSK or FSK modulation, produces a waveform with constant power, a transmitter does not need to use a linear amplifier, which is required for many digital transmission modes. A more efficient Class C amplifier may be used. RTTY, using either AFSK or FSK modulation, is moderately resistant to the vagaries of HF propagation and interference; however, modern digital modes, such as MFSK, use forward error correction to provide much better data reliability.
The primary users are those who need robust shortwave communications. Examples are: One regular service transmitting RTTY meteorological information is the German Meteorological Service (Deutscher Wetterdienst or DWD). The DWD regularly transmits two programs on various frequencies on LF and HF in standard RTTY (ITA2 alphabet). The list of callsigns, frequencies, baud rates and shifts is as follows:[19] The DWD signals can be easily received in Europe, North Africa and parts of North America.
https://en.wikipedia.org/wiki/Radioteletype
ISO 4217 is a standard published by the International Organization for Standardization (ISO) that defines alpha codes and numeric codes for the representation of currencies and provides information about the relationships between individual currencies and their minor units. This data is published in three tables:[1] The first edition of ISO 4217 was published in 1978. The tables, history and ongoing discussion are maintained by SIX Group on behalf of ISO and the Swiss Association for Standardization.[2] The ISO 4217 code list is used in banking and business globally. In many countries, the ISO 4217 alpha codes for the more common currencies are so well known publicly that exchange rates published in newspapers or posted in banks use only these to delineate the currencies, instead of translated currency names or ambiguous currency symbols. ISO 4217 alpha codes are used on airline tickets and international train tickets to remove any ambiguity about the price. In 1973, the ISO Technical Committee 68 decided to develop codes for the representation of currencies and funds for use in any application of trade, commerce or banking. At the 17th session (February 1978), the related UN/ECE Group of Experts agreed that the three-letter alphabetic codes for International Standard ISO 4217, "Codes for the representation of currencies and funds", would be suitable for use in international trade. Over time, new currencies are created and old currencies are discontinued. Such changes usually originate from the formation of new countries, treaties between countries on shared currencies or monetary unions, or redenomination from an existing currency due to excessive inflation. As a result, the list of codes must be updated from time to time.
The ISO 4217 maintenance agency is responsible for maintaining the list of codes.[3] In the case of national currencies, the first two letters of the alpha code are the two letters of the ISO 3166-1 alpha-2 country code and the third is usually the initial of the currency's main unit.[4] So Japan's currency code is JPY: "JP" for Japan and "Y" for yen. This eliminates the problem caused by the names dollar, franc, peso, and pound being used in many countries, each having significantly differing values. While in most cases the ISO code resembles an abbreviation of the currency's full English name, this is not always the case, as currencies such as the Algerian dinar, Aruban florin, Cayman dollar, renminbi, sterling, and the Swiss franc have been assigned codes which do not closely resemble abbreviations of the official currency names. In some cases, the third letter of the alpha code is not the initial letter of a currency unit name. There may be a number of reasons for this: In addition to codes for most active national currencies, ISO 4217 provides codes for "supranational" currencies, procedural purposes, and several things which are "similar to" currencies: The use of the initial letter "X" for these purposes is facilitated by the ISO 3166 rule that no official country code beginning with X will ever be assigned. The inclusion of the EU (denoting the European Union) in the ISO 3166-1 reserved codes list allows the euro to be coded as EUR rather than assigned a code beginning with X, even though it is a supranational currency. ISO 4217 also assigns a three-digit numeric code to each currency. This numeric code is usually the same as the numeric code assigned to the corresponding country by ISO 3166-1. For example, USD (United States dollar) has numeric code 840, which is also the ISO 3166-1 code for "US" (United States). The following is a list of active codes of official ISO 4217 currency names as of 1 January 2024.
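The usual construction of an alpha code can be sketched in a few lines; the helper name is hypothetical and the pattern is only the common case, not a rule the standard guarantees:

```python
# Hypothetical helper: ISO 3166-1 alpha-2 country code plus the initial
# of the currency's main unit -- the usual (but not universal) pattern.
def alpha_code(country_alpha2, unit_initial):
    return (country_alpha2 + unit_initial).upper()

jpy = alpha_code('JP', 'Y')  # Japan + yen -> 'JPY'
usd = alpha_code('US', 'D')  # United States + dollar -> 'USD'

# Exceptions exist, as noted above: sterling is GBP and the renminbi is
# CNY, so a real implementation must use the published table, not this rule.
```

Because of the exceptions, production systems should treat ISO 4217 codes as opaque identifiers looked up from the maintained list rather than derive them.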
In the standard the values are called "alphabetic code", "numeric code", "minor unit", and "entity". According to UN/CEFACT recommendation 9, paragraphs 8–9, ECE/TRADE/203, 1996:[22] A number of currencies had official ISO 4217 currency codes and currency names until their replacement by another currency. The table below shows the ISO currency codes of former currencies and their common names (which do not always match the ISO 4217 names). This table was introduced by ISO at the end of 1988.[23] The 2008 (7th) edition of ISO 4217 says the following about minor units of currency: Requirements sometimes arise for values to be expressed in terms of minor units of currency. When this occurs, it is necessary to know the decimal relationship that exists between the currency concerned and its minor unit. This information has therefore been included in this International Standard and is shown in the column headed "Minor unit" in Tables A.1 and A.2; "0" means that there is no minor unit for that currency, whereas "1", "2" and "3" signify a ratio of 10:1, 100:1 and 1000:1 respectively. The names of the minor units are not given. Examples for the ratios of 100:1 and 1000:1 include the United States dollar and the Bahraini dinar, for which the column headed "Minor unit" shows "2" and "3", respectively. As of 2021, two currencies have non-decimal ratios, the Mauritanian ouguiya and the Malagasy ariary; in both cases the ratio is 5:1. For these, the "Minor unit" column shows the number "2". Some currencies, such as the Burundian franc, do not in practice have any minor currency unit at all. These show the number "0", as with currencies whose minor units are unused due to negligible value.[citation needed] The ISO standard does not regulate either the spacing, prefixing or suffixing in usage of currency codes.
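For decimal currencies, the "Minor unit" column can be read as a power-of-ten exponent. A minimal sketch (the function name is illustrative, and real monetary code should prefer `decimal.Decimal` over floats):

```python
def to_minor_units(amount_major, exponent):
    """Convert a major-unit amount to minor units using the 'Minor unit'
    column: 0, 1, 2 or 3 means a ratio of 1:1, 10:1, 100:1 or 1000:1."""
    return round(amount_major * 10 ** exponent)

usd_cents = to_minor_units(12.34, 2)  # USD has minor unit 2: 1234 cents
bhd_fils  = to_minor_units(1.5, 3)    # BHD has minor unit 3: 1500 fils
jpy_units = to_minor_units(500, 0)    # JPY has no minor unit: 500
```

Note the two non-decimal currencies mentioned above (ouguiya and ariary, ratio 5:1) are listed with exponent "2", so a correct general converter needs special cases for them.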
The style guide of the European Union's Publications Office declares that, for texts issued by or through the Commission in English, Irish, Latvian, and Maltese, the ISO 4217 code is to be followed by a "hard space" (non-breaking space) and the amount:[47] and for texts in Bulgarian, Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Lithuanian, Polish, Portuguese, Romanian, Slovak, Slovene, Spanish, and Swedish the order is reversed; the amount is followed by a non-breaking space and the ISO 4217 code: As illustrated, the order is determined not by the currency but by the native language of the document context. The US dollar has two codes assigned: USD and USN ("US dollar next day"[definition needed]). The USS (same day) code is no longer in use, and was removed from the list of active ISO 4217 codes in March 2014. A number of active currencies do not have an ISO 4217 code, because they may be: These currencies include: See Category:Fixed exchange rate for a list of all currently pegged currencies. Despite having no presence or status in the standard, three-letter acronyms that resemble ISO 4217 coding are sometimes used locally or commercially to represent de facto currencies or currency instruments. The following non-ISO codes were used in the past. Minor units of currency (also known as currency subdivisions or currency subunits) are often used for pricing and trading stocks and other assets, such as energy,[73] but are not assigned codes by ISO 4217. Two conventions for representing minor units are in widespread use: A third convention is similar to the second one but uses an upper-case letter, e.g. ZAC[77] for the South African cent. Cryptocurrencies have not been assigned an ISO 4217 code.[78] However, some cryptocurrencies and cryptocurrency exchanges use a three-letter acronym that resembles an ISO 4217 code.
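The EU style-guide ordering rule can be expressed as a small formatter. This is a sketch; the ISO 639-1 language codes in the set correspond to the four code-first languages named above, and the function name is illustrative:

```python
NBSP = '\u00a0'  # the "hard" (non-breaking) space required by the style guide

# ISO 639-1 codes for English, Irish, Latvian and Maltese:
# in these languages the currency code precedes the amount.
CODE_FIRST = {'en', 'ga', 'lv', 'mt'}

def format_amount(amount, currency_code, lang):
    """Order the ISO 4217 code and the amount by document language."""
    if lang in CODE_FIRST:
        return f'{currency_code}{NBSP}{amount}'
    return f'{amount}{NBSP}{currency_code}'
```

As the text notes, the branch is chosen by the language of the document, not by the currency: the same euro amount renders as "EUR 30" in an English text and "30 EUR" in a French one.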
https://en.wikipedia.org/wiki/ISO_4217
In software project management, software testing, and software engineering, verification and validation is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification is: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" On the other hand, software validation is: "Was X what we should have built? Does X meet the high-level requirements?" Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference as[1] "Building the product right" checks that the specifications are correctly implemented by the system, while "building the right product" refers back to the user's needs. In some contexts, it is required to have written requirements for both, as well as formal procedures or protocols for determining compliance. Ideally, formal methods provide a mathematical guarantee that software meets its specifications. Building the product right implies the use of the Requirements Specification as input for the next phase of the development process, the design process, the output of which is the Design Specification. Then, it also implies the use of the Design Specification to feed the construction process. Every time the output of a process correctly implements its input specification, the software product is one step closer to final verification. If the output of a process is incorrect, the developers have not correctly implemented some component of that process. This kind of verification is called "artifact or specification verification". It is not possible to verify that the specifications are met simply by running the software (e.g., how can anyone know if the architecture/design/etc.
are correctly implemented by running the software?). Only by reviewing the associated artifacts can someone conclude whether or not the specifications are met. The output of each software development process stage can also be subject to verification when checked against its input specification (see the definition by CMMI below). Examples of artifact verification: Software validation checks that the software product satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements — not only as specification artifacts or the needs of those who will operate the software, but as the needs of all the stakeholders (such as users, operators, administrators, managers, investors, etc.). There are two ways to perform software validation: internal and external. During internal software validation, it is assumed that the goals of the stakeholders were correctly understood and that they were expressed in the requirement artifacts precisely and comprehensively. If the software meets the requirement specification, it has been internally validated. External validation happens when it is performed by asking the stakeholders if the software meets their needs. Different software development methodologies call for different levels of user and stakeholder involvement and feedback; so, external validation can be a discrete or a continuous event. Successful final external validation occurs when all the stakeholders accept the software product and express that it satisfies their needs. Such final external validation requires the use of an acceptance test, which is a dynamic test. However, it is also possible to perform internal static tests to find out if the software meets the requirements specification, but that falls into the scope of static verification because the software is not running.
Requirements should be validated before the software product as a whole is ready (the waterfall development process requires them to be perfectly defined before design starts, but iterative development processes do not require this and allow their continual improvement). Examples of artifact validation: According to the Capability Maturity Model (CMMI-SW v1.1),[2] validation during the software development process can be seen as a form of User Requirements Specification validation; and validation at the end of the development process is equivalent to internal and/or external software validation. Verification, from CMMI's point of view, is evidently of the artifact kind. In other words, software verification ensures that the output of each phase of the software development process effectively carries out what its corresponding input artifact specifies (requirement -> design -> software product), while software validation ensures that the software product meets the needs of all the stakeholders (therefore, the requirement specification was correctly and accurately expressed in the first place). Software verification ensures that "you built it right" and confirms that the product, as provided, fulfills the plans of the developers. Software validation ensures that "you built the right thing" and confirms that the product, as provided, fulfills the intended use and goals of the stakeholders. This article has used the strict or narrow definition of verification. From a testing perspective: Both verification and validation are related to the concepts of quality and of software quality assurance. By themselves, verification and validation do not guarantee software quality; planning, traceability, configuration management and other aspects of software engineering are required.
Within the modeling and simulation (M&S) community, the definitions of verification, validation and accreditation are similar: The definition of M&S validation focuses on the accuracy with which the M&S represents the real-world intended use(s). Determining the degree of M&S accuracy is required because all M&S are approximations of reality, and it is usually critical to determine if the degree of approximation is acceptable for the intended use(s). This stands in contrast to software validation. In mission-critical software systems, formal methods may be used to ensure the correct operation of a system. These formal methods can prove costly, however, representing as much as 80 percent of total software design cost. Independent Software Verification and Validation (ISVV) is targeted at safety-critical software systems and aims to increase the quality of software products, thereby reducing risks and costs throughout the operational life of the software. The goal of ISVV is to provide assurance that software performs to the specified level of confidence and within its designed parameters and defined requirements.[4][5] ISVV activities are performed by independent engineering teams, not involved in the software development process, to assess the processes and the resulting products. ISVV team independence is maintained at three different levels: financial, managerial and technical. ISVV goes beyond "traditional" verification and validation techniques, applied by development teams. While the latter aim to ensure that the software performs well against the nominal requirements, ISVV is focused on non-functional requirements such as robustness and reliability, and on conditions that can lead the software to fail. ISVV results and findings are fed back to the development teams for correction and improvement. ISVV derives from the application of IV&V (Independent Verification and Validation) to software.
Early ISVV application (as known today) dates back to the early 1970s, when the U.S. Army sponsored the first significant program related to IV&V for the Safeguard Anti-Ballistic Missile System.[6] Another example is NASA's IV&V Program, which was established in 1993.[7] By the end of the 1970s IV&V was rapidly becoming popular. The constant increase in complexity, size and importance of the software led to an increasing demand for IV&V applied to software. Meanwhile, IV&V (and ISVV for software systems) consolidated and is now widely used by organizations such as the DoD, FAA,[8] NASA[7] and ESA.[9] IV&V is mentioned in DO-178B and ISO/IEC 12207, and formalized in IEEE 1012. Initially, in 2004–2005, a European consortium led by the European Space Agency, and composed of DNV, Critical Software SA, Terma and CODA SciSys plc, created the first version of a guide devoted to ISVV, called the "ESA Guide for Independent Verification and Validation", with support from other organizations.[10] This guide covers the methodologies applicable to all the software engineering phases in what concerns ISVV. In 2008 the European Space Agency released a second version, having received inputs from many different European space ISVV stakeholders.[10] ISVV is usually composed of five principal phases; these phases can be executed sequentially or as results of a tailoring process. Software often must meet the compliance requirements of legally regulated industries, which are often guided by government agencies[11][12] or industrial administrative authorities. For instance, the FDA requires software versions and patches to be validated.[13]
https://en.wikipedia.org/wiki/Verification_and_validation_(software)
Network coding has been shown to optimally use bandwidth in a network, maximizing information flow, but the scheme is inherently vulnerable to pollution attacks by malicious nodes in the network. A node injecting garbage can quickly affect many receivers. The pollution of network packets spreads quickly, since the output of (even an) honest node is corrupted if at least one of the incoming packets is corrupted. An attacker can easily corrupt a packet even if it is encrypted, by either forging the signature or producing a collision under the hash function. This would give an attacker access to the packets and the ability to corrupt them. Denis Charles, Kamal Jain and Kristin Lauter designed a new homomorphic signature scheme for use with network coding to prevent pollution attacks.[1] The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. In this scheme it is computationally infeasible for a node to sign a linear combination of the packets without disclosing what linear combination was used in the generation of the packet. Furthermore, the signature scheme can be proved secure under well-known cryptographic assumptions: the hardness of the discrete logarithm problem and the computational elliptic curve Diffie–Hellman problem. Let G = (V, E) be a directed graph, where V is a set whose elements are called vertices or nodes, and E is a set of ordered pairs of vertices, called arcs, directed edges, or arrows. A source s ∈ V wants to transmit a file D to a set T ⊆ V of the vertices. One chooses a vector space W over F_p (say of dimension d), where p is a prime, and views the data to be transmitted as a collection of vectors w_1, …, w_k ∈ W.
The source then creates the augmented vectorsv1,…,vk{\displaystyle v_{1},\ldots ,v_{k}}by settingvi=(0,…,0,1,…,0,wi1,…,wid){\displaystyle v_{i}=(0,\ldots ,0,1,\ldots ,0,w_{i_{1}},\ldots ,w_{i_{d}})}wherewij{\displaystyle w_{i_{j}}}is thej{\displaystyle j}-th coordinate of the vectorwi{\displaystyle w_{i}}. There are(i−1){\displaystyle (i-1)}zeros before the first '1' appears invi{\displaystyle v_{i}}. One can assume without loss of generality that the vectorsvi{\displaystyle v_{i}}are linearly independent. We denote the linear subspace (ofFpk+d{\displaystyle \mathbb {F} _{p}^{k+d}}) spanned by these vectors byV{\displaystyle V}. Each outgoing edgee∈E{\displaystyle e\in E}computes a linear combination,y(e){\displaystyle y(e)}, of the vectors entering the vertexv=in(e){\displaystyle v=in(e)}where the edge originates, that is to sayy(e)=∑f∈E:out(f)=in(e)(me(f)y(f)){\displaystyle y(e)=\sum _{f\in E:\mathrm {out} (f)=\mathrm {in} (e)}(m_{e}(f)y(f))}whereme(f)∈Fp{\displaystyle m_{e}(f)\in \mathbb {F} _{p}}. We consider the source as havingk{\displaystyle k}input edges carrying thek{\displaystyle k}vectorswi{\displaystyle w_{i}}. By induction, one has that the vectory(e){\displaystyle y(e)}on any edge is a linear combinationy(e)=∑1≤i≤k(gi(e)vi){\displaystyle y(e)=\sum _{1\leq i\leq k}(g_{i}(e)v_{i})}and is a vector inV{\displaystyle V}. The k-dimensional vectorg(e)=(g1(e),…,gk(e)){\displaystyle g(e)=(g_{1}(e),\ldots ,g_{k}(e))}is simply the firstkcoordinates of the vectory(e){\displaystyle y(e)}. We call the matrix whose rows are the vectorsg(e1),…,g(ek){\displaystyle g(e_{1}),\ldots ,g(e_{k})}, whereei{\displaystyle e_{i}}are the incoming edges for a vertext∈T{\displaystyle t\in T}, the global encoding matrix fort{\displaystyle t}and denote it asGt{\displaystyle G_{t}}. In practice the encoding vectors are chosen at random, so the matrixGt{\displaystyle G_{t}}is invertible with high probability.
Each receiver,t∈T{\displaystyle t\in T}, getsk{\displaystyle k}vectorsy1,…,yk{\displaystyle y_{1},\ldots ,y_{k}}which are random linear combinations of thevi{\displaystyle v_{i}}’s; the firstk{\displaystyle k}coordinates of eachyi{\displaystyle y_{i}}record exactly the coefficients of that combination. Thus, any receiver, on receivingy1,…,yk{\displaystyle y_{1},\ldots ,y_{k}}, can findw1,…,wk{\displaystyle w_{1},\ldots ,w_{k}}by solvingY′=GtW{\displaystyle Y'=G_{t}W}, whereY′{\displaystyle Y'}andW{\displaystyle W}are the matrices whose rows are theyi′{\displaystyle y_{i}'}and thewi{\displaystyle w_{i}}respectively, and theyi′{\displaystyle y_{i}'}are the vectors formed by removing the firstk{\displaystyle k}coordinates of the vectoryi{\displaystyle y_{i}}. SinceGt{\displaystyle G_{t}}is invertible with high probability, the receiver can invert this linear transformation to find thevi{\displaystyle v_{i}}’s (and hence thewi{\displaystyle w_{i}}’s) with high probability. Krohn, Freedman and Mazières proposed a scheme[2]in 2004: if we have a hash functionH:V⟶G{\displaystyle H:V\longrightarrow G}that is collision resistant and homomorphic, then the server can securely distributeH(vi){\displaystyle H(v_{i})}to each receiver, and to check whether a receivedy=∑1≤i≤k(αivi){\displaystyle y=\sum _{1\leq i\leq k}(\alpha _{i}v_{i})}is valid, a node can check whetherH(y)=∏1≤i≤k(H(vi)αi){\displaystyle H(y)=\prod _{1\leq i\leq k}(H(v_{i})^{\alpha _{i}})}. The problem with this method is that the server needs to transfer secure information to each of the receivers. The hash functionH{\displaystyle H}needs to be transmitted to all the nodes in the network through a separate secure channel;H{\displaystyle H}is expensive to compute, and secure transmission ofH{\displaystyle H}is not economical either. The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. Elliptic curve cryptography over a finite field is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. LetFq{\displaystyle \mathbb {F} _{q}}be a finite field such thatq{\displaystyle q}is not a power of 2 or 3. Then an elliptic curveE{\displaystyle E}overFq{\displaystyle \mathbb {F} _{q}}is a curve given by an equation of the formy2=x3+ax+b,{\displaystyle y^{2}=x^{3}+ax+b,}wherea,b∈Fq{\displaystyle a,b\in \mathbb {F} _{q}}such that4a3+27b2≠0{\displaystyle 4a^{3}+27b^{2}\not =0}. LetK⊇Fq{\displaystyle K\supseteq \mathbb {F} _{q}}; thenE(K)={(x,y)∈K2:y2=x3+ax+b}∪{O}{\displaystyle E(K)=\{(x,y)\in K^{2}:y^{2}=x^{3}+ax+b\}\cup \{O\}}forms an abelian group withO{\displaystyle O}as identity. The group operations can be performed efficiently.
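The encoding and decoding just described can be sketched with toy parameters. The prime, the dimensions, and the helper names below are all illustrative; Gaussian elimination over F_p plays the role of the receiver's solver, and the random coefficient matrix is simply regenerated in the rare case that it is singular.

```python
import random

P = 7919                     # a small prime; all arithmetic is over F_p

def solve_mod(A, B):
    """Solve A X = B over F_p (A is k x k, B is k x d) by Gauss-Jordan
    elimination; raises ValueError if A is singular."""
    k = len(A)
    M = [ra[:] + rb[:] for ra, rb in zip(A, B)]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            raise ValueError("singular matrix")
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)          # Fermat inverse mod P
        M[col] = [x * inv % P for x in M[col]]
        for r in range(k):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % P for x, y in zip(M[r], M[col])]
    return [row[k:] for row in M]

k, d = 3, 4
w = [[random.randrange(P) for _ in range(d)] for _ in range(k)]  # file blocks
# augmented vectors v_i = (0,...,0,1,0,...,0 | w_i) in F_p^(k+d)
v = [[int(j == i) for j in range(k)] + w[i] for i in range(k)]

while True:   # random coefficients are invertible w.h.p.; retry otherwise
    C = [[random.randrange(P) for _ in range(k)] for _ in range(k)]
    # the network delivers k random linear combinations y_i of the v_i
    y = [[sum(C[i][j] * v[j][t] for j in range(k)) % P for t in range(k + d)]
         for i in range(k)]
    Gt = [row[:k] for row in y]      # global encoding matrix G_t
    Yp = [row[k:] for row in y]      # payloads, first k coordinates removed
    try:
        recovered = solve_mod(Gt, Yp)    # solve G_t W = Y'
        break
    except ValueError:
        continue

assert recovered == w    # the receiver recovers the original file blocks
```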
Weil pairing is a construction of roots of unity by means of functions on an elliptic curveE{\displaystyle E}, in such a way as to constitute a pairing (bilinear form, though with multiplicative notation) on the torsion subgroup ofE{\displaystyle E}. LetE/Fq{\displaystyle E/\mathbb {F} _{q}}be an elliptic curve and letF¯q{\displaystyle \mathbb {\bar {F}} _{q}}be an algebraic closure ofFq{\displaystyle \mathbb {F} _{q}}. Ifm{\displaystyle m}is an integer, relatively prime to the characteristic of the fieldFq{\displaystyle \mathbb {F} _{q}}, then the group ofm{\displaystyle m}-torsion points isE[m]={P∈E(F¯q):mP=O}{\displaystyle E[m]=\{P\in E(\mathbb {\bar {F}} _{q}):mP=O\}}. IfE/Fq{\displaystyle E/\mathbb {F} _{q}}is an elliptic curve andgcd(m,q)=1{\displaystyle \gcd(m,q)=1}thenE[m]≅Z/mZ×Z/mZ{\displaystyle E[m]\cong \mathbb {Z} /m\mathbb {Z} \times \mathbb {Z} /m\mathbb {Z} }. There is a mapem:E[m]×E[m]→μm(Fq){\displaystyle e_{m}:E[m]\times E[m]\rightarrow \mu _{m}(\mathbb {F} _{q})}that is bilinear, alternating, and non-degenerate. Also,em{\displaystyle e_{m}}can be computed efficiently.[3] Letp{\displaystyle p}be a prime andq{\displaystyle q}a prime power. LetV/Fp{\displaystyle V/\mathbb {F} _{p}}be a vector space of dimensionD{\displaystyle D}andE/Fq{\displaystyle E/\mathbb {F} _{q}}be an elliptic curve such thatP1,…,PD∈E[p]{\displaystyle P_{1},\ldots ,P_{D}\in E[p]}. Defineh:V⟶E[p]{\displaystyle h:V\longrightarrow E[p]}as follows:h(u1,…,uD)=∑1≤i≤D(uiPi){\displaystyle h(u_{1},\ldots ,u_{D})=\sum _{1\leq i\leq D}(u_{i}P_{i})}. The functionh{\displaystyle h}is a homomorphism fromV{\displaystyle V}toE[p]{\displaystyle E[p]}. The server choosess1,…,sD{\displaystyle s_{1},\ldots ,s_{D}}secretly inFp{\displaystyle \mathbb {F} _{p}}and publishes a pointQ{\displaystyle Q}of p-torsion such thatep(Pi,Q)≠1{\displaystyle e_{p}(P_{i},Q)\not =1}and also publishes(Pi,siQ){\displaystyle (P_{i},s_{i}Q)}for1≤i≤D{\displaystyle 1\leq i\leq D}.
The signature of the vectorv=(u1,…,uD){\displaystyle v=(u_{1},\ldots ,u_{D})}isσ(v)=∑1≤i≤D(uisiPi){\displaystyle \sigma (v)=\sum _{1\leq i\leq D}(u_{i}s_{i}P_{i})}. Note: this signature is homomorphic, since it is computed by a homomorphism. Givenv=(u1,…,uD){\displaystyle v=(u_{1},\ldots ,u_{D})}and its signatureσ{\displaystyle \sigma }, verify thatep(σ(v),Q)=∏1≤i≤D(ep(Pi,siQ)ui){\displaystyle e_{p}(\sigma (v),Q)=\prod _{1\leq i\leq D}(e_{p}(P_{i},s_{i}Q)^{u_{i}})}. The verification crucially uses the bilinearity of the Weil pairing. The server computesσ(vi){\displaystyle \sigma (v_{i})}for each1≤i≤k{\displaystyle 1\leq i\leq k}and transmits(vi,σ(vi)){\displaystyle (v_{i},\sigma (v_{i}))}. At each edgee{\displaystyle e}, while computingy(e)=∑f∈E:out(f)=in(e)(me(f)y(f)){\displaystyle y(e)=\sum _{f\in E:\mathrm {out} (f)=\mathrm {in} (e)}(m_{e}(f)y(f))}, also computeσ(y(e))=∑f∈E:out(f)=in(e)(me(f)σ(y(f))){\displaystyle \sigma (y(e))=\sum _{f\in E:\mathrm {out} (f)=\mathrm {in} (e)}(m_{e}(f)\sigma (y(f)))}on the elliptic curveE{\displaystyle E}. The signature is a point on the elliptic curve with coordinates inFq{\displaystyle \mathbb {F} _{q}}. Thus the size of the signature is2log⁡q{\displaystyle 2\log q}bits (which is some constant timeslog⁡p{\displaystyle \log p}bits, depending on the relative size ofp{\displaystyle p}andq{\displaystyle q}), and this is the transmission overhead. The computation of the signatureσ(y(e)){\displaystyle \sigma (y(e))}at each vertex requiresO(dinlog⁡plog1+ϵ⁡q){\displaystyle O(d_{in}\log p\log ^{1+\epsilon }q)}bit operations, wheredin{\displaystyle d_{in}}is the in-degree of the vertexin(e){\displaystyle in(e)}. The verification of a signature requiresO((d+k)log2+ϵ⁡q){\displaystyle O((d+k)\log ^{2+\epsilon }q)}bit operations. One way an attacker can defeat the scheme is by producing a collision under the hash function.
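Pairing-based verification needs an elliptic-curve library, but the homomorphic combination step at intermediate nodes can be illustrated on its own. In this sketch the additive group Z_ℓ stands in for the p-torsion group E[p], and all names are assumptions for illustration; the point is only that σ is linear, so a node can sign any linear combination of incoming packets from the incoming signatures alone.

```python
import random

ELL = 1000003          # prime order of the stand-in additive group Z_ell
D = 5                  # dimension of the data vectors

# stand-ins for the p-torsion points P_i and the server's secrets s_i
P = [random.randrange(1, ELL) for _ in range(D)]
s = [random.randrange(1, ELL) for _ in range(D)]

def sigma(v):
    """sigma(v) = sum_i u_i * s_i * P_i, computed in the additive group Z_ell."""
    return sum(u * si * Pi for u, si, Pi in zip(v, s, P)) % ELL

u = [random.randrange(ELL) for _ in range(D)]
v = [random.randrange(ELL) for _ in range(D)]
a, b = random.randrange(ELL), random.randrange(ELL)

# an intermediate node forms the combination a*u + b*v ...
combo = [(a * x + b * y) % ELL for x, y in zip(u, v)]
# ... and can compute its signature from the incoming signatures alone
assert sigma(combo) == (a * sigma(u) + b * sigma(v)) % ELL
```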
Hash-Collision: given points(P1,…,Pr){\displaystyle (P_{1},\ldots ,P_{r})}inE[p]{\displaystyle E[p]}, finda=(a1,…,ar)∈Fpr{\displaystyle a=(a_{1},\ldots ,a_{r})\in \mathbb {F} _{p}^{r}}andb=(b1,…,br)∈Fpr{\displaystyle b=(b_{1},\ldots ,b_{r})\in \mathbb {F} _{p}^{r}}such thata≠b{\displaystyle a\not =b}and∑1≤i≤r(aiPi)=∑1≤i≤r(biPi){\displaystyle \sum _{1\leq i\leq r}(a_{i}P_{i})=\sum _{1\leq i\leq r}(b_{i}P_{i})}. Proposition: There is a polynomial time reduction from discrete log on the cyclic group of orderp{\displaystyle p}on elliptic curves to Hash-Collision. Ifr=2{\displaystyle r=2}, then we getxP+yQ=uP+vQ{\displaystyle xP+yQ=uP+vQ}. Thus(x−u)P+(y−v)Q=0{\displaystyle (x-u)P+(y-v)Q=0}. We claim thatx≠u{\displaystyle x\not =u}andy≠v{\displaystyle y\not =v}. Suppose thatx=u{\displaystyle x=u}; then we would have(y−v)Q=0{\displaystyle (y-v)Q=0}, butQ{\displaystyle Q}is a point of orderp{\displaystyle p}(a prime), thusy−v≡0modp{\displaystyle y-v\equiv 0{\bmod {p}}}. In other wordsy=v{\displaystyle y=v}inFp{\displaystyle \mathbb {F} _{p}}. This contradicts the assumption that(x,y){\displaystyle (x,y)}and(u,v){\displaystyle (u,v)}are distinct pairs inFp2{\displaystyle \mathbb {F} _{p}^{2}}. Thus we have thatQ=−(x−u)(y−v)−1P{\displaystyle Q=-(x-u)(y-v)^{-1}P}, where the inverse is taken modulop{\displaystyle p}. If we haver>2{\displaystyle r>2}then we can do one of two things. Either we can takeP1=P{\displaystyle P_{1}=P}andP2=Q{\displaystyle P_{2}=Q}as before and setPi=O{\displaystyle P_{i}=O}fori>2{\displaystyle i>2}(in this case the proof reduces to the case whenr=2{\displaystyle r=2}), or we can takeP1=r1P{\displaystyle P_{1}=r_{1}P}andPi=riQ{\displaystyle P_{i}=r_{i}Q}fori>1{\displaystyle i>1}, where theri{\displaystyle r_{i}}are chosen at random fromFp{\displaystyle \mathbb {F} _{p}}. We get one equation in one unknown (the discrete log ofQ{\displaystyle Q}). It is quite possible that the equation we get does not involve the unknown. However, this happens with very small probability, as we argue next.
Suppose the algorithm for Hash-Collision gives us such a collision. Then, as long as∑2≤i≤rbiri≢0modp{\displaystyle \sum _{2\leq i\leq r}b_{i}r_{i}\not \equiv 0{\bmod {p}}}, we can solve for the discrete log ofQ{\displaystyle Q}. But theri{\displaystyle r_{i}}’s are unknown to the oracle for Hash-Collision, and so we can interchange the order in which this process occurs. In other words, givenbi{\displaystyle b_{i}}, for2≤i≤r{\displaystyle 2\leq i\leq r}, not all zero, what is the probability that theri{\displaystyle r_{i}}’s we chose satisfy∑2≤i≤r(biri)=0{\displaystyle \sum _{2\leq i\leq r}(b_{i}r_{i})=0}? This probability is1p{\displaystyle 1 \over p}. Thus with high probability we can solve for the discrete log ofQ{\displaystyle Q}. We have shown that producing hash collisions in this scheme is difficult. The other method by which an adversary can foil our system is by forging a signature. This signature scheme is essentially the aggregate signature version of the Boneh–Lynn–Shacham signature scheme.[4]Here it is shown that forging a signature is at least as hard as solving the elliptic curve Diffie–Hellman problem. The only known way to solve this problem on elliptic curves is via computing discrete logs. Thus forging a signature is at least as hard as solving the computational co-Diffie–Hellman problem on elliptic curves, and probably as hard as computing discrete logs.
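The r = 2 algebra of the reduction can be exercised in a toy cyclic group. Here the additive group Z_ℓ (ℓ a Mersenne prime) stands in for the order-p subgroup of the curve, and we play both the collision oracle and the reduction: given a collision xP + yQ = uP + vQ, the discrete log t of Q is recovered as −(x − u)(y − v)⁻¹ mod ℓ. All names and parameters are illustrative.

```python
import random

ELL = 2**31 - 1        # a Mersenne prime: order of the toy cyclic group Z_ell

t = random.randrange(1, ELL)       # the unknown discrete log: Q = t*P
Pt, Q = 1, t % ELL                 # P = 1 generates the additive group Z_ell

# fabricate a hash collision x*P + y*Q = u*P + v*Q (the oracle's job);
# knowing t, choose any v != y and set u so the two sums agree
x, y = random.randrange(ELL), random.randrange(ELL)
v = (y + random.randrange(1, ELL)) % ELL
u = (x + (y - v) * t) % ELL
assert (x * Pt + y * Q) % ELL == (u * Pt + v * Q) % ELL   # it really collides

# the reduction: recover t as -(x-u)*(y-v)^(-1) mod ell (Fermat inverse)
t_rec = (-(x - u) * pow(y - v, ELL - 2, ELL)) % ELL
assert t_rec == t
```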
https://en.wikipedia.org/wiki/Homomorphic_signatures_for_network_coding
Indigital logic, adon't-care term[1][2](abbreviatedDC, historically also known asredundancies,[2]irrelevancies,[2]optional entries,[3][4]invalid combinations,[5][4][6]vacuous combinations,[7][4]forbidden combinations,[8][2]unused statesorlogical remainders[9]) for a function is an input-sequence (a series of bits) for which the function output does not matter. An input that is known never to occur is acan't-happen term.[10][11][12][13]Both these types of conditions are treated the same way in logic design and may be referred to collectively asdon't-care conditionsfor brevity.[14]The designer of a logic circuit to implement the function need not care about such inputs, but can choose the circuit's output arbitrarily, usually such that the simplest, smallest, fastest or cheapest circuit results (minimization) or the power-consumption is minimized.[15][16] Don't-care terms are important to consider in minimizing logic circuit design, including graphical methods likeKarnaugh–Veitch mapsand algebraic methods such as theQuine–McCluskey algorithm. In 1958,Seymour Ginsburgproved that minimization of states of afinite-state machinewith don't-care conditions does not necessarily yield a minimization of logic elements. Direct minimization of logic elements in such circuits was computationally impractical (for large systems) with the computing power available to Ginsburg in 1958.[17] Examples of don't-care terms are the binary values 1010 through 1111 (10 through 15 in decimal) for a function that takes abinary-coded decimal(BCD) value, because a BCD value never takes on such values (so calledpseudo-tetrades); in the pictures, the circuit computing the lower left bar of a7-segment displaycan be minimized toab+acby an appropriate choice of circuit outputs fordcba= 1010…1111. 
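The 7-segment example can be checked by brute force. The bit labeling, the set of digits that light the lower-left bar, and the two-term candidate expression below are assumptions for illustration (the article's exact minimized form may differ); the point is that, thanks to the don't-care terms, the candidate only has to match the specification on the ten valid BCD inputs.

```python
# Lower-left segment of a 7-segment display lights for digits 0, 2, 6, 8.
SEGMENT_ON = {0, 2, 6, 8}

def candidate(n):
    """A two-term candidate cover: f = (NOT a AND NOT c) OR (NOT a AND b),
    with a the least-significant BCD bit (labeling assumed here)."""
    a, b, c = (n >> 0) & 1, (n >> 1) & 1, (n >> 2) & 1
    return bool((not a and not c) or (not a and b))

# Correct on every *valid* BCD input; inputs 10..15 are don't-cares,
# so whatever the candidate outputs there is acceptable.
assert all(candidate(n) == (n in SEGMENT_ON) for n in range(10))

# The don't-care freedom is actually used: on the pseudo-tetrade 1010 the
# candidate outputs 1, which would be wrong if 10 were a required-0 input.
assert candidate(0b1010) and 0b1010 not in SEGMENT_ON
```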
Write-only registers, as frequently found in older hardware, are often a consequence of don't-care optimizations in the trade-off between functionality and the number of necessary logic gates.[18] Don't-care states can also occur inencoding schemesandcommunication protocols.[nb 1] "Don't care" may also refer to an unknown value in amulti-valued logicsystem, in which case it may also be called anX valueordon't know.[19]In theVeriloghardware description languagesuch values are denoted by the letter "X". In theVHDLhardware description language such values are denoted (in the standard logic package) by the letter "X" (forced unknown) or the letter "W" (weak unknown).[20] An X value does not exist in hardware. In simulation, an X value can result from two or more sources driving a signal simultaneously, or the stable output of aflip-flopnot having been reached. In synthesized hardware, however, the actual value of such a signal will be either 0 or 1, but will not be determinable from the circuit's inputs.[20] Further considerations are needed for logic circuits that involve somefeedback. That is, those circuits that depend on the previous output(s) of the circuit as well as its current external inputs. Such circuits can be represented by astate machine. It is sometimes possible that some states that are nominally can't-happen conditions can accidentally be generated during power-up of the circuit or else by random interference (likecosmic radiation,electrical noiseor heat). This is also calledforbidden input.[21]In some cases, there is no combination of inputs that can exit the state machine into a normal operational state. The machine remains stuck in the power-up state or can be moved only between other can't-happen states in a walled garden of states. This is also called ahardware lockuporsoft error. 
Such states, while nominally can't-happen, are not don't-care, and designers take steps either to ensure that they are really made can't-happen, or else if they do happen, that they create adon't-care alarmindicating an emergency state[21]forerror detection, or they are transitory and lead to a normal operational state.[22][23][24]
https://en.wikipedia.org/wiki/Forbidden_input
In geometry, the folium of Descartes (from Latin folium 'leaf'; named for René Descartes) is an algebraic curve defined by the implicit equationx3+y3−3axy=0.{\displaystyle x^{3}+y^{3}-3axy=0.} The curve was first proposed and studied by René Descartes in 1638.[1]Its claim to fame lies in an incident in the development of calculus. Descartes challenged Pierre de Fermat to find the tangent line to the curve at an arbitrary point, since Fermat had recently discovered a method for finding tangent lines. Fermat solved the problem easily, something Descartes was unable to do.[2]Since the invention of calculus, the slope of the tangent line can be found easily using implicit differentiation.[3] The folium of Descartes can be expressed in polar coordinates asr=3asin⁡θcos⁡θsin3⁡θ+cos3⁡θ,{\displaystyle r={\frac {3a\sin \theta \cos \theta }{\sin ^{3}\theta +\cos ^{3}\theta }},}which is equivalent to[4] r=3asec⁡θtan⁡θ1+tan3⁡θ.{\displaystyle r={\frac {3a\sec \theta \tan \theta }{1+\tan ^{3}\theta }}.} Another technique is to writey=px{\displaystyle y=px}and solve forx{\displaystyle x}andy{\displaystyle y}in terms ofp{\displaystyle p}. This yields the rational parametric equations[5]x=3ap1+p3,y=3ap21+p3.{\displaystyle x={\frac {3ap}{1+p^{3}}},\quad y={\frac {3ap^{2}}{1+p^{3}}}.} Sincep=y/x{\displaystyle p=y/x}is the slope of the line joining a point of the curve to the origin, the parameter describes the position on the curve: the loop corresponds top≥0{\displaystyle p\geq 0}, while the two wings correspond to−1<p<0{\displaystyle -1<p<0}andp<−1{\displaystyle p<-1}. Another way of plotting the function can be derived from symmetry overy=x{\displaystyle y=x}. The symmetry can be seen directly from its equation (x and y can be interchanged). By applying a clockwise rotation of 45°, for example, one can plot the function symmetric over the rotatedx{\displaystyle x}-axis. This operation is equivalent to the substitutionx=u+v2,y=u−v2{\displaystyle x={{u+v} \over {\sqrt {2}}},\,y={{u-v} \over {\sqrt {2}}}}and yieldsv=±u3a2−2u6u+3a2.{\displaystyle v=\pm u{\sqrt {\frac {3a{\sqrt {2}}-2u}{6u+3a{\sqrt {2}}}}}.}Plotting in the Cartesian system of(u,v){\displaystyle (u,v)}gives the folium rotated by 45° and therefore symmetric about theu{\displaystyle u}-axis.
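Writing y = px and solving gives x = 3ap/(1 + p³) and y = 3ap²/(1 + p³); a quick exact-arithmetic check that this parametrization lies on the curve:

```python
# Check that x = 3ap/(1+p^3), y = 3ap^2/(1+p^3) satisfies
# x^3 + y^3 - 3axy = 0, using exact rational arithmetic.
from fractions import Fraction

a = Fraction(2)
for p in (Fraction(1, 3), Fraction(5, 7), Fraction(-3)):   # avoid p = -1
    x = 3 * a * p / (1 + p**3)
    y = 3 * a * p**2 / (1 + p**3)
    assert x**3 + y**3 - 3 * a * x * y == 0
    assert y == p * x          # the parameter is the slope y/x
```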
It forms a loop in the first quadrant with a double point at the origin and asymptotex+y+a=0.{\displaystyle x+y+a=0\,.}It is symmetrical about the liney=x{\displaystyle y=x}. The curve and the liney=x{\displaystyle y=x}intersect at the origin and at the point(3a/2,3a/2){\displaystyle (3a/2,3a/2)}. Implicit differentiation gives the formula for the slope of the tangent line to this curve to be[3]dydx=ay−x2y2−ax.{\displaystyle {\frac {dy}{dx}}={\frac {ay-x^{2}}{y^{2}-ax}}.}Using either one of the polar representations above, the area of the interior of the loop is found to be3a2/2{\displaystyle 3a^{2}/2}. Moreover, the area between the "wings" of the curve and its slanted asymptote is also3a2/2{\displaystyle 3a^{2}/2}.[1] The folium of Descartes is related to the trisectrix of Maclaurin by affine transformation. To see this, start with the equationx3+y3=3axy,{\displaystyle x^{3}+y^{3}=3axy\,,}and change variables to find the equation in a coordinate system rotated 45 degrees. This amounts to settingx=X+Y2,y=X−Y2.{\displaystyle x={{X+Y} \over {\sqrt {2}}},y={{X-Y} \over {\sqrt {2}}}.}In theX,Y{\displaystyle X,Y}plane the equation is2X(X2+3Y2)=32a(X2−Y2).{\displaystyle 2X(X^{2}+3Y^{2})=3{\sqrt {2}}a(X^{2}-Y^{2}).} If we stretch the curve in theY{\displaystyle Y}direction by a factor of3{\displaystyle {\sqrt {3}}}this becomes2X(X2+Y2)=a2(3X2−Y2),{\displaystyle 2X(X^{2}+Y^{2})=a{\sqrt {2}}(3X^{2}-Y^{2}),}which is the equation of the trisectrix of Maclaurin.
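The loop area 3a²/2 stated above can be checked numerically from the polar form, integrating ½r(θ)² over 0 ≤ θ ≤ π/2 (a simple trapezoid rule with a = 1; the step count is arbitrary):

```python
import math

a = 1.0

def r(t):
    """Polar form of the folium: r = 3a sin(t) cos(t) / (sin^3 t + cos^3 t)."""
    return 3 * a * math.sin(t) * math.cos(t) / (math.sin(t)**3 + math.cos(t)**3)

# area of the loop = (1/2) * integral of r(theta)^2 over [0, pi/2]
n = 200_000
h = (math.pi / 2) / n
area = 0.5 * sum(0.5 * (r(i * h)**2 + r((i + 1) * h)**2) * h for i in range(n))
assert abs(area - 1.5) < 1e-6      # matches 3*a^2/2 with a = 1
```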
https://en.wikipedia.org/wiki/Folium_of_Descartes
MapReduceis aprogramming modeland an associated implementation for processing and generatingbig datasets with aparallelanddistributedalgorithm on acluster.[1][2][3] A MapReduce program is composed of amapprocedure, which performs filtering and sorting (such as sorting students by first name into queues, one queue for each name), and areducemethod, which performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The "MapReduce System" (also called "infrastructure" or "framework") orchestrates the processing bymarshallingthe distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing forredundancyandfault tolerance. The model is a specialization of thesplit-apply-combinestrategy for data analysis.[4]It is inspired by themapandreducefunctions commonly used infunctional programming,[5]although their purpose in the MapReduce framework is not the same as in their original forms.[6]The key contributions of the MapReduce framework are not the actual map and reduce functions (which, for example, resemble the 1995Message Passing Interfacestandard's[7]reduce[8]andscatter[9]operations), but the scalability and fault-tolerance achieved for a variety of applications due to parallelization. As such, asingle-threadedimplementation of MapReduce is usually not faster than a traditional (non-MapReduce) implementation; any gains are usually only seen withmulti-threadedimplementations on multi-processor hardware.[10]The use of this model is beneficial only when the optimized distributed shuffle operation (which reduces network communication cost) and fault tolerance features of the MapReduce framework come into play. Optimizing the communication cost is essential to a good MapReduce algorithm.[11] MapReducelibrarieshave been written in many programming languages, with different levels of optimization. 
A popularopen-sourceimplementation that has support for distributed shuffles is part ofApache Hadoop. The name MapReduce originally referred to the proprietaryGoogletechnology, but has since become ageneric trademark. By 2014, Google was no longer using MapReduce as its primarybig dataprocessing model,[12]and development onApache Mahouthad moved on to more capable and less disk-oriented mechanisms that incorporated full map and reduce capabilities.[13] MapReduce is a framework for processingparallelizableproblems across large datasets using a large number of computers (nodes), collectively referred to as acluster(if all nodes are on the same local network and use similar hardware) or agrid(if the nodes are shared across geographically and administratively distributed systems, and use more heterogeneous hardware). Processing can occur on data stored either in afilesystem(unstructured) or in adatabase(structured). MapReduce can take advantage of the locality of data, processing it near the place it is stored in order to minimize communication overhead. A MapReduce framework (or system) is usually composed of three operations (or steps): MapReduce allows for the distributed processing of the map and reduction operations. Maps can be performed in parallel, provided that each mapping operation is independent of the others; in practice, this is limited by the number of independent data sources and/or the number of CPUs near each source. Similarly, a set of 'reducers' can perform the reduction phase, provided that all outputs of the map operation that share the same key are presented to the same reducer at the same time, or that the reduction function isassociative. 
While this process often appears inefficient compared to algorithms that are more sequential (because multiple instances of the reduction process must be run), MapReduce can be applied to significantly larger datasets than a single"commodity" servercan handle – a largeserver farmcan use MapReduce to sort apetabyteof data in only a few hours.[14]The parallelism also offers some possibility of recovering from partial failure of servers or storage during the operation: if one mapper or reducer fails, the work can be rescheduled – assuming the input data are still available. Another way to look at MapReduce is as a 5-step parallel and distributed computation: These five steps can be logically thought of as running in sequence – each step starts only after the previous step is completed – although in practice they can be interleaved as long as the final result is not affected. In many situations, the input data might have already been distributed ("sharded") among many different servers, in which case step 1 could sometimes be greatly simplified by assigning Map servers that would process the locally present input data. Similarly, step 3 could sometimes be sped up by assigning Reduce processors that are as close as possible to the Map-generated data they need to process. TheMapandReducefunctions ofMapReduceare both defined with respect to data structured in (key, value) pairs.Maptakes one pair of data with a type in onedata domain, and returns a list of pairs in a different domain: Map(k1,v1)→list(k2,v2) TheMapfunction is applied in parallel to every pair (keyed byk1) in the input dataset. This produces a list of pairs (keyed byk2) for each call. After that, the MapReduce framework collects all pairs with the same key (k2) from all lists and groups them together, creating one group for each key. 
TheReducefunction is then applied in parallel to each group, which in turn produces a collection of values in the same domain: Reduce(k2, list (v2))→list((k3, v3))[15] EachReducecall typically produces either one key value pair or an empty return, though one call is allowed to return more than one key value pair. The returns of all calls are collected as the desired result list. Thus the MapReduce framework transforms a list of (key, value) pairs into another list of (key, value) pairs.[16]This behavior is different from the typical functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combinesallthe values returned by map. It isnecessary but not sufficientto have implementations of the map and reduce abstractions in order to implement MapReduce. Distributed implementations of MapReduce require a means of connecting the processes performing the Map and Reduce phases. This may be adistributed file system. Other options are possible, such as direct streaming from mappers to reducers, or for the mapping processors to serve up their results to reducers that query them. The canonical MapReduce example counts the appearance of each word in a set of documents:[17] Here, each document is split into words, and each word is counted by themapfunction, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call toreduce. Thus, this function just needs to sum all of its input values to find the total appearances of that word. As another example, imagine that for a database of 1.1 billion people, one would like to compute the average number of social contacts a person has according to age. 
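The word-count example can be sketched end to end in a few lines, with the shuffle step simulated by an in-memory grouping. Function names here are illustrative, not any particular framework's API:

```python
from collections import defaultdict
from itertools import chain

def map_fn(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group all intermediate values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_fn(word, counts):
    """Reduce: sum the partial counts for one word."""
    return (word, sum(counts))

docs = ["the quick brown fox", "the lazy dog", "the fox"]
grouped = shuffle(chain.from_iterable(map_fn(d) for d in docs))
result = dict(reduce_fn(w, c) for w, c in grouped.items())
assert result["the"] == 3 and result["fox"] == 2 and result["dog"] == 1
```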
In SQL, such a query could be expressed as: Using MapReduce, theK1key values could be the integers 1 through 1100, each representing a batch of 1 million records, theK2key value could be a person's age in years, and this computation could be achieved using the following functions: Note that in theReducefunction,Cis the count of people having in total N contacts, so in theMapfunction it is natural to writeC=1, since every output pair refers to the contacts of one single person. The MapReduce system would line up the 1100 Map processors, and would provide each with its corresponding 1 million input records. The Map step would produce 1.1 billion(Y,(N,1))records, withYvalues ranging between, say, 8 and 103. The MapReduce system would then line up the 96 Reduce processors by performing a shuffling operation on the key/value pairs (because the results must be grouped by age), and provide each with its millions of corresponding input records. The Reduce step would result in the much reduced set of only 96 output records(Y,A), which would be put in the final result file, sorted byY. The count info in the record is important if the processing is reduced more than one time. If we did not add the count of the records, the computed average would be wrong. For example, if we reduce files#1and#2, we will have a new file with an average of 9 contacts for a 10-year-old person ((9+9+9+9+9)/5): If we then reduce it with file#3, we lose the count of how many records we have already seen, so we end up with an average of 9.5 contacts for a 10-year-old person ((9+10)/2), which is wrong. The correct answer is 9.166… = 55/6 = (9×3+9×2+10×1)/(3+2+1). Software framework architecture adheres to the open-closed principle, in which code is effectively divided into unmodifiable frozen spots and extensible hot spots. The frozen spot of the MapReduce framework is a large distributed sort.
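The role of the count can be sketched directly: each partial result carries the pair (sum of contacts, number of people), so repeated reduction still yields 55/6 rather than the naive 9.5. The record values follow the worked example above; the helper function is illustrative:

```python
def reduce_avg(records):
    """Reduce (S, C) pairs: total contacts and total people.
    Keeping the count C is what makes repeated reduction safe."""
    total = sum(s for s, c in records)
    count = sum(c for s, c in records)
    return (total, count)

# map output for age 10, as in the text: contact counts 9,9,9 / 9,9 / 10
file1 = [(9, 1), (9, 1), (9, 1)]
file2 = [(9, 1), (9, 1)]
file3 = [(10, 1)]

# reduce file1+file2 first, then fold in file3: the count is preserved
partial = reduce_avg(file1 + file2)            # (45, 5)
total, count = reduce_avg([partial] + file3)   # (55, 6)
assert (total, count) == (55, 6)
assert abs(total / count - 9.1666) < 1e-3      # not the naive (9+10)/2 = 9.5
```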
The hot spots, which the application defines, are: Theinput readerdivides the input into appropriate size 'splits' (in practice, typically, 64 MB to 128 MB) and the framework assigns one split to eachMapfunction. Theinput readerreads data from stable storage (typically, adistributed file system) and generates key/value pairs. A common example will read a directory full of text files and return each line as a record. TheMapfunction takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other. If the application is doing a word count, the map function would break the line into words and output a key/value pair for each word. Each output pair would contain the word as the key and the number of instances of that word in the line as the value. EachMapfunction output is allocated to a particularreducerby the application'spartitionfunction forshardingpurposes. Thepartitionfunction is given the key and the number of reducers and returns the index of the desiredreducer. A typical default is tohashthe key and use the hash valuemodulothe number ofreducers. It is important to pick a partition function that gives an approximately uniform distribution of data per shard forload-balancingpurposes, otherwise the MapReduce operation can be held up waiting for slow reducers to finish (i.e. the reducers assigned the larger shares of the non-uniformly partitioned data). Between the map and reduce stages, the data areshuffled(parallel-sorted / exchanged between nodes) in order to move the data from the map node that produced them to the shard in which they will be reduced. The shuffle can sometimes take longer than the computation time depending on network bandwidth, CPU speeds, data produced and time taken by map and reduce computations. The input for eachReduceis pulled from the machine where theMapran and sorted using the application'scomparisonfunction. 
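A typical default partitioner, as described above, hashes the key modulo the number of reducers. A deterministic sketch (CRC32 is an arbitrary stand-in for the framework's hash function):

```python
import zlib

def partition(key, num_reducers):
    """Hash-mod partitioner: the same key always goes to the same reducer."""
    return zlib.crc32(str(key).encode()) % num_reducers

# every key lands in range, and identical keys land on the same shard
keys = ["apple", "banana", "cherry", "apple"]
shards = [partition(k, 4) for k in keys]
assert all(0 <= s < 4 for s in shards)
assert shards[0] == shards[3]
```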
The framework calls the application'sReducefunction once for each unique key in the sorted order. TheReducecan iterate through the values that are associated with that key and produce zero or more outputs. In the word count example, theReducefunction takes the input values, sums them and generates a single output of the word and the final sum. TheOutput Writerwrites the output of theReduceto the stable storage. Properties of monoids are the basis for ensuring the validity of MapReduce operations.[18][19] In the Algebird package[20]a Scala implementation of Map/Reduce explicitly requires a monoid class type.[21] The operations of MapReduce deal with two types: the typeAof input data being mapped, and the typeBof output data being reduced. TheMapoperation takes individual values of typeAand produces, for eacha:Aa valueb:B; theReduceoperation requires a binary operation • defined on values of typeB, and consists of folding all availableb:Bto a single value. From a basic requirements point of view, any MapReduce operation must involve the ability to arbitrarily regroup data being reduced. Such a requirement amounts to two properties of the operation •: associativity, and the existence of a neutral element. The second property guarantees that, when parallelized over multiple nodes, the nodes that don't have any data to process would have no impact on the result. These two properties amount to having a monoid (B, •,e) on values of typeBwith operation • and with neutral elemente. There are no requirements on the values of typeA; an arbitrary functionA→Bcan be used for theMapoperation. This means that we have a catamorphismA*→ (B, •,e). HereA*denotes aKleene star, also known as the type of lists overA. TheShuffleoperation per se is not related to the essence of MapReduce; it's needed to distribute calculations over the cloud. It follows from the above that not every binaryReduceoperation will work in MapReduce: an operation that is not associative, or that lacks a neutral element, gives results that depend on how the data happen to be regrouped. MapReduce programs are not guaranteed to be fast.
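The monoid requirement can be demonstrated by folding each shard separately and then folding the partial results, which is exactly the regrouping MapReduce performs. In this sketch (the sharding scheme is arbitrary), associative operations with a neutral element give the same answer under any regrouping, while a non-associative operation such as subtraction does not:

```python
from functools import reduce
import random

def mapreduce(values, op, identity, num_shards=4):
    """Fold each shard separately, then fold the shard results: this models
    reducing on several nodes and combining. Valid only for monoids."""
    random.shuffle(values)           # MapReduce gives no grouping guarantee
    shards = [values[i::num_shards] for i in range(num_shards)]
    partials = [reduce(op, shard, identity) for shard in shards]
    return reduce(op, partials, identity)

data = list(range(1, 101))

# (int, +, 0) and (int, max, -inf) are monoids: regrouping is safe
assert mapreduce(data[:], lambda a, b: a + b, 0) == 5050
assert mapreduce(data[:], max, float("-inf")) == 100

# subtraction is not associative: shard-wise reduction disagrees with a
# straight left fold, so it is not a valid Reduce operation
assert mapreduce(data[:], lambda a, b: a - b, 0) != reduce(lambda a, b: a - b, data, 0)
```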
The main benefit of this programming model is to exploit the optimized shuffle operation of the platform, and only having to write theMapandReduceparts of the program. In practice, the author of a MapReduce program however has to take the shuffle step into consideration; in particular the partition function and the amount of data written by theMapfunction can have a large impact on the performance and scalability. Additional modules such as theCombinerfunction can help to reduce the amount of data written to disk, and transmitted over the network. MapReduce applications can achieve sub-linear speedups under specific circumstances.[22] When designing a MapReduce algorithm, the author needs to choose a good tradeoff[11]between the computation and the communication costs. Communication cost often dominates the computation cost,[11][22]and many MapReduce implementations are designed to write all communication to distributed storage for crash recovery. In tuning performance of MapReduce, the complexity of mapping, shuffle, sorting (grouping by the key), and reducing has to be taken into account. The amount of data produced by the mappers is a key parameter that shifts the bulk of the computation cost between mapping and reducing. Reducing includes sorting (grouping of the keys) which has nonlinear complexity. Hence, small partition sizes reduce sorting time, but there is a trade-off because having a large number of reducers may be impractical. The influence of split unit size is marginal (unless chosen particularly badly, say <1MB). The gains from some mappers reading load from local disks, on average, is minor.[23] For processes that complete quickly, and where the data fits into main memory of a single machine or a small cluster, using a MapReduce framework usually is not effective. Since these frameworks are designed to recover from the loss of whole nodes during the computation, they write interim results to distributed storage. 
This crash recovery is expensive, and only pays off when the computation involves many computers and a long runtime. A task that completes in seconds can simply be restarted in the case of an error, and the likelihood of at least one machine failing grows quickly with the cluster size. On such problems, implementations that keep all data in memory and simply restart a computation on node failures, or (when the data is small enough) non-distributed solutions, will often be faster than a MapReduce system.

MapReduce achieves reliability by parceling out a number of operations on the set of data to each node in the network. Each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than the expected interval, the master node (similar to the master server in the Google File System) records the node as dead and sends out the node's assigned work to other nodes. Individual operations use atomic operations for naming file outputs as a check to ensure that there are no parallel conflicting threads running. When files are renamed, it is possible to also copy them to another name in addition to the name of the task (allowing for side-effects).

The reduce operations operate much the same way. Because of their inferior properties with regard to parallel operations, the master node attempts to schedule reduce operations on the same node, or in the same rack, as the node holding the data being operated on. This property is desirable as it conserves bandwidth across the backbone network of the datacenter.

Implementations are not necessarily highly reliable. For example, in older versions of Hadoop the NameNode was a single point of failure for the distributed filesystem. Later versions of Hadoop provide high availability with an active/passive failover for the NameNode.
MapReduce is useful in a wide range of applications, including distributed pattern-based searching, distributed sorting, web link-graph reversal, singular value decomposition,[24] web access log statistics, inverted index construction, document clustering, machine learning,[25] and statistical machine translation. Moreover, the MapReduce model has been adapted to several computing environments such as multi-core and many-core systems,[26][27][28] desktop grids,[29] multi-cluster systems,[30] volunteer computing environments,[31] dynamic cloud environments,[32] mobile environments,[33] and high-performance computing environments.[34]

At Google, MapReduce was used to completely regenerate Google's index of the World Wide Web. It replaced the old ad hoc programs that updated the index and ran the various analyses.[35] Development at Google has since moved on to technologies such as Percolator, FlumeJava[36] and MillWheel that offer streaming operation and updates instead of batch processing, to allow integrating "live" search results without rebuilding the complete index.[37]

MapReduce's stable inputs and outputs are usually stored in a distributed file system. The transient data are usually stored on local disk and fetched remotely by the reducers.

David DeWitt and Michael Stonebraker, computer scientists specializing in parallel databases and shared-nothing architectures, have been critical of the breadth of problems that MapReduce can be used for.[38] They called its interface too low-level and questioned whether it really represents the paradigm shift its proponents have claimed it is.[39] They challenged the MapReduce proponents' claims of novelty, citing Teradata as an example of prior art that has existed for over two decades.
They also compared MapReduce programmers to CODASYL programmers, noting both are "writing in a low-level language performing low-level record manipulation."[39] MapReduce's use of input files and lack of schema support prevents the performance improvements enabled by common database system features such as B-trees and hash partitioning, though projects such as Pig (or PigLatin), Sawzall, Apache Hive,[40] HBase[41] and Bigtable[41][42] are addressing some of these problems.

Greg Jorgensen wrote an article rejecting these views.[43] Jorgensen asserts that DeWitt and Stonebraker's entire analysis is groundless, as MapReduce was never designed nor intended to be used as a database.

DeWitt and Stonebraker subsequently published a detailed benchmark study in 2009 comparing the performance of Hadoop's MapReduce and RDBMS approaches on several specific problems.[44] They concluded that relational databases offer real advantages for many kinds of data use, especially on complex processing or where the data is used across an enterprise, but that MapReduce may be easier for users to adopt for simple or one-time processing tasks.

The MapReduce programming paradigm was also described in Danny Hillis's 1985 thesis[45] intended for use on the Connection Machine, where it was called "xapping/reduction"[46] and relied upon that machine's special hardware to accelerate both map and reduce. The dialect ultimately used for the Connection Machine, the 1986 StarLisp, had parallel *map and reduce!!,[47] which in turn was based on the 1984 Common Lisp, which had non-parallel map and reduce built in.[48] The tree-like approach that the Connection Machine's hypercube architecture uses to execute reduce in O(log n) time[49] is effectively the same as the approach referred to within the Google paper as prior work.[3]: 11 

In 2010 Google was granted what is described as a patent on MapReduce. The patent, filed in 2004, may cover use of MapReduce by open source software such as Hadoop, CouchDB, and others.
In Ars Technica, an editor acknowledged Google's role in popularizing the MapReduce concept, but questioned whether the patent was valid or novel.[50][51] In 2013, as part of its "Open Patent Non-Assertion (OPN) Pledge", Google pledged to only use the patent defensively.[52][53] The patent is expected to expire on 23 December 2026.[54]

MapReduce tasks must be written as acyclic dataflow programs, i.e. a stateless mapper followed by a stateless reducer, executed by a batch job scheduler. This paradigm makes repeated querying of datasets difficult and imposes limitations that are felt in fields such as graph processing,[55] where iterative algorithms that revisit a single working set multiple times are the norm, and also, in the presence of disk-based data with high latency, in machine learning, where multiple passes through the data are required even though the algorithms can tolerate serial access to the data on each pass.[56]
https://en.wikipedia.org/wiki/MapReduce
Non-well-founded set theories are variants of axiomatic set theory that allow sets to be elements of themselves and otherwise violate the rule of well-foundedness. In non-well-founded set theories, the foundation axiom of ZFC is replaced by axioms implying its negation.

The study of non-well-founded sets was initiated by Dmitry Mirimanoff in a series of papers between 1917 and 1920, in which he formulated the distinction between well-founded and non-well-founded sets; he did not regard well-foundedness as an axiom. Although a number of axiomatic systems of non-well-founded sets were proposed afterwards, they did not find much in the way of applications until the book Non-Well-Founded Sets by Peter Aczel introduced hyperset theory in 1988.[1][2][3]

The theory of non-well-founded sets has been applied in the logical modelling of non-terminating computational processes in computer science (process algebra and final semantics), linguistics and natural language semantics (situation theory), philosophy (work on the Liar Paradox), and, in a different setting, non-standard analysis.[4]

In 1917, Dmitry Mirimanoff introduced[5][6][7][8] the concept of well-foundedness of a set: a set is well-founded if it admits no infinite descending membership sequence x ∋ x₁ ∋ x₂ ∋ ⋯. In ZFC, there is no infinite descending ∈-sequence by the axiom of regularity. In fact, the axiom of regularity is often called the foundation axiom since it can be proved within ZFC− (that is, ZFC without the axiom of regularity) that well-foundedness implies regularity. In variants of ZFC without the axiom of regularity, the possibility of non-well-founded sets with set-like ∈-chains arises. For example, a set A such that A ∈ A is non-well-founded.

Although Mirimanoff also introduced a notion of isomorphism between possibly non-well-founded sets, he considered neither an axiom of foundation nor of anti-foundation.[7] In 1926, Paul Finsler introduced the first axiom that allowed non-well-founded sets.
After Zermelo adopted Foundation into his own system in 1930 (building on previous work of von Neumann from 1925–1929), interest in non-well-founded sets waned for decades.[9] An early non-well-founded set theory was Willard Van Orman Quine's New Foundations, although it is not merely ZF with a replacement for Foundation.

Several proofs of the independence of Foundation from the rest of ZF were published in the 1950s, particularly by Paul Bernays (1954), following an announcement of the result in an earlier paper of his from 1941, and by Ernst Specker, who gave a different proof in his Habilitationsschrift of 1951; that proof was published in 1957. Then, in 1957, Rieger's theorem was published, which gave a general method for such proofs to be carried out, rekindling some interest in non-well-founded axiomatic systems.[10] The next axiom proposal came in a 1960 congress talk of Dana Scott (never published as a paper), proposing an alternative axiom now called SAFA.[11] Another axiom, proposed in the late 1960s, was Maurice Boffa's axiom of superuniversality, described by Aczel as the high point of research of its decade.[12] Boffa's idea was to make foundation fail as badly as it can (or rather, as badly as extensionality permits): Boffa's axiom implies that every extensional set-like relation is isomorphic to the elementhood predicate on a transitive class.

A more recent approach to non-well-founded set theory, pioneered by M. Forti and F. Honsell in the 1980s, borrows from computer science the concept of a bisimulation. Bisimilar sets are considered indistinguishable and thus equal, which leads to a strengthening of the axiom of extensionality. In this context, axioms contradicting the axiom of regularity are known as anti-foundation axioms, and a set that is not necessarily well-founded is called a hyperset.
Four mutually independent anti-foundation axioms are well known, sometimes abbreviated by the first letter in the following list: AFA ("Anti-Foundation Axiom"), due to M. Forti and F. Honsell; SAFA ("Scott's AFA"), due to Dana Scott; FAFA ("Finsler's AFA"), due to Paul Finsler; and BAFA ("Boffa's AFA"), due to Maurice Boffa. They essentially correspond to four different notions of equality for non-well-founded sets. The first of these, AFA, is based on accessible pointed graphs (apg) and states that two hypersets are equal if and only if they can be pictured by the same apg. Within this framework, it can be shown that the equation x = {x} has one and only one solution, the unique Quine atom of the theory.

Each of the axioms given above extends the universe of the previous, so that V ⊆ A ⊆ S ⊆ F ⊆ B. In the Boffa universe, the distinct Quine atoms form a proper class.[13]

It is worth emphasizing that hyperset theory is an extension of classical set theory rather than a replacement: the well-founded sets within a hyperset domain conform to classical set theory. In published research, non-well-founded sets are also called hypersets, in parallel to the hyperreal numbers of nonstandard analysis.[14][15]

The hypersets were extensively used by Jon Barwise and John Etchemendy in their 1987 book The Liar, on the liar's paradox. The book's proposals contributed to the theory of truth.[14] The book is also a good introduction to the topic of non-well-founded sets.[14]
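The bisimulation idea behind AFA can be illustrated with a rough sketch (the graph encoding and helper function below are hypothetical illustrations, not from the literature): nodes of a directed graph picture hypersets, and two pictures depict the same hyperset exactly when the nodes are bisimilar.

```python
# Naive fixpoint computation of the coarsest bisimulation on a directed graph
# given as {node: set_of_children}. Start from the full relation and repeatedly
# discard pairs whose children cannot be matched up, until stable.

def bisimilar(graph):
    nodes = list(graph)
    rel = {(a, b) for a in nodes for b in nodes}
    changed = True
    while changed:
        changed = False
        for a, b in list(rel):
            # (a, b) survives only if every child of a matches some child of b
            # under rel, and vice versa.
            ok = (all(any((x, y) in rel for y in graph[b]) for x in graph[a])
                  and all(any((x, y) in rel for x in graph[a]) for y in graph[b]))
            if not ok:
                rel.discard((a, b))
                changed = True
    return rel

# Two pictures of the Quine atom x = {x}: a single self-loop ("u") and a
# two-cycle ("v" -> "w" -> "v"). Under AFA all three nodes depict the same
# hyperset, and indeed they are all bisimilar.
graph = {"u": {"u"}, "v": {"w"}, "w": {"v"}}
rel = bisimilar(graph)
```

This mirrors the strengthened extensionality described above: the self-loop and the two-cycle are different graphs but identical hypersets, because their nodes cannot be told apart by membership structure.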
https://en.wikipedia.org/wiki/Non-well-founded_set_theory
In algebra and number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is (using the notations of modular arithmetic), the factorial (n − 1)! = 1 × 2 × 3 × ⋯ × (n − 1) satisfies

(n − 1)! ≡ −1 (mod n)

exactly when n is a prime number. In other words, any integer n > 1 is a prime number if, and only if, (n − 1)! + 1 is divisible by n.[1]

The theorem was first stated by Ibn al-Haytham c. 1000 AD.[2] Edward Waring announced the theorem in 1770 without proving it, crediting his student John Wilson for the discovery.[3] Lagrange gave the first proof in 1771.[4] There is evidence that Leibniz was also aware of the result a century earlier, but never published it.[5]

For each of the values of n from 2 to 30, the following table shows the number (n − 1)! and the remainder when (n − 1)! is divided by n. (In the notation of modular arithmetic, the remainder when m is divided by n is written m mod n.) The background color is blue for prime values of n, gold for composite values.

As a biconditional (if and only if) statement, the proof has two halves: to show that equality does not hold when n is composite, and to show that it does hold when n is prime.

Suppose that n is composite. Then it is divisible by some prime number q with 2 ≤ q < n. Because q divides n, there is an integer k such that n = qk. Suppose, for the sake of contradiction, that (n − 1)! were congruent to −1 modulo n.
Then (n − 1)! would also be congruent to −1 modulo q: indeed, if (n − 1)! ≡ −1 (mod n), then (n − 1)! = nm − 1 = (qk)m − 1 = q(km) − 1 for some integer m, and consequently (n − 1)! is one less than a multiple of q. On the other hand, since 2 ≤ q ≤ n − 1, one of the factors in the expanded product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1 is q. Therefore (n − 1)! ≡ 0 (mod q). This is a contradiction; therefore it is not possible that (n − 1)! ≡ −1 (mod n) when n is composite.

In fact, more is true. With the sole exception of the case n = 4, where 3! = 6 ≡ 2 (mod 4), if n is composite then (n − 1)! is congruent to 0 modulo n. The proof can be divided into two cases. First, if n can be factored as the product of two unequal numbers, n = ab, where 2 ≤ a < b < n, then both a and b will appear as factors in the product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1, and so (n − 1)! is divisible by ab = n. If n has no such factorization, then it must be the square of some prime q larger than 2. But then 2q < q² = n, so both q and 2q will be factors of (n − 1)!, and so n divides (n − 1)! in this case as well.
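Both directions of the biconditional are easy to check numerically. The sketch below computes (n − 1)! modulo n incrementally so the intermediate values stay small; it is an illustration of the statement, not a practical primality test:

```python
# Wilson's theorem as a (slow) primality test:
# n > 1 is prime iff (n - 1)! ≡ -1 (mod n).

def is_prime_wilson(n):
    if n < 2:
        return False
    fact = 1
    for k in range(2, n):
        fact = (fact * k) % n   # build (n - 1)! mod n step by step
    return fact == n - 1        # n - 1 is -1 modulo n

primes_up_to_30 = [m for m in range(2, 31) if is_prime_wilson(m)]
```

Running this reproduces the pattern of the table described above: the remainder is n − 1 exactly at the primes 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.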
The first two proofs below use the fact that the residue classes modulo a prime number form a finite field (specifically, a prime field).[6]

The result is trivial when p = 2, so assume p is an odd prime, p ≥ 3. Since the residue classes modulo p form a field, every non-zero residue a has a unique multiplicative inverse a⁻¹. Euclid's lemma implies[a] that the only values of a for which a ≡ a⁻¹ (mod p) are a ≡ ±1 (mod p). Therefore, with the exception of ±1, the factors in the expanded form of (p − 1)! can be arranged in disjoint pairs such that the product of each pair is congruent to 1 modulo p. This proves Wilson's theorem. For example, for p = 11, one has

10! = [(1 · 10)] · [(2 · 6)(3 · 4)(5 · 9)(7 · 8)] ≡ [−1] · [1 · 1 · 1 · 1] ≡ −1 (mod 11).

Again, the result is trivial for p = 2, so suppose p is an odd prime, p ≥ 3. Consider the polynomial g(x) = (x − 1)(x − 2) ⋯ (x − (p − 1)). g has degree p − 1, leading term x^(p−1), and constant term (p − 1)!. Its p − 1 roots are 1, 2, ..., p − 1. Now consider h(x) = x^(p−1) − 1. h also has degree p − 1 and leading term x^(p−1). Modulo p, Fermat's little theorem says it also has the same p − 1 roots, 1, 2, ..., p − 1. Finally, consider f(x) = g(x) − h(x). f has degree at most p − 2 (since the leading terms cancel), and modulo p it also has the p − 1 roots 1, 2, ..., p − 1. But Lagrange's theorem says it cannot have more than p − 2 roots. Therefore, f must be identically zero (mod p), so its constant term is (p − 1)! + 1 ≡ 0 (mod p). This is Wilson's theorem.

It is possible to deduce Wilson's theorem from a particular application of the Sylow theorems. Let p be a prime.
It is immediate to deduce that the symmetric group S_p has exactly (p − 1)! elements of order p, namely the p-cycles C_p. On the other hand, each Sylow p-subgroup in S_p is a copy of C_p. Hence it follows that the number of Sylow p-subgroups is n_p = (p − 2)!. The third Sylow theorem implies n_p ≡ 1 (mod p), that is,

(p − 2)! ≡ 1 (mod p).

Multiplying both sides by (p − 1) gives

(p − 1)! ≡ p − 1 ≡ −1 (mod p),

that is, the result.

In practice, Wilson's theorem is useless as a primality test because computing (n − 1)! modulo n for large n is computationally complex.[7]

Using Wilson's theorem, for any odd prime p = 2m + 1, we can rearrange the left-hand side of 1 · 2 ⋯ (p − 1) ≡ −1 (mod p) to obtain the equality 1 · (p − 1) · 2 · (p − 2) ⋯ m · (p − m) ≡ 1 · (−1) · 2 · (−2) ⋯ m · (−m) ≡ −1 (mod p). This becomes ∏_{j=1}^{m} j² ≡ (−1)^(m+1) (mod p), or (m!)² ≡ (−1)^(m+1) (mod p). We can use this fact to prove part of a famous result: for any prime p such that p ≡ 1 (mod 4), the number −1 is a square (quadratic residue) mod p. For this, suppose p = 4k + 1 for some integer k. Then we can take m = 2k above, and we conclude that (m!)² is congruent to −1 (mod p).

Wilson's theorem has been used to construct formulas for primes, but they are too slow to have practical value. Wilson's theorem also allows one to define the p-adic gamma function.

Gauss proved[8][9] that the product of the positive integers less than m and relatively prime to m satisfies

∏_{k=1, gcd(k,m)=1}^{m} k ≡ −1 (mod m)  if m = 4, p^α, or 2p^α,
∏_{k=1, gcd(k,m)=1}^{m} k ≡ 1 (mod m)  otherwise,

where p represents an odd prime and α a positive integer.
That is, the product of the positive integers less than m and relatively prime to m is one less than a multiple of m when m is equal to 4, or a power of an odd prime, or twice a power of an odd prime; otherwise, the product is one more than a multiple of m. The values of m for which the product is −1 are precisely the ones where there is a primitive root modulo m.

Original: Inoltre egli intravide anche il teorema di Wilson, come risulta dall'enunciato seguente: "Productus continuorum usque ad numerum qui antepraecedit datum divisus per datum relinquit 1 (vel complementum ad unum?) si datus sit primitivus. Si datus sit derivativus relinquet numerum qui cum dato habeat communem mensuram unitate majorem." Egli non giunse pero a dimostrarlo.

Translation: In addition, he [Leibniz] also glimpsed Wilson's theorem, as shown in the following statement: "The product of all integers preceding the given integer, when divided by the given integer, leaves 1 (or the complement of 1?) if the given integer be prime. If the given integer be composite, it leaves a number which has a common factor with the given integer [which is] greater than one." However, he didn't succeed in proving it.

The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
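The identity (m!)² ≡ (−1)^(m+1) (mod p) derived earlier, and its consequence that −1 is a quadratic residue when p ≡ 1 (mod 4), can be checked numerically with a few lines (an illustrative script, not part of the article):

```python
from math import factorial

# For primes p = 4k + 1, take m = (p - 1) / 2. Then (m!)^2 ≡ -1 (mod p),
# so m! is an explicit square root of -1 modulo p.

def sqrt_of_minus_one(p):
    m = (p - 1) // 2
    return factorial(m) % p

checks = []
for p in [5, 13, 17, 29]:          # primes congruent to 1 mod 4
    r = sqrt_of_minus_one(p)
    checks.append((r * r) % p == p - 1)   # r^2 ≡ -1 (mod p)
```

For instance, for p = 5 this gives r = 2! = 2, and indeed 2² = 4 ≡ −1 (mod 5).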
https://en.wikipedia.org/wiki/Wilson%27s_theorem
Dynamic functional connectivity (DFC) refers to the observed phenomenon that functional connectivity changes over a short time. Dynamic functional connectivity is a recent expansion on traditional functional connectivity analysis, which typically assumes that functional networks are static in time. DFC is related to a variety of different neurological disorders, and has been suggested to be a more accurate representation of functional brain networks. The primary tool for analyzing DFC is fMRI, but DFC has also been observed with several other mediums.

DFC is a recent development within the field of functional neuroimaging whose discovery was motivated by the observation of temporal variability in the rising field of steady-state connectivity research. Functional connectivity refers to the functionally integrated relationship between spatially separated brain regions. Unlike structural connectivity, which looks for physical connections in the brain, functional connectivity is related to similar patterns of activation in different brain regions regardless of the apparent physical connectedness of the regions.[1] This type of connectivity was discovered in the mid-1990s and has been seen primarily using fMRI and positron emission tomography.[2] Functional connectivity is usually measured during resting-state fMRI and is typically analyzed in terms of correlation, coherence, and spatial grouping based on temporal similarities.[3] These methods have been used to show that functional connectivity is related to behavior in a variety of different tasks, and that it has a neural basis. These methods assume that the functional connections in the brain remain constant over the short duration of a task or period of data collection. Studies that showed brain-state-dependent changes in functional connectivity were the first indicators that temporal variation in functional connectivity may be significant.
Several studies in the mid-2000s examined the changes in FC that were related to a variety of different causes such as mental tasks,[4] sleep,[5] and learning.[6] These changes often occur within the same individual and are clearly relevant to behavior. DFC has now been investigated in a variety of different contexts with many analysis tools. It has been shown to be related to both behavior and neural activity. Some researchers believe that it may be heavily related to high-level thought or consciousness.[3]

Because DFC is such a new field, much of the research related to it is conducted to validate the relevance of these dynamic changes rather than explore their implications; however, many critical findings have been made that help the scientific community better understand the brain. Analysis of dynamic functional connectivity has shown that, far from being completely static, the functional networks of the brain fluctuate on the scale of seconds to minutes. These changes are generally seen as movements from one short-term state to another, rather than continuous shifts.[3] Many studies have shown reproducible patterns of network activity that move throughout the brain. These patterns have been seen in both animals and humans, and are present at only certain points during a scanner session.[7] In addition to showing transient brain states, DFC analysis has shown a distinct hierarchical organization of the networks of the brain. Connectivity between bilaterally symmetric regions is the most stable form of connectivity in the brain, followed by other regions with direct anatomical connections. Steady-state functional connectivity networks exist and have physiological relevance, but have less temporal stability than the anatomical networks. Finally, some functional networks are fleeting enough to be seen only with DFC analysis.
These networks also possess physiological relevance but are much less temporally stable than the other networks in the brain.[8]

Sliding window analysis is the most common method used in the analysis of functional connectivity; it was first introduced by Sakoglu and Calhoun in 2009 and applied to schizophrenia.[9][10][11][12] Sliding window analysis is performed by conducting analysis on a set number of scans in an fMRI session. The number of scans is the length of the sliding window. The defined window is then moved a certain number of scans forward in time and additional analysis is performed. The movement of the window is usually referenced in terms of the degree of overlap between adjacent windows. One of the principal benefits of sliding window analysis is that almost any steady-state analysis can also be performed using a sliding window if the window length is sufficiently large. Sliding window analysis also has the benefit of being easy to understand and in some ways easier to interpret.[3] As the most common method of analysis, sliding window analysis has been used in many different ways to investigate a variety of different characteristics and implications of DFC. In order to be accurately interpreted, data from sliding window analysis generally must be compared between two different groups. Researchers have used this type of analysis to show different DFC characteristics in diseased and healthy patients, high and low performers on cognitive tasks, and between large-scale brain states.

One of the first methods ever used to analyze DFC was pattern analysis of fMRI images to show that there are patterns of activation in spatially separated brain regions that tend to have synchronous activity. It has become clear that there is a spatial and temporal periodicity in the brain that probably reflects some of the constant processes of the brain.
Repeating patterns of network information have been suggested to account for 25–50% of the variance in fMRI BOLD data.[7][13] These patterns of activity have primarily been seen in rats as a propagating wave of synchronized activity along the cortex. These waves have also been shown to be related to underlying neural activity, and have been shown to be present in humans as well as rats.[7]

Departing from the traditional approaches, an efficient method was recently introduced to analyze rapidly changing functional activation patterns, which transforms the fMRI BOLD data into a point process.[14][15] This is achieved by selecting, for each voxel, the points of inflection of the BOLD signal (i.e., the peaks). These few points contain a great portion of the information pertaining to functional connectivity, because it has been demonstrated that, despite the tremendous reduction in data size (> 95%), they compare very well with inferences of functional connectivity[16][17] obtained with standard methods that use the full signal. The large information content of these few points is consistent with the results of Petridou et al.,[18] who demonstrated the contribution of these "spontaneous events" to the correlation strength and power spectra of the slow spontaneous fluctuations by deconvolving the task hemodynamic response function from the rest data. Subsequently, similar principles were successfully applied under the name of co-activation patterns (CAP).[19][20][21]

Time-frequency analysis has been proposed as an analysis method that is capable of overcoming many of the challenges associated with sliding windows. Unlike sliding window analysis, time-frequency analysis allows the researcher to investigate both frequency and amplitude information simultaneously. The wavelet transform has been used to conduct DFC analysis that has validated the existence of DFC by showing its significant changes in time.
This same method has recently been used to investigate some of the dynamic characteristics of accepted networks. For example, time-frequency analysis has shown that the anticorrelation between the default mode network and the task-positive network is not constant in time but rather is a temporary state.[22]

Independent component analysis has become one of the most common methods of network generation in steady-state functional connectivity. ICA divides the fMRI signal into several spatial components that have similar temporal patterns. More recently, ICA has been used to divide fMRI data into different temporal components. This has been termed temporal ICA, and it has been used to plot network behavior that accounts for 25% of the variability in the correlation of anatomical nodes in fMRI.[23]

Several researchers have argued that DFC may be a simple reflection of analysis, scanner, or physiological noise. Noise in fMRI can arise from a variety of different factors, including heart beat, changes in the blood-brain barrier, characteristics of the acquiring scanner, or unintended effects of analysis. Some researchers have proposed that the variability in functional connectivity in fMRI studies is consistent with the variability that one would expect from simply analyzing random data. This complaint that DFC may reflect only noise has recently been lessened by the observation of an electrical basis to fMRI DFC data and the behavioral relevance of DFC characteristics.[3]

In addition to complaints that DFC may be a product of scanner noise, observed DFC could be criticized based on the indirect nature of fMRI, which is used to observe it. fMRI data are collected by quickly acquiring a sequence of MRI images in time using echo planar imaging. The contrast in these images is heavily influenced by the ratio of oxygenated to deoxygenated blood. Since active neurons require more energy than resting neurons, changes in this contrast are traditionally interpreted as an indirect measure of neural activity.
Because of its indirect nature, fMRI data in DFC studies could be criticized as potentially being a reflection of non-neural information. This concern has been alleviated recently by the observed correlation between fMRI DFC and simultaneously acquired electrophysiology data.[24] Battaglia and colleagues have tried to address these controversies by linking dynamic functional connectivity to causality, or effective connectivity. The scientists claim that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity.[25]

fMRI is the primary means of investigating DFC. This presents unique challenges because fMRI has fairly low temporal resolution, typically 0.5 Hz, and is only an indirect measure of neural activity. The indirect nature of fMRI analysis suggests that validation is needed to show that findings from fMRI are actually relevant and reflective of neural activity. Correlation between DFC and electrophysiology has led some scientists to suggest that DFC could reflect the hemodynamic results of dynamic network behavior that has been seen in single-cell analysis of neuron populations. Although the hemodynamic response is too slow to reflect a one-to-one correspondence with neural network dynamics, it is plausible that DFC is a reflection of the power of some frequencies of electrophysiology data.[3]

Electroencephalography (EEG) has also been used in humans to both validate and interpret observations made in DFC. EEG has poor spatial resolution because it is only able to acquire data on the surface of the scalp, but it is reflective of broad electrical activity from many neurons. EEG has been used simultaneously with fMRI to account for some of the inter-scan variance in FC. EEG has also been used to show that changes in FC are related to broad brain states observed in EEG.[26][27][28][29]

Magnetoencephalography (MEG) can be used to measure the magnetic fields produced by electrical activity in the brain.
MEG has high temporal resolution and generally higher spatial resolution than EEG. Resting-state studies with MEG are still limited by spatial resolution, but the modality has been used to show that resting-state networks move through periods of low and high levels of correlation. This observation is consistent with the results seen in other DFC studies such as DFC activation pattern analysis.[3]

Single-unit recordings were used in order to explore the extent, strength, and plasticity of functional connectivity between individual cortical neurons in cats and monkeys. Such studies revealed correlated activity at various time scales. At the fastest time scale, that of 1–20 ms, correlation coefficients were typically < 0.05.[30][31] These functional connections were found to be plastic: changing the correlation for a conditioning period of Ts (typically a few minutes), by means of spike-triggered sensory stimulations, induced short-term (typically < Ts) lasting changes in the connections. The pre-post conditioning strengthening of a functional connection was typically equal to the square root of its pre-during conditioning strengthening.[32]

Dynamic functional connectivity studied using fMRI may be related to a phenomenon previously discovered in macaque prefrontal cortex termed Dynamic Network Connectivity, whereby arousal mechanisms rapidly alter the strength of glutamate synaptic connections onto dendritic spines by opening or closing potassium channels on spines, thus weakening or strengthening connectivity, respectively.[33][34] For example, dopamine D1 receptor and/or noradrenergic beta-1 receptor stimulation on spines can increase cAMP-PKA-calcium signaling to open HCN, KCNQ2, and/or SK channels to rapidly weaken a connection, e.g. as occurs during stress.[35]

DFC has been shown to be significantly related to human performance, including vigilance and aspects of attention.
It has been proposed and supported that the network behavior immediately prior to a task onset is a strong predictor of performance on that task. Traditionally, fMRI studies have focused on the magnitude of activation in brain regions as a predictor of performance, but recent research has shown that correlation between networks, as measured with sliding window analysis, is an even stronger predictor of performance.[24] Individual differences in functional connectivity variability (FCV) across sliding windows within fMRI scans have been shown to correlate with the tendency to attend to pain.[36] The degree to which a subject is mind-wandering away from a sensory stimulus has also been related to FCV.[37] One of the principal motivations of DFC analysis is to better understand, detect and treat neurological diseases. Static functional connectivity has been shown to be significantly related to a variety of diseases such as depression, schizophrenia, and Alzheimer's disease. Because of the newness of the field, DFC has only recently been used to investigate disease states, but since 2012 each of these three diseases has been shown to be correlated with dynamic temporal characteristics of functional connectivity. Most of these differences are related to the amount of time that is spent in different transient states.
Patients with schizophrenia have less frequent state changes than healthy subjects, and this result has led to the suggestion that the disease is related to patients being stuck in certain brain states where the brain is unable to respond quickly to different cues.[38] Also, a study in the visual sensory network showed that schizophrenia subjects spent more time than healthy subjects in a state in which the connectivity between the middle temporal gyrus and other regions of the visual sensory network is highly negative.[39] Studies of Alzheimer's disease have shown that patients with this ailment have altered network connectivity as well as altered time spent in the networks that are present.[40] The observed correlation between DFC and disease does not imply that the changes in DFC are the cause of any of these diseases, but information from DFC analysis may be used to better understand the effects of the disease and to diagnose it more quickly and accurately.
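The sliding-window correlation analysis used in many of the DFC studies described above can be sketched as follows. This is a minimal illustration, not any cited study's pipeline: the window length, step size, and synthetic BOLD data are assumptions chosen for demonstration.

```python
import numpy as np

def sliding_window_fc(ts, win_len, step):
    """Functional connectivity (correlation) matrices over sliding windows.
    ts: array of shape (timepoints, regions) holding region time series."""
    mats = []
    for start in range(0, ts.shape[0] - win_len + 1, step):
        window = ts[start:start + win_len]
        # Pairwise Pearson correlation between regions within this window
        mats.append(np.corrcoef(window, rowvar=False))
    return np.array(mats)  # shape: (n_windows, regions, regions)

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 4))   # synthetic "BOLD" signal, 4 regions
fc = sliding_window_fc(bold, win_len=30, step=5)
print(fc.shape)  # (35, 4, 4)
```

The variability of each off-diagonal entry across the first axis is then the FCV-style quantity discussed above; clustering the window matrices yields the transient "states" analyzed in the disease studies.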
https://en.wikipedia.org/wiki/Dynamic_functional_connectivity
In cartography, a trap street is a fictitious entry in the form of a misrepresented street on a map, often outside the area the map nominally covers, for the purpose of "trapping" potential plagiarists of the map who, if caught, would be unable to explain the inclusion of the "trap street" on their map as innocent. On maps that are not of streets, other "trap" features (such as nonexistent towns, or mountains with the wrong elevations) may be inserted or altered for the same purpose.[1] Trap streets are often nonexistent streets, but sometimes, rather than actually depicting a street where none exists, a map will misrepresent the nature of a street in a fashion that can still be used to detect copyright violators but is less likely to interfere with navigation. For instance, a map might add nonexistent bends to a street, or depict a major street as a narrow lane, without changing its location or its connections to other streets, or the trap street might be placed in an obscure location of a map that is unlikely to be referenced. Trap streets are rarely acknowledged by publishers. One exception is a popular driver's atlas for the city of Athens, Greece, which has a warning inside its front cover that potential copyright violators should beware of trap streets.[2] Trap streets are not copyrightable under the federal law of the United States. In Nester's Map & Guide Corp. v. Hagstrom Map Co. (1992),[3][4] a United States federal court found that copyright traps are not themselves protectable by copyright. There, the court stated: "[t]o treat 'false' facts interspersed among actual facts and represented as actual facts as fiction would mean that no one could ever reproduce or copy actual facts without risk of reproducing a false fact and thereby violating a copyright ... If such were the law, information could never be reproduced or widely disseminated." (Id.
at 733) In a 2001 case, The Automobile Association in the United Kingdom agreed to settle for £20,000,000 when it was caught copying Ordnance Survey maps. In this case, the identifying "fingerprints" were not deliberate errors but rather stylistic features such as the width of roads.[5] In another case, the Singapore Land Authority sued Virtual Map, an online publisher of maps, for infringing on its copyright. The Singapore Land Authority stated in its case that there were deliberate errors in maps it had provided to Virtual Map years earlier. Virtual Map denied this and insisted that it had done its own cartography.[6] The 1979 science fiction novel The Ultimate Enemy by Fred Saberhagen includes the short story "The Annihilation of Angkor Apeiron", in which a salesman allows a draft of a new Encyclopedia Galactica to be captured by alien war machines. It leads them to believe there is a nearby planet ripe for attack, but the planet is actually a copyright trap, and the aliens are led away from inhabited worlds, saving millions of lives. The 2010 novel Kraken by China Miéville features the trap streets of the London A-Z as places where the magical denizens of the city can exist without risk of being disturbed by normal folk. A 2013 film, Trap Street, inverts the usual meaning of a trap street: it becomes a real street that is deliberately obscured or removed from a map, and anyone who attempts to identify it by placing it on public record is then "trapped".[7] The 2015 Doctor Who episode "Face the Raven" features a hidden street where alien asylum seekers have taken shelter. Due to a psychic field that subconsciously makes observers ignore it, outsiders consider it a trap street when they see it on maps. One scene involves the character Clara Oswald discussing the definition of "trap street". The episode's working title was also "Trap Street".
https://en.wikipedia.org/wiki/Trap_street
In the field of artificial intelligence, the designation neuro-fuzzy refers to combinations of artificial neural networks and fuzzy logic. Neuro-fuzzy hybridization results in a hybrid intelligent system that combines the human-like reasoning style of fuzzy systems with the learning and connectionist structure of neural networks. Neuro-fuzzy hybridization is widely termed fuzzy neural network (FNN) or neuro-fuzzy system (NFS) in the literature. A neuro-fuzzy system (the more popular term, used henceforth) incorporates the human-like reasoning style of fuzzy systems through the use of fuzzy sets and a linguistic model consisting of a set of IF-THEN fuzzy rules. The main strength of neuro-fuzzy systems is that they are universal approximators with the ability to solicit interpretable IF-THEN rules. The strength of neuro-fuzzy systems involves two contradictory requirements in fuzzy modeling: interpretability versus accuracy. In practice, one of the two properties prevails. The neuro-fuzzy research field in fuzzy modeling is divided into two areas: linguistic fuzzy modeling, focused on interpretability (mainly the Mamdani model); and precise fuzzy modeling, focused on accuracy (mainly the Takagi-Sugeno-Kang (TSK) model). Although generally assumed to be the realization of a fuzzy system through connectionist networks, this term is also used to describe some other configurations including: Note that the interpretability of Mamdani-type neuro-fuzzy systems can be lost. To improve the interpretability of neuro-fuzzy systems, certain measures must be taken, wherein important aspects of interpretability of neuro-fuzzy systems are also discussed.[2] A recent research line addresses the data stream mining case, where neuro-fuzzy systems are sequentially updated with new incoming samples on demand and on-the-fly.
Thereby, system updates include not only a recursive adaptation of model parameters, but also a dynamic evolution and pruning of model components (neurons, rules), in order to handle concept drift and dynamically changing system behavior adequately and to keep the systems/models "up-to-date" at any time. Comprehensive surveys of various evolving neuro-fuzzy system approaches can be found in [3] and [4]. Pseudo outer product-based fuzzy neural networks (POPFNN) are a family of neuro-fuzzy systems that are based on the linguistic fuzzy model.[5] Three members of POPFNN exist in the literature: The POPFNN architecture is a five-layer neural network where the layers from 1 to 5 are called: input linguistic layer, condition layer, rule layer, consequent layer, and output linguistic layer. The fuzzification of the inputs and the defuzzification of the outputs are respectively performed by the input linguistic and output linguistic layers, while the fuzzy inference is collectively performed by the rule, condition and consequence layers. The learning process of POPFNN consists of three phases: Various fuzzy membership generation algorithms can be used: Learning Vector Quantization (LVQ), Fuzzy Kohonen Partitioning (FKP) or Discrete Incremental Clustering (DIC). Generally, the POP algorithm and its variant LazyPOP are used to identify the fuzzy rules.
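As a concrete illustration of the linguistic IF-THEN modeling described above, here is a minimal sketch. The linguistic terms, membership functions, and rule outputs are invented for illustration, and the defuzzification is a simple weighted average (zero-order Sugeno style) rather than any POPFNN learning algorithm.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Invented linguistic terms for a hypothetical temperature input.
def cold(t): return tri(t, -10, 0, 15)
def hot(t):  return tri(t, 10, 25, 40)

def fan_speed(t):
    """Evaluate two IF-THEN rules and defuzzify by weighted average."""
    rules = [(cold(t), 10.0),   # IF temp is cold THEN speed is low
             (hot(t),  90.0)]   # IF temp is hot  THEN speed is high
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(round(fan_speed(25.0), 1))  # 90.0
```

In a neuro-fuzzy system the parameters of `tri` and the rule consequents would be tuned by a neural learning procedure rather than fixed by hand, which is exactly the hybridization the article describes.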
https://en.wikipedia.org/wiki/Neuro-fuzzy
In statistics, completeness is a property of a statistic computed on a sample dataset in relation to a parametric model of the dataset. It is opposed to the concept of an ancillary statistic. While an ancillary statistic contains no information about the model parameters, a complete statistic contains only information about the parameters, and no ancillary information. It is closely related to the concept of a sufficient statistic, which contains all of the information that the dataset provides about the parameters.[1] Consider a random variable X whose probability distribution belongs to a parametric model Pθ parametrized by θ. Say T is a statistic; that is, the composition of a measurable function with a random sample X1,...,Xn. The statistic T is said to be complete for the distribution of X if, for every measurable function g,[1] The statistic T is said to be boundedly complete for the distribution of X if this implication holds for every measurable function g that is also bounded. The Bernoulli model admits a complete statistic.[1] Let X be a random sample of size n such that each Xi has the same Bernoulli distribution with parameter p. Let T be the number of 1s observed in the sample, i.e. T=∑i=1nXi{\displaystyle \textstyle T=\sum _{i=1}^{n}X_{i}}. T is a statistic of X which has a binomial distribution with parameters (n, p). If the parameter space for p is (0,1), then T is a complete statistic. To see this, note that Observe also that neither p nor 1 − p can be 0. Hence Ep(g(T))=0{\displaystyle E_{p}(g(T))=0} if and only if: On denoting p/(1 − p) by r, one gets: First, observe that the range of r is the positive reals. Also, E(g(T)) is a polynomial in r and, therefore, can only be identical to 0 if all coefficients are 0, that is, g(t) = 0 for all t. It is important to notice that the result that all coefficients must be 0 was obtained because of the range of r.
Had the parameter space been finite and with a number of elements less than or equal ton, it might be possible to solve the linear equations ing(t) obtained by substituting the values ofrand get solutions different from 0. For example, ifn= 1 and the parameter space is {0.5}, a single observation and a single parameter value,Tis not complete. Observe that, with the definition: then, E(g(T)) = 0 althoughg(t) is not 0 fort= 0 nor fort= 1. This example will show that, in a sampleX1,X2of size 2 from anormal distributionwith known variance, the statisticX1+X2is complete and sufficient. SupposeX1,X2areindependent, identically distributed random variables,normally distributedwith expectationθand variance 1. The sum is acomplete statisticforθ. To show this, it is sufficient to demonstrate that there is no non-zero functiong{\displaystyle g}such that the expectation of remains zero regardless of the value ofθ. That fact may be seen as follows. The probability distribution ofX1+X2is normal with expectation 2θand variance 2. Its probability density function inx{\displaystyle x}is therefore proportional to The expectation ofgabove would therefore be a constant times A bit of algebra reduces this to wherek(θ) is nowhere zero and As a function ofθthis is a two-sidedLaplace transformofh, and cannot be identically zero unlesshis zero almost everywhere.[2]The exponential is not zero, so this can only happen ifgis zero almost everywhere. By contrast, the statistic(X1,X2){\textstyle (X_{1},X_{2})}is sufficient but not complete. It admits a non-zero unbiased estimator of zero, namelyX1−X2{\textstyle X_{1}-X_{2}}. Most parametric models have asufficient statisticwhich is not complete. This is important because theLehmann–Scheffé theoremcannot be applied to such models. Galili and Meilijson 2016[3]propose the following didactic example. Considern{\displaystyle n}independent samples from the uniform distribution: k{\displaystyle k}is a known design parameter. 
This model is a scale family (a specific case of a location-scale family): scaling the samples by a multiplier c multiplies the parameter θ. Galili and Meilijson show that the minimum and maximum of the samples are together a sufficient statistic: X(1),X(n){\displaystyle X_{(1)},X_{(n)}} (using the usual notation for order statistics). Indeed, conditional on these two values, the distribution of the rest of the sample is simply uniform on the range they define: [X(1),X(n)]{\displaystyle \left[X_{(1)},X_{(n)}\right]}. However, their ratio has a distribution which does not depend on θ. This follows from the fact that this is a scale family: any change of scale impacts both variables identically. Subtracting the mean m from that distribution, we obtain: We have thus shown that there exists a function g(X(1),X(n)){\displaystyle g\left(X_{(1)},X_{(n)}\right)} which is not 0 everywhere but which has expectation 0. The pair is thus not complete. The notion of completeness has many applications in statistics, particularly in the following theorems of mathematical statistics. Completeness occurs in the Lehmann–Scheffé theorem,[1] which states that if a statistic is unbiased, complete and sufficient for some parameter θ, then it is the best mean-unbiased estimator for θ. In other words, this statistic has a smaller expected loss for any convex loss function; in many practical applications with the squared loss function, it has a smaller mean squared error than any other estimator with the same expected value. Examples exist in which the minimal sufficient statistic is not complete and several alternative statistics are available for unbiased estimation of θ, some of them having lower variance than others.[3] See also minimum-variance unbiased estimator.
Bounded completeness occurs in Basu's theorem,[1] which states that a statistic that is both boundedly complete and sufficient is independent of any ancillary statistic. Bounded completeness also occurs in Bahadur's theorem: in the case where there exists at least one minimal sufficient statistic, a statistic which is sufficient and boundedly complete is necessarily minimal sufficient.[4]
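The Bernoulli counterexample above (n = 1, parameter space {0.5}) can be checked numerically. The function `g` below is one hypothetical unbiased estimator of zero chosen for illustration; any g with g(0) = −g(1) works when p = 0.5.

```python
from math import comb

def expect_g(g, n, p):
    """E_p[g(T)] for T ~ Binomial(n, p), computed by direct summation."""
    return sum(g(t) * comb(n, t) * p**t * (1 - p)**(n - t)
               for t in range(n + 1))

# With the parameter space restricted to {0.5}, g(0) = 1, g(1) = -1
# has expectation zero without being zero, so T is not complete there.
g = lambda t: 1 if t == 0 else -1
print(expect_g(g, 1, 0.5))   # 0.0

# Over the full parameter space (0, 1) the same g fails to have mean
# zero, e.g. at p = 0.3 the expectation is 0.7 - 0.3, i.e. nonzero:
print(expect_g(g, 1, 0.3))
```

This mirrors the argument in the text: E(g(T)) is a polynomial in r = p/(1 − p), and only when r ranges over all positive reals must every coefficient, hence every g(t), vanish.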
https://en.wikipedia.org/wiki/Completeness_(statistics)
ARINC 429,[1] the "Mark 33 Digital Information Transfer System (DITS)," is the ARINC technical standard for the predominant avionics data bus used on most higher-end commercial and transport aircraft.[2] It defines the physical and electrical interfaces of a two-wire data bus and a data protocol to support an aircraft's avionics local area network. ARINC 429 is a data transfer standard for aircraft avionics. It uses a self-clocking, self-synchronizing data bus protocol (Tx and Rx are on separate ports). The physical connection wires are twisted pairs carrying balanced differential signaling. Data words are 32 bits in length and most messages consist of a single data word. Messages are transmitted at either 12.5 or 100 kbit/s[3] to other system elements that are monitoring the bus messages. The transmitter constantly transmits either 32-bit data words or the NULL state (0 volts). A single wire pair is limited to one transmitter and no more than 20 receivers. The protocol allows for self-clocking at the receiver end, thus eliminating the need to transmit clocking data. ARINC 429 is an alternative to MIL-STD-1553. The ARINC 429 unit of transmission is a fixed-length 32-bit frame, which the standard refers to as a 'word'. The bits within an ARINC 429 word are serially identified from Bit Number 1 to Bit Number 32,[4] or simply Bit 1 to Bit 32. The fields and data structures of the ARINC 429 word are defined in terms of this numbering. While it is common to illustrate serial protocol frames progressing in time from right to left, a reversed ordering is commonly practiced within the ARINC standard. Even though ARINC 429 word transmission begins with Bit 1 and ends with Bit 32, it is common to diagram[5] and describe[6][7] ARINC 429 words in the order from Bit 32 to Bit 1.
In simplest terms, while the transmission order of bits (from the first transmitted bit to the last transmitted bit) for a 32-bit frame is conventionally diagrammed from Bit 1 through Bit 32, this sequence is often diagrammed in ARINC 429 publications in the opposite direction, from Bit 32 through Bit 1. Generally, when the ARINC 429 word format is illustrated with Bit 32 to the left, the numeric representations in the data field are read with the most significant bit on the left. However, in this particular bit order presentation, the Label field reads with its most significant bit on the right. Like CAN protocol identifier fields,[8] ARINC 429 label fields are transmitted most significant bit first. However, like the UART protocol, binary-coded decimal numbers and binary numbers in the ARINC 429 data fields are generally transmitted least significant bit first. Some equipment suppliers[9][10] publish the bit transmission order with the Label bits renumbered. The suppliers that use this representation have in effect renumbered the bits in the Label field, converting the standard's MSB 1 bit numbering for that field to LSB 1 bit numbering. This renumbering highlights the relative reversal of "bit endianness" between the Label representation and numeric data representations as defined within the ARINC 429 standard. Of note is how the 8 7 6 5 4 3 2 1 bit numbering is similar to the 7 6 5 4 3 2 1 0 bit numbering common in digital equipment, but reversed from the 1 2 3 4 5 6 7 8 bit numbering defined for the ARINC 429 Label field. This notional reversal also reflects historical implementation details. ARINC 429 transceivers have been implemented with 32-bit shift registers.[11] Parallel access to that shift register is often octet-oriented. As such, the bit order of the octet access is the bit order of the accessing device, which is usually LSB 0; and serial transmission is arranged such that the least significant bit of each octet is transmitted first.
So, in common practice, the accessing device wrote or read a "reversed label"[12] (for example, to transmit Label 213₈ [or 8B₁₆], the bit-reversed value D1₁₆ is written to the Label octet). Newer or "enhanced" transceivers may be configured to reverse the Label field bit order "in hardware."[13] Each ARINC 429 word is a 32-bit sequence that contains five fields: The image below exemplifies many of the concepts explained in the adjacent sections. In this image the Label (260) appears in red, the Data in blue-green and the Parity bit in navy blue. Label guidelines are provided as part of the ARINC 429 specification for various equipment types. Each aircraft will contain a number of different systems, such as flight management computers, inertial reference systems, air data computers, radar altimeters, radios, and GPS sensors. For each type of equipment, a set of standard parameters is defined, which is common across all manufacturers and models. For example, any air data computer will provide the barometric altitude of the aircraft as label 203. This allows some degree of interchangeability of parts, as all air data computers behave, for the most part, in the same way. There are only a limited number of labels, though, and so label 203 may have some completely different meaning if sent by a GPS sensor, for example. Very commonly needed aircraft parameters, however, use the same label regardless of source. Also, as with any specification, each manufacturer has slight differences from the formal specification, such as providing extra data above and beyond the specification, leaving out some data recommended by the specification, or making other various changes. Avionics systems must meet environmental requirements, usually stated as RTCA DO-160 environmental categories. ARINC 429 employs several physical, electrical, and protocol techniques to minimize electromagnetic interference with on-board radios and other equipment, for example via other transmission cables.
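The Label bit reversal described above (Label 213₈ written as its bit-reversed value D1₁₆) can be sketched with a generic 8-bit reversal routine. This is an illustration of the arithmetic, not any particular vendor's driver API.

```python
def reverse8(byte):
    """Reverse the bit order of an 8-bit value (MSB <-> LSB)."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (byte & 1)  # shift the lowest bit in
        byte >>= 1
    return out

# Label 213 octal is 0x8B; an LSB-first octet-oriented transceiver
# needs the bit-reversed value 0xD1 written to its Label octet.
label = 0o213
print(hex(reverse8(label)))  # 0xd1
```

Reversal is its own inverse, so the same routine recovers the standard Label numbering when reading a received word back out of such a shift register.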
Its cabling is a shielded 78 Ω twisted pair.[1] ARINC signaling defines a 10 Vp differential between the Data A and Data B levels within the bipolar transmission (i.e. 5 V on Data A and −5 V on Data B would constitute a valid driving signal), and the specification defines acceptable voltage rise and fall times. ARINC 429's data encoding uses a complementary differential bipolar return-to-zero (BPRZ) transmission waveform, further reducing EMI emissions from the cable itself. When developing and/or troubleshooting the ARINC 429 bus, examination of hardware signals can be very important to find problems. A protocol analyzer is useful to collect, analyze, decode and store signals.
https://en.wikipedia.org/wiki/ARINC_429
ISO 5776, published by the International Organization for Standardization (ISO), is an international standard that specifies symbols for proofreading of manuscripts, typescripts and printer's proofs.[1] The total number of symbols specified is 16, each in English, French and Russian. The standard is partially derived from the British Standard BS-5261,[2] but is closer to the German standards DIN 16511 and 16549-1. All of these standards date from the time before desktop publishing. A first edition of the standard was published in 1983.[3] A second edition, published in 2016, cancelled and replaced the first edition.[4] The third, revised edition was published in 2022 and replaced the second edition.[5]
https://en.wikipedia.org/wiki/ISO_5776
The Catalan numbers are a sequence of natural numbers that occur in various counting problems, often involving recursively defined objects. They are named after Eugène Catalan, though they were previously discovered in the 1730s by Minggatu. The n-th Catalan number can be expressed directly in terms of the central binomial coefficients by {\displaystyle C_{n}={\frac {1}{n+1}}{\binom {2n}{n}}={\frac {(2n)!}{(n+1)!\,n!}}}. The first Catalan numbers for n = 0, 1, 2, 3, ... are 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ... An alternative expression for Cn is {\displaystyle C_{n}={\binom {2n}{n}}-{\binom {2n}{n+1}}}, which is equivalent to the expression given above because (2nn+1)=nn+1(2nn){\displaystyle {\tbinom {2n}{n+1}}={\tfrac {n}{n+1}}{\tbinom {2n}{n}}}. This expression shows that Cn is an integer, which is not immediately obvious from the first formula given. This expression forms the basis for a proof of the correctness of the formula. Another alternative expression is {\displaystyle C_{n}={\frac {1}{2n+1}}{\binom {2n+1}{n}}}, which can be directly interpreted in terms of the cycle lemma; see below. The Catalan numbers satisfy the recurrence relations {\displaystyle C_{0}=1,\quad C_{n+1}=\sum _{i=0}^{n}C_{i}C_{n-i}} and {\displaystyle C_{0}=1,\quad C_{n+1}={\frac {2(2n+1)}{n+2}}C_{n}}. Asymptotically, the Catalan numbers grow as Cn∼4nn3/2π,{\displaystyle C_{n}\sim {\frac {4^{n}}{n^{3/2}{\sqrt {\pi }}}}\,,} in the sense that the quotient of the n-th Catalan number and the expression on the right tends towards 1 as n approaches infinity. This can be proved by using the asymptotic growth of the central binomial coefficients, by Stirling's approximation for n!, or via generating functions. The only Catalan numbers Cn that are odd are those for which n = 2k − 1; all others are even. The only prime Catalan numbers are C2 = 2 and C3 = 5.[1] More generally, the multiplicity with which a prime p divides Cn can be determined by first expressing n + 1 in base p. For p = 2, the multiplicity is the number of 1 bits, minus 1. For p an odd prime, count all digits greater than (p + 1)/2; also count digits equal to (p + 1)/2 unless final; and count digits equal to (p − 1)/2 if not final and the next digit is counted.[2] The only known odd Catalan numbers that do not have last digit 5 are C0 = 1, C1 = 1, C7 = 429, C31, C127 and C255.
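The formula for Cn in terms of central binomial coefficients, the alternative expression, and the recurrence relations can all be checked against one another numerically:

```python
from math import comb

def catalan(n):
    """n-th Catalan number via the central binomial coefficient."""
    return comb(2 * n, n) // (n + 1)

# Alternative expression: C_n = C(2n, n) - C(2n, n+1)
assert all(catalan(n) == comb(2 * n, n) - comb(2 * n, n + 1)
           for n in range(20))

# Convolution recurrence: C_0 = 1, C_{n+1} = sum_i C_i * C_{n-i}
c = [1]
for n in range(19):
    c.append(sum(c[i] * c[n - i] for i in range(n + 1)))
assert c == [catalan(n) for n in range(20)]

print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]
```

Integer division by n + 1 is exact here, which is itself a restatement of the integrality property the alternative expression makes obvious.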
The odd Catalan numbers,Cnforn= 2k− 1, do not have last digit 5 ifn+ 1has a base 5 representation containing 0, 1 and 2 only, except in the least significant place, which could also be a 3.[3] The Catalan numbers have the integral representations[4][5] which immediately yields∑n=0∞Cn4n=2{\displaystyle \sum _{n=0}^{\infty }{\frac {C_{n}}{4^{n}}}=2}. This has a simple probabilistic interpretation. Consider a random walk on the integer line, starting at 0. Let -1 be a "trap" state, such that if the walker arrives at -1, it will remain there. The walker can arrive at the trap state at times 1, 3, 5, 7..., and the number of ways the walker can arrive at the trap state at time2k+1{\displaystyle 2k+1}isCk{\displaystyle C_{k}}. Since the 1D random walk is recurrent, the probability that the walker eventually arrives at -1 is∑n=0∞Cn22n+1=1{\displaystyle \sum _{n=0}^{\infty }{\frac {C_{n}}{2^{2n+1}}}=1}. There are many counting problems incombinatoricswhose solution is given by the Catalan numbers. The bookEnumerative Combinatorics: Volume 2by combinatorialistRichard P. Stanleycontains a set of exercises which describe 66 different interpretations of the Catalan numbers. Following are some examples, with illustrations of the casesC3= 5andC4= 14. The following diagrams show the casen= 4: This can be represented by listing the Catalan elements by column height:[8] There are several ways of explaining why the formula solves the combinatorial problems listed above. The first proof below uses agenerating function. The other proofs are examples ofbijective proofs; they involve literally counting a collection of some kind of object to arrive at the correct formula. We first observe that all of the combinatorial problems listed above satisfySegner's[9]recurrence relation For example, every Dyck wordwof length ≥ 2 can be written in a unique way in the form with (possibly empty) Dyck wordsw1andw2. 
Thegenerating functionfor the Catalan numbers is defined by The recurrence relation given above can then be summarized in generating function form by the relation in other words, this equation follows from the recurrence relation by expanding both sides intopower series. On the one hand, the recurrence relation uniquely determines the Catalan numbers; on the other hand, interpretingxc2−c+ 1 = 0as aquadratic equationofcand using thequadratic formula, the generating function relation can be algebraically solved to yield two solution possibilities From the two possibilities, the second must be chosen because only the second gives The square root term can be expanded as a power series using thebinomial series 1−1−4x=−∑n=1∞(1/2n)(−4x)n=−∑n=1∞(−1)n−1(2n−3)!!2nn!(−4x)n=−∑n=0∞(−1)n(2n−1)!!2n+1(n+1)!(−4x)n+1=∑n=0∞2n+1(2n−1)!!(n+1)!xn+1=∑n=0∞2(2n)!(n+1)!n!xn+1=∑n=0∞2n+1(2nn)xn+1.{\displaystyle {\begin{aligned}1-{\sqrt {1-4x}}&=-\sum _{n=1}^{\infty }{\binom {1/2}{n}}(-4x)^{n}=-\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}(2n-3)!!}{2^{n}n!}}(-4x)^{n}\\&=-\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n-1)!!}{2^{n+1}(n+1)!}}(-4x)^{n+1}=\sum _{n=0}^{\infty }{\frac {2^{n+1}(2n-1)!!}{(n+1)!}}x^{n+1}\\&=\sum _{n=0}^{\infty }{\frac {2(2n)!}{(n+1)!n!}}x^{n+1}=\sum _{n=0}^{\infty }{\frac {2}{n+1}}{\binom {2n}{n}}x^{n+1}\,.\end{aligned}}}Thus,c(x)=1−1−4x2x=∑n=0∞1n+1(2nn)xn.{\displaystyle c(x)={\frac {1-{\sqrt {1-4x}}}{2x}}=\sum _{n=0}^{\infty }{\frac {1}{n+1}}{\binom {2n}{n}}x^{n}\,.} We count the number of paths which start and end on the diagonal of ann×ngrid. All such paths havenright andnup steps. Since we can choose which of the2nsteps are up or right, there are in total(2nn){\displaystyle {\tbinom {2n}{n}}}monotonic paths of this type. Abadpath crosses the main diagonal and touches the next higher diagonal (red in the illustration). The part of the path after the higher diagonal is then flipped about that diagonal, as illustrated with the red dotted line. 
This swaps all the right steps to up steps and vice versa. In the section of the path that is not reflected, there is one more up step than right steps, and therefore the remaining section of the bad path has one more right step than up steps. When this portion of the path is reflected, it will have one more up step than right steps. Since there are still 2n steps, there are now n + 1 up steps and n − 1 right steps. So, instead of reaching (n, n), all bad paths after reflection end at (n − 1, n + 1). Because every monotonic path in the (n − 1) × (n + 1) grid meets the higher diagonal, and because the reflection process is reversible, the reflection is therefore a bijection between bad paths in the original grid and monotonic paths in the new grid. The number of bad paths is therefore: and the number of Catalan paths (i.e. good paths) is obtained by removing the number of bad paths from the total number of monotonic paths of the original grid, In terms of Dyck words, we start with a (non-Dyck) sequence of n X's and n Y's and interchange all X's and Y's after the first Y that violates the Dyck condition. After this Y, note that there is exactly one more Y than there are X's. This bijective proof provides a natural explanation for the term n + 1 appearing in the denominator of the formula for Cn. A generalized version of this proof can be found in a paper of Rukavicka Josef (2011).[10] Given a monotonic path, the exceedance of the path is defined to be the number of vertical edges above the diagonal. For example, in Figure 2, the edges above the diagonal are marked in red, so the exceedance of this path is 5. Given a monotonic path whose exceedance is not zero, we apply the following algorithm to construct a new path whose exceedance is 1 less than the one we started with. In Figure 3, the black dot indicates the point where the path first crosses the diagonal.
The black edge isX, and we place the last lattice point of the red portion in the top-right corner, and the first lattice point of the green portion in the bottom-left corner, and place X accordingly, to make a new path, shown in the second diagram. The exceedance has dropped from3to2. In fact, the algorithm causes the exceedance to decrease by1for any path that we feed it, because the first vertical step starting on the diagonal (at the point marked with a black dot) is the only vertical edge that changes from being above the diagonal to being below it when we apply the algorithm - all the other vertical edges stay on the same side of the diagonal. It can be seen that this process isreversible: given any pathPwhose exceedance is less thann, there is exactly one path which yieldsPwhen the algorithm is applied to it. Indeed, the (black) edgeX, which originally was the first horizontal step ending on the diagonal, has become thelasthorizontal stepstartingon the diagonal. Alternatively, reverse the original algorithm to look for the first edge that passesbelowthe diagonal. This implies that the number of paths of exceedancenis equal to the number of paths of exceedancen− 1, which is equal to the number of paths of exceedancen− 2, and so on, down to zero. In other words, we have split up the set ofallmonotonic paths inton+ 1equally sized classes, corresponding to the possible exceedances between 0 andn. Since there are(2nn){\displaystyle \textstyle {2n \choose n}}monotonic paths, we obtain the desired formulaCn=1n+1(2nn).{\displaystyle \textstyle C_{n}={\frac {1}{n+1}}{2n \choose n}.} Figure 4 illustrates the situation forn= 3. Each of the 20 possible monotonic paths appears somewhere in the table. The first column shows all paths of exceedance three, which lie entirely above the diagonal. The columns to the right show the result of successive applications of the algorithm, with the exceedance decreasing one unit at a time. 
There are five rows, that is, C3 = 5, and the last column displays all paths no higher than the diagonal. Using Dyck words, start with a sequence from {\displaystyle \textstyle {\binom {2n}{n}}}. Let {\displaystyle X_{d}} be the first X that brings an initial subsequence to equality, and configure the sequence as {\displaystyle (F)X_{d}(L)}. The new sequence is {\displaystyle LXF}. This proof uses the triangulation definition of Catalan numbers to establish a relation between Cn and Cn+1. Given a polygon P with n + 2 sides and a triangulation, mark one of its sides as the base, and also orient one of its 2n + 1 total edges. There are (4n + 2)Cn such marked triangulations for a given base. Given a polygon Q with n + 3 sides and a (different) triangulation, again mark one of its sides as the base. Mark one of the sides other than the base side (and not an inner triangle edge). There are (n + 2)Cn+1 such marked triangulations for a given base. There is a simple bijection between these two marked triangulations: We can either collapse the triangle in Q whose side is marked (in two ways, and subtract the two that cannot collapse the base), or, in reverse, expand the oriented edge in P to a triangle and mark its new side. Thus {\displaystyle (4n+2)C_{n}=(n+2)C_{n+1}.} Write {\displaystyle \textstyle {\frac {4n-2}{n+1}}C_{n-1}=C_{n}.} Because we have {\displaystyle \textstyle {\frac {4n-2}{n+1}}\cdot {\frac {1}{n}}{2n-2 \choose n-1}={\frac {1}{n+1}}{2n \choose n},} applying the recursion with {\displaystyle C_{0}=1} gives the result. This proof is based on the Dyck words interpretation of the Catalan numbers, so {\displaystyle C_{n}} is the number of ways to correctly match n pairs of brackets. We denote a (possibly empty) correct string with c and its inverse with c'. Since any c can be uniquely decomposed into {\displaystyle c=(c_{1})c_{2}}, summing over the possible lengths of {\displaystyle c_{1}} immediately gives the recursive definition {\displaystyle C_{0}=1\quad {\text{and}}\quad C_{n+1}=\sum _{i=0}^{n}C_{i}\,C_{n-i}.} Let b be a balanced string of length 2n, i.e. b contains an equal number of {\displaystyle (} and {\displaystyle )}, so {\displaystyle \textstyle B_{n}={2n \choose n}}.
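Both recursions mentioned in the passage above, the triangulation recurrence Cn = ((4n − 2)/(n + 1))·Cn−1 and the convolution recursion obtained by splitting a Dyck word as (c1)c2, can be checked against the closed form; a sketch (not from the article, function names are mine):

```python
from math import comb

def catalan_product(n):
    """C_n from the triangulation recurrence C_k = (4k-2)/(k+1) * C_{k-1}."""
    c = 1
    for k in range(1, n + 1):
        c = c * (4 * k - 2) // (k + 1)   # the division is always exact
    return c

def catalan_convolution(n):
    """C_n from C_0 = 1 and C_{k+1} = sum_i C_i * C_{k-i}
    (splitting a Dyck word as (c1)c2)."""
    cs = [1]
    for k in range(n):
        cs.append(sum(cs[i] * cs[k - i] for i in range(k + 1)))
    return cs[n]

for n in range(10):
    closed = comb(2 * n, n) // (n + 1)
    assert catalan_product(n) == closed == catalan_convolution(n)
```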
A balanced string can also be uniquely decomposed into either {\displaystyle (c)b} or {\displaystyle )c'(b}, so {\displaystyle \textstyle B_{n+1}=2\sum _{i=0}^{n}C_{i}B_{n-i}.} Any incorrect (non-Catalan) balanced string starts with {\displaystyle c)}, and the remaining string has one more {\displaystyle (} than {\displaystyle )}, so {\displaystyle \textstyle B_{n+1}-C_{n+1}=\sum _{i=0}^{n}C_{i}{2(n-i)+1 \choose n-i}.} Also, from the definitions, we have: {\displaystyle \textstyle {2(n-i)+1 \choose n-i}={\frac {1}{2}}{2(n-i)+2 \choose n-i+1}={\frac {1}{2}}B_{n-i+1},} so {\displaystyle \textstyle B_{n+1}-C_{n+1}={\frac {1}{2}}\sum _{i=0}^{n}C_{i}B_{n-i+1}={\frac {1}{4}}B_{n+2}-{\frac {1}{2}}C_{n+1}.} Therefore, as this is true for all n, {\displaystyle \textstyle C_{n}=2B_{n}-{\frac {1}{2}}B_{n+1}=2{2n \choose n}-{\frac {1}{2}}{2n+2 \choose n+1}={\frac {1}{n+1}}{2n \choose n}.} This proof is based on the Dyck words interpretation of the Catalan numbers and uses the cycle lemma of Dvoretzky and Motzkin.[11][12] We call a sequence of X's and Y's dominating if, reading from left to right, the number of X's is always strictly greater than the number of Y's. The cycle lemma[13] states that any sequence of {\displaystyle m} X's and {\displaystyle n} Y's, where {\displaystyle m>n}, has precisely {\displaystyle m-n} dominating circular shifts. To see this, arrange the given sequence of {\displaystyle m+n} X's and Y's in a circle. Repeatedly removing XY pairs leaves exactly {\displaystyle m-n} X's. Each of these X's was the start of a dominating circular shift before anything was removed. For example, consider {\displaystyle {\mathit {XXYXY}}}. This sequence is dominating, but none of its circular shifts {\displaystyle {\mathit {XYXYX}}}, {\displaystyle {\mathit {YXYXX}}}, {\displaystyle {\mathit {XYXXY}}} and {\displaystyle {\mathit {YXXYX}}} are. A string is a Dyck word of {\displaystyle n} X's and {\displaystyle n} Y's if and only if prepending an X to the Dyck word gives a dominating sequence with {\displaystyle n+1} X's and {\displaystyle n} Y's, so we can count the former by instead counting the latter. In particular, when {\displaystyle m=n+1}, there is exactly one dominating circular shift. There are {\displaystyle \textstyle {2n+1 \choose n}} sequences with exactly {\displaystyle n+1} X's and {\displaystyle n} Y's. For each of these, only one of the {\displaystyle 2n+1} circular shifts is dominating.
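The cycle lemma can be verified exhaustively for small cases; a sketch (not from the article, function names are mine):

```python
from itertools import combinations

def is_dominating(w):
    """Reading left to right, X's always strictly outnumber Y's."""
    bal = 0
    for s in w:
        bal += 1 if s == 'X' else -1
        if bal <= 0:
            return False
    return True

def dominating_shifts(w):
    """Number of circular shifts of w that are dominating."""
    return sum(is_dominating(w[i:] + w[:i]) for i in range(len(w)))

def sequences(m, n):
    """All sequences of m X's and n Y's."""
    for pos in combinations(range(m + n), m):
        w = ['Y'] * (m + n)
        for p in pos:
            w[p] = 'X'
        yield ''.join(w)

# The lemma: every sequence with m > n has exactly m - n dominating shifts.
for m, n in [(3, 2), (4, 2), (4, 3), (5, 1)]:
    assert all(dominating_shifts(w) == m - n for w in sequences(m, n))

assert dominating_shifts('XXYXY') == 1   # the example in the text
```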
Therefore there are {\displaystyle \textstyle {\frac {1}{2n+1}}{2n+1 \choose n}=C_{n}} distinct sequences of {\displaystyle n+1} X's and {\displaystyle n} Y's that are dominating, each of which corresponds to exactly one Dyck word. The n × n Hankel matrix whose (i, j) entry is the Catalan number Ci+j−2 has determinant 1, regardless of the value of n. For example, for n = 4 we have {\displaystyle \det {\begin{bmatrix}1&1&2&5\\1&2&5&14\\2&5&14&42\\5&14&42&132\end{bmatrix}}=1.} Moreover, if the indexing is "shifted" so that the (i, j) entry is filled with the Catalan number Ci+j−1 then the determinant is still 1, regardless of the value of n. For example, for n = 4 we have {\displaystyle \det {\begin{bmatrix}1&2&5&14\\2&5&14&42\\5&14&42&132\\14&42&132&429\end{bmatrix}}=1.} Taken together, these two conditions uniquely define the Catalan numbers. Another feature unique to the Catalan–Hankel matrix is that the n × n submatrix starting at 2 has determinant n + 1: {\displaystyle \det {\begin{bmatrix}2\end{bmatrix}}=2,\quad \det {\begin{bmatrix}2&5\\5&14\end{bmatrix}}=3,\quad \det {\begin{bmatrix}2&5&14\\5&14&42\\14&42&132\end{bmatrix}}=4,} et cetera. The Catalan sequence was described in 1751 by Leonhard Euler, who was interested in the number of different ways of dividing a polygon into triangles. The sequence is named after Eugène Charles Catalan, who discovered the connection to parenthesized expressions during his exploration of the Towers of Hanoi puzzle. The reflection counting trick (second proof) for Dyck words was found by Désiré André in 1887. The name "Catalan numbers" originated from John Riordan.[14] In 1988, it came to light that the Catalan number sequence had been used in China by the Mongolian mathematician Mingantu by 1730.[15][16] That is when he started to write his book Ge Yuan Mi Lu Jie Fa [The Quick Method for Obtaining the Precise Ratio of Division of a Circle], which was completed by his student Chen Jixin in 1774 but published sixty years later. Peter J. Larcombe (1999) sketched some of the features of the work of Mingantu, including the stimulus of Pierre Jartoux, who brought three infinite series to China early in the 1700s. For instance, Ming used the Catalan sequence to express series expansions of {\displaystyle \sin(2\alpha )} and {\displaystyle \sin(4\alpha )} in terms of {\displaystyle \sin(\alpha )}.
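The Hankel determinant identities stated earlier can be verified numerically; a sketch (not from the article) using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def det(matrix):
    """Determinant by Gaussian elimination over exact Fractions."""
    m = [[Fraction(x) for x in row] for row in matrix]
    size = len(m)
    d = Fraction(1)
    for i in range(size):
        pivot = next((r for r in range(i, size) if m[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]   # row swap flips the sign
            d = -d
        d *= m[i][i]
        for r in range(i + 1, size):
            f = m[r][i] / m[i][i]
            for c in range(i, size):
                m[r][c] -= f * m[i][c]
    return d

for n in range(1, 7):
    assert det([[catalan(i + j) for j in range(n)] for i in range(n)]) == 1
    assert det([[catalan(i + j + 1) for j in range(n)] for i in range(n)]) == 1
    assert det([[catalan(i + j + 2) for j in range(n)] for i in range(n)]) == n + 1
```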
The Catalan numbers can be interpreted as a special case of Bertrand's ballot theorem. Specifically, {\displaystyle C_{n}} is the number of ways for a candidate A with n + 1 votes to lead candidate B with n votes throughout the count. The two-parameter sequence of non-negative integers {\displaystyle {\frac {(2m)!(2n)!}{(m+n)!m!n!}}} is a generalization of the Catalan numbers. These are named super-Catalan numbers, after Ira Gessel. They should not be confused with the Schröder–Hipparchus numbers, which are sometimes also called super-Catalan numbers. For {\displaystyle m=1}, this is just two times the ordinary Catalan numbers, and for {\displaystyle m=n}, the numbers have an easy combinatorial description. However, other combinatorial descriptions are only known[17] for {\displaystyle m=2,3} and {\displaystyle 4},[18] and it is an open problem to find a general combinatorial interpretation. Sergey Fomin and Nathan Reading have given a generalized Catalan number associated to any finite crystallographic Coxeter group, namely the number of fully commutative elements of the group; in terms of the associated root system, it is the number of anti-chains (or order ideals) in the poset of positive roots. The classical Catalan number {\displaystyle C_{n}} corresponds to the root system of type {\displaystyle A_{n}}. The classical recurrence relation generalizes: the Catalan number of a Coxeter diagram is equal to the sum of the Catalan numbers of all its maximal proper sub-diagrams.[19] The Catalan numbers are a solution of a version of the Hausdorff moment problem.[20] The Catalan k-fold convolution, where k = m, is:[21]
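The super-Catalan numbers defined above are easy to tabulate; a quick sketch (not from the article, function names are mine) that also checks the m = 1 and m = n special cases mentioned in the text:

```python
from math import comb, factorial

def super_catalan(m, n):
    """(2m)! (2n)! / ((m+n)! m! n!), always an integer."""
    return (factorial(2 * m) * factorial(2 * n)
            // (factorial(m + n) * factorial(m) * factorial(n)))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(8):
    assert super_catalan(1, n) == 2 * catalan(n)    # m = 1: twice Catalan
    assert super_catalan(n, n) == comb(2 * n, n)    # m = n: central binomial
    for m in range(8):
        assert super_catalan(m, n) == super_catalan(n, m)  # symmetry
```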
https://en.wikipedia.org/wiki/Catalan_number
A neural circuit is a population of neurons interconnected by synapses to carry out a specific function when activated.[1] Multiple neural circuits interconnect with one another to form large scale brain networks.[2] Neural circuits have inspired the design of artificial neural networks, though there are significant differences. Early treatments of neural networks can be found in Herbert Spencer's Principles of Psychology, 3rd edition (1872), Theodor Meynert's Psychiatry (1884), William James' Principles of Psychology (1890), and Sigmund Freud's Project for a Scientific Psychology (composed 1895).[3] The first rule of neuronal learning was described by Hebb in 1949, in the Hebbian theory. Thus, Hebbian pairing of pre-synaptic and post-synaptic activity can substantially alter the dynamic characteristics of the synaptic connection and therefore either facilitate or inhibit signal transmission. In 1943, the neurophysiologist Warren Sturgis McCulloch and the logician Walter Pitts published the first works on the processing of neural networks.[4] They showed theoretically that networks of artificial neurons could implement logical, arithmetic, and symbolic functions. Simplified models of biological neurons were set up, now usually called perceptrons or artificial neurons. These simple models accounted for neural summation (i.e., potentials at the post-synaptic membrane will summate in the cell body). Later models also provided for excitatory and inhibitory synaptic transmission. The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks. The basic kinds of connections between neurons are synapses: both chemical and electrical synapses. The establishment of synapses enables the connection of neurons into millions of overlapping and interlinking neural circuits.
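The simplified threshold models described above can be illustrated with a minimal sketch (mine, not from the source): a unit fires when its weighted input sum reaches a threshold, with negative weights standing in for inhibitory synapses.

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (1) iff the summed, weighted input reaches the threshold,
    a crude stand-in for neural summation at the cell body."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both excitatory inputs must be active.
assert threshold_neuron([1, 1], [1, 1], 2) == 1
assert threshold_neuron([1, 0], [1, 1], 2) == 0
# An active inhibitory input (negative weight) vetoes firing.
assert threshold_neuron([1, 1], [2, -1], 2) == 0
```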
Presynaptic proteins called neurexins are central to this process.[5] One principle by which neurons work is neural summation: potentials at the postsynaptic membrane will sum in the cell body. If the depolarization of the neuron at the axon hillock goes above threshold, an action potential will occur that travels down the axon to the terminal endings to transmit a signal to other neurons. Excitatory and inhibitory synaptic transmission is realized mostly by excitatory postsynaptic potentials (EPSPs), and inhibitory postsynaptic potentials (IPSPs). On the electrophysiological level, there are various phenomena which alter the response characteristics of individual synapses (called synaptic plasticity) and individual neurons (intrinsic plasticity). These are often divided into short-term plasticity and long-term plasticity. Long-term synaptic plasticity is often contended to be the most likely memory substrate. Usually, the term "neuroplasticity" refers to changes in the brain that are caused by activity or experience. Connections display temporal and spatial characteristics. Temporal characteristics refer to the continuously modified activity-dependent efficacy of synaptic transmission, called spike-timing-dependent plasticity. It has been observed in several studies that the synaptic efficacy of this transmission can undergo short-term increase (called facilitation) or decrease (depression) according to the activity of the presynaptic neuron. The induction of long-term changes in synaptic efficacy, by long-term potentiation (LTP) or long-term depression (LTD), depends strongly on the relative timing of the onset of the excitatory postsynaptic potential and the postsynaptic action potential. LTP is induced by a series of action potentials which cause a variety of biochemical responses. Eventually, the reactions cause the expression of new receptors on the cellular membranes of the postsynaptic neurons or increase the efficacy of the existing receptors through phosphorylation.
Backpropagating action potentials normally cannot occur because, after an action potential travels down a given segment of the axon, the h gates (inactivation gates) on voltage-gated sodium channels close; this prevents any transient opening of the m gates (activation gates) from causing a change in the intracellular sodium ion (Na+) concentration, and so prevents the generation of an action potential back towards the cell body. In some cells, however, neural backpropagation does occur through the dendritic branching and may have important effects on synaptic plasticity and computation. At a neuromuscular junction, a single signal from a motor neuron suffices to stimulate contraction of the postsynaptic muscle cell. In the spinal cord, however, at least 75 afferent neurons are required to produce firing. This picture is further complicated by variation in time constant between neurons, as some cells can experience their EPSPs over a wider period of time than others. While synaptic depression has been particularly widely observed at synapses in the developing brain, it has been speculated that it changes to facilitation in adult brains. Neural connections are built and maintained primarily by glia. Astrocytes, a type of glial cell, have been implicated for their influence on synaptogenesis. The presence of astrocytes in rat retinal ganglion cell (RGC) cultures increased synaptic growth, suggesting that they play a role in the process. Through signaling between synapses and astrocytes, the number of synapses is regulated as neuronal circuits develop. Additionally, astrocytes release proteins to maintain homeostatic plasticity for the entire circuit and the synapse itself.[6] Early life adversity (ELA) during critical periods of development can influence circuitry. People exposed to several adverse life events undergo changes in connectivity that shape fear perception and cognition. Compared with individuals without ELA, amygdala volume is lower, which is associated with possible problems in emotional control.
Also, stress in youth can irreversibly modify previously existing connections between the hippocampus, medial prefrontal cortex (mPFC), and orbitofrontal cortex (OFC). The interactions between these brain regions are critical to proper cognitive function. ELA poses a risk to normal working memory, learning, memory, and other executive functions by reconstructing circuitry.[7] An example of a neural circuit is the trisynaptic circuit in the hippocampus. Another is the Papez circuit linking the hypothalamus to the limbic lobe. There are several neural circuits in the cortico-basal ganglia-thalamo-cortical loop. These circuits carry information between the cortex, basal ganglia, thalamus, and back to the cortex. The largest structure within the basal ganglia, the striatum, is seen as having its own internal microcircuitry.[8] Neural circuits in the spinal cord called central pattern generators are responsible for controlling motor instructions involved in rhythmic behaviours. Rhythmic behaviours include walking, urination, and ejaculation. The central pattern generators are made up of different groups of spinal interneurons.[9] There are four principal types of neural circuits that are responsible for a broad scope of neural functions. These circuits are a diverging circuit, a converging circuit, a reverberating circuit, and a parallel after-discharge circuit.[10] Circuits can also be classified as forms of feedforward excitation, feedforward inhibition, lateral inhibition, and mutual inhibition. Diverging and converging circuits are types of feedforward excitation. Feedforward excitation refers to the method of travel taken by neuronal signals. It involves a downstream transfer of information.[11] In a diverging circuit, one neuron synapses with a number of postsynaptic cells. Each of these may synapse with many more, making it possible for one neuron to stimulate up to thousands of cells.
This is exemplified in the way that thousands of muscle fibers can be stimulated from the initial input from a single motor neuron.[10] In a converging circuit, inputs from many sources converge into one output, affecting just one neuron or a neuron pool. This type of circuit is exemplified in the respiratory center of the brainstem, which responds to a number of inputs from different sources by giving out an appropriate breathing pattern.[10] A reverberating circuit produces a repetitive output. In a signalling procedure from one neuron to another in a linear sequence, one of the neurons may send a signal back to the initiating neuron. Each time the first neuron fires, the other neurons further down the sequence fire again, sending a signal back to the source. This restimulates the first neuron and also allows the path of transmission to continue to its output. The result is a repetitive pattern that stops only if one or more of the synapses fail, or if an inhibitory feed from another source causes it to stop. This type of reverberating circuit is found in the respiratory center that sends signals to the respiratory muscles, causing inhalation. When the circuit is interrupted by an inhibitory signal, the muscles relax, causing exhalation. This type of circuit may play a part in epileptic seizures.[10] In a parallel after-discharge circuit, a neuron inputs to several chains of neurons. Each chain is made up of a different number of neurons, but their signals converge onto one output neuron. Each synapse in the circuit acts to delay the signal by about 0.5 msec, so that the more synapses there are, the longer the delay to the output neuron. After the input has stopped, the output will go on firing for some time. This type of circuit does not have a feedback loop as the reverberating circuit does. Continued firing after the stimulus has stopped is called after-discharge.
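The after-discharge arithmetic (about 0.5 msec per synapse, per the text) can be sketched as follows; the chain lengths and function names here are hypothetical illustrations:

```python
SYNAPTIC_DELAY_MS = 0.5  # approximate per-synapse delay quoted above

def arrival_times(chain_lengths):
    """Times at which a single input volley reaches the output neuron
    via parallel chains containing different numbers of synapses."""
    return sorted(n * SYNAPTIC_DELAY_MS for n in chain_lengths)

# Chains of 2 to 6 synapses: the output neuron keeps receiving input for
# (6 - 2) * 0.5 = 2 ms after the shortest chain delivers: the after-discharge.
times = arrival_times([2, 3, 4, 5, 6])
assert times == [1.0, 1.5, 2.0, 2.5, 3.0]
assert times[-1] - times[0] == 2.0
```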
This circuit type is found in the reflex arcs of certain reflexes.[10] Different neuroimaging techniques have been developed to investigate the activity of neural circuits and networks. The use of "brain scanners" or functional neuroimaging to investigate the structure or function of the brain is common, either as simply a way of better assessing brain injury with high-resolution pictures, or by examining the relative activations of different brain areas. Such technologies may include functional magnetic resonance imaging (fMRI), brain positron emission tomography (brain PET), and computed axial tomography (CAT) scans. Functional neuroimaging uses specific brain imaging technologies to take scans from the brain, usually when a person is doing a particular task, in an attempt to understand how the activation of particular brain areas is related to the task. Functional neuroimaging mainly uses fMRI, which measures hemodynamic activity (using BOLD-contrast imaging) that is closely linked to neural activity, as well as PET and electroencephalography (EEG). Connectionist models serve as a test platform for different hypotheses of representation, information processing, and signal transmission. Lesioning studies in such models, e.g. artificial neural networks, where parts of the nodes are deliberately destroyed to see how the network performs, can also yield important insights into the working of several cell assemblies. Similarly, simulations of dysfunctional neurotransmitters in neurological conditions (e.g., dopamine in the basal ganglia of Parkinson's patients) can yield insights into the underlying mechanisms for patterns of cognitive deficits observed in the particular patient group. Predictions from these models can be tested in patients or via pharmacological manipulations, and these studies can in turn be used to inform the models, making the process iterative.
The modern balance between the connectionist approach and the single-cell approach in neurobiology has been achieved through a lengthy discussion. In 1972, Barlow announced the single neuron revolution: "our perceptions are caused by the activity of a rather small number of neurons selected from a very large population of predominantly silent cells."[12] This approach was stimulated by the idea of the grandmother cell, put forward two years earlier. Barlow formulated "five dogmas" of neuron doctrine. Recent studies of the 'grandmother cell' and sparse coding phenomena develop and modify these ideas.[13] The single cell experiments used intracranial electrodes in the medial temporal lobe (the hippocampus and surrounding cortex). Modern development of concentration of measure theory (stochastic separation theorems) with applications to artificial neural networks gives a mathematical background to the unexpected effectiveness of small neural ensembles in the high-dimensional brain.[14] Disruptions to neural circuitry caused by changes in neurons and neural networks can lead to the pathogenesis of mental illnesses and neurodegenerative diseases. Modifications to the basal ganglia are often associated with diseases such as Parkinson's disease.[15] Parkinson's disease also involves the elimination of dendritic spines from dopaminergic neurons in the substantia nigra and from medium spiny neurons in the striatum, both located in the basal ganglia. Methods like calcium imaging have identified the dopamine receptors D1 and D2 as involved in the regulation of dendritic spine loss and formation. The loss of dendritic spines negatively impacts synaptic plasticity, learning, memory development, and overall cognitive function.[16] In early stages of Alzheimer's disease and in individuals with mild cognitive impairment, synaptic removal and alterations to typical dendritic spine structure have been observed. Abnormalities to dendritic morphology include damage to neurites and spine loss.
This can extend to the axon and trigger a progressive shrinking process. Variations in the expression levels of Alzheimer's disease-related proteins, including β-secretase, γ-secretase, and amyloid plaques, also alter dendritic spine density. Closer proximity to these proteins further contributes to dendritic abnormalities.[16]
https://en.wikipedia.org/wiki/Neural_circuit
Anonymity[a] describes situations where the acting person's identity is unknown. Anonymity may be created unintentionally through the loss of identifying information due to the passage of time or a destructive event, or intentionally if a person chooses to withhold their identity. There are various situations in which a person might choose to remain anonymous. Acts of charity have been performed anonymously when benefactors do not wish to be acknowledged. A person who feels threatened might attempt to mitigate that threat through anonymity. A witness to a crime might seek to avoid retribution, for example, by anonymously calling a crime tipline. In many other situations (like conversation between strangers, or buying some product or service in a shop), anonymity is traditionally accepted as natural. Some writers have argued that the term "namelessness", though technically correct, does not capture what is more centrally at stake in contexts of anonymity. The important idea here is that a person be non-identifiable, unreachable, or untrackable.[1] Anonymity is also seen as a way to realize certain other values, such as privacy or liberty. An important example of anonymity being not only protected, but enforced, by law is in voting in free elections. Criminals might proceed anonymously to conceal their participation in a crime. In certain situations, however, it may be illegal to remain anonymous. For example, 24 of the U.S. states have "stop and identify" statutes that require persons detained to self-identify when requested by a law enforcement officer, when the person is reasonably suspected of committing a crime. Over the past few years, anonymity tools used on the dark web by criminals and malicious users have drastically altered the ability of law enforcement to use conventional surveillance techniques.[2][3] The term "anonymous message" typically refers to a message that does not reveal its sender.
In many countries, anonymous letters are protected by law and must be delivered as regular letters. In mathematics, in reference to an arbitrary element (e.g., a human, an object, a computer), within a well-defined set (called the "anonymity set"), "anonymity" of that element refers to the property of that element of not being identifiable within this set. If it is not identifiable, then the element is said to be "anonymous". The word anonymous was borrowed into English around 1600 from the Late Latin word "anonymus", from Ancient Greek ᾰ̓νώνῠμος (anṓnumos, "without name"), from ᾰ̓ν- (an-, "un-") with ὄνῠμᾰ (ónuma), the Aeolic and Doric dialectal form of ὄνομᾰ (ónoma, "name"). Sometimes a person may desire a long-term relationship (such as a reputation) with another party without necessarily disclosing personally identifying information to that party. In this case, it may be useful for the person to establish a unique identifier, called a pseudonym. Examples of pseudonyms are pen names, nicknames, credit card numbers, student numbers, bank account numbers, etc. A pseudonym enables the other party to link different messages from the same person and, thereby, to establish a long-term relationship. Pseudonyms are widely used in social networks and other virtual communication, although recently some important service providers like Google have tried to discourage pseudonymity.[4] Someone using a pseudonym would be strictly considered to be using "pseudonymity" not "anonymity", but sometimes the latter is used to refer to both (in general, a situation where the legal identity of the person is disguised). Anonymity may reduce the accountability one perceives to have for their actions, and removes the impact these actions might otherwise have on their reputation. This can have dramatic effects, both useful and harmful to various parties involved.
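The anonymity-set definition given earlier can be made concrete with a toy example (the data and function names below are hypothetical): each element's anonymity is measured by the size of the set of elements indistinguishable from it on the observable attributes.

```python
from collections import Counter

def anonymity_set_sizes(records, attributes):
    """For each record, the size of its anonymity set: the number of
    records sharing the same values on the observable attributes."""
    keys = [tuple(r[a] for a in attributes) for r in records]
    counts = Counter(keys)
    return [counts[k] for k in keys]

people = [
    {"zip": "12345", "age_band": "30-39"},
    {"zip": "12345", "age_band": "30-39"},
    {"zip": "12345", "age_band": "30-39"},
    {"zip": "99999", "age_band": "20-29"},  # unique, hence identifiable
]
assert anonymity_set_sizes(people, ["zip", "age_band"]) == [3, 3, 3, 1]
```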
Thus, anonymity may be used as a psychological tactic by any party to support or discredit any sort of activity or belief. In conversational settings, anonymity may allow people to reveal personal history and feelings without fear of later embarrassment. Electronic conversational media can provide physical isolation, in addition to anonymity. This prevents physical retaliation for remarks, and prevents negative or taboo behavior or discussion from tarnishing the reputation of the speaker. This can be beneficial when discussing very private matters, or taboo subjects, or expressing views or revealing facts that may put someone in physical, financial, or legal danger (such as illegal activity, or unpopular or outlawed political views). In work settings, the three most common forms of anonymous communication are traditional suggestion boxes, written feedback, and Caller ID blocking. Additionally, the appropriateness of anonymous organizational communication varies depending on the use, with organizational surveys or assessments typically perceived as highly appropriate and firing perceived as highly inappropriate. Anonymity use and appropriateness have also been found to be significantly related to the quality of relationships with key others at work.[5] With few perceived negative consequences, anonymous or semi-anonymous forums often provide a soapbox for disruptive conversational behavior. The term "troll" is sometimes used to refer to those who engage in such disruptive behavior. Relative anonymity is often enjoyed in large crowds. Different people have different psychological and philosophical reactions to this development, especially as a modern phenomenon. This anonymity is an important factor in crowd psychology, and behavior in situations such as a riot.
This perceived anonymity can be compromised by technologies such as photography. Groupthink behavior and conformity are also considered to be an established effect of internet anonymity.[6] Anonymity also permits highly trained professionals such as judges to freely express themselves regarding the strategies they employ to perform their jobs objectively.[7] Anonymous commercial transactions can protect the privacy of consumers. Some consumers prefer to use cash when buying everyday goods (like groceries or tools), to prevent sellers from aggregating information or soliciting them in the future. Credit cards are linked to a person's name, and can be used to discover other information, such as postal address, phone number, etc. The ecash system was developed to allow secure anonymous transactions. Another example would be Enymity, which actually makes a purchase on a customer's behalf. When purchasing taboo goods and services, anonymity makes many potential consumers more comfortable with or more willing to engage in the transaction. Many loyalty programs use cards that personally identify the consumer engaging in each transaction (possibly for later solicitation, or for redemption or security purposes), or that act as a numerical pseudonym, for use in data mining. Anonymity can also be used as a protection against legal prosecution. For example, when committing unlawful actions, many criminals attempt to avoid identification by means of obscuring/covering their faces with scarves or masks, and wear gloves or other hand coverings in order to not leave any fingerprints. In organized crime, groups of criminals may collaborate on a certain project without revealing to each other their names or other personally identifiable information. The movie The Thomas Crown Affair depicted a fictional collaboration by people who had never previously met and did not know who had recruited them.
The anonymous purchase of a gun or knife to be used in a crime helps prevent linking an abandoned weapon to the identity of the perpetrator. There are two aspects: in one, giving to a large charitable organization obscures the beneficiary of a donation from the benefactor; in the other, giving anonymously obscures the benefactor both from the beneficiary and from everyone else. Anonymous charity has long been a widespread and durable moral precept of many ethical and religious systems, as well as being in practice a widespread human activity. A benefactor may not wish to establish any relationship with the beneficiary, particularly if the beneficiary is perceived as being unsavory.[8][citation needed] Benefactors may not wish to identify themselves as capable of giving. A benefactor may wish to improve the world, as long as no one knows who did it, out of modesty, wishing to avoid publicity.[9] Another reason for anonymous charity is a benefactor who does not want a charitable organization to pursue them for more donations, sometimes aggressively. Attempts at anonymity are not always met with support from society. Anonymity sometimes clashes with the policies and procedures of governments or private organizations. In the United States, disclosure of identity is required to be able to vote, though the secret ballot prevents disclosure of individual voting patterns. In airports in most countries, passengers are not allowed to board flights unless they have identified themselves to airline or transportation security personnel, typically in the form of the presentation of an identification card. On the other hand, some policies and procedures require anonymity. Stylometric identification of anonymous authors by writing style is a potential risk, which is expected to grow as analytic techniques improve and computing power and text corpora grow.
Authors may resist such identification by practicing adversarial stylometry.[10] When it is necessary to refer to someone who is anonymous, it is typically necessary to create a type of pseudo-identification for that person. In literature, the most common way to state that the identity of an author is unknown is to refer to them as simply "Anonymous". This is usually the case with older texts in which the author is long dead and unable to claim authorship of a work. When the work claims to be that of some famous author, the pseudonymous author is identified as "Pseudo-", as in Pseudo-Dionysius the Areopagite, an author claiming—and long believed—to be Dionysius the Areopagite, an early Christian convert. Anonymus, in its Latin spelling, generally with a specific city designation, is traditionally used by scholars in the humanities to refer to an ancient writer whose name is not known, or to a manuscript of their work. Many such writers have left valuable historical or literary records: an incomplete list of such Anonymi is at Anonymus. In the history of art, many painting workshops can be identified by their characteristic style, and the workshop's output can be discussed and set in chronological order. Sometimes archival research later identifies the name, as when the "Master of Flémalle"—defined by three paintings in the Städelsches Kunstinstitut in Frankfurt—was identified as Robert Campin. The 20th-century art historian Bernard Berenson methodically identified numerous early Renaissance Florentine and Sienese workshops under such sobriquets as "Amico di Sandro" for an anonymous painter in the immediate circle of Sandro Botticelli. In legal cases, a popularly accepted name to use when it is determined that an individual needs to maintain anonymity is "John Doe". This name is often modified to "Jane Doe" when the anonymity-seeker is female. The same names are also commonly used when the identification of a dead person is not known.
The semi-acronym Unsub is used as law enforcement slang for "Unknown Subject of an Investigation". The military often feels a need to honor the remains of soldiers for whom identification is impossible. In many countries, such a memorial is named the Tomb of the Unknown Soldier. Most modern newspapers and magazines attribute their articles to individual editors, or to news agencies. An exception is the British weekly The Economist. All British newspapers run their leaders, or editorials, anonymously. The Economist fully adopts this policy, saying "Many hands write The Economist, but it speaks with a collective voice".[11] The Guardian considers that "people will often speak more honestly if they are allowed to speak anonymously".[12][13] According to Ross Eaman, in his book The A to Z of Journalism, until the mid-19th century, most writers in Great Britain, especially the less well known, did not sign their names to their work in newspapers, magazines and reviews.[14] Most commentary on the Internet is essentially done anonymously, using unidentifiable pseudonyms. However, this has been widely discredited in a study by the University of Birmingham, which found that the number of people who use the internet anonymously is statistically the same as the number of people who use the internet to interact with friends or known contacts. While these usernames can take on an identity of their own, they are sometimes separated and anonymous from the actual author. According to the University of Stockholm, this is creating more freedom of expression and less accountability.[15] Wikipedia is collaboratively written mostly by authors using either unidentifiable pseudonyms or IP address identifiers, although many Wikipedia editors use their real names instead of pseudonyms.
However, the Internet was not designed for anonymity: IP addresses serve as virtual mailing addresses, which means that any time any resource on the Internet is accessed, it is accessed from a particular IP address, and the data traffic patterns to and from IP addresses can be intercepted, monitored, and analysed, even if the content of that traffic is encrypted. This address can be mapped to a particular Internet Service Provider (ISP), and this ISP can then provide information about what customer that IP address was leased to. This does not necessarily implicate a specific individual (because other people could be using that customer's connection, especially if the customer is a public resource, such as a library), but it provides regional information and serves as powerful circumstantial evidence.[citation needed] Anonymizing services such as I2P and Tor address the issue of IP tracking. In short, they work by wrapping packets in multiple layers of encryption. The packet follows a predetermined route through the anonymizing network. Each router sees the immediate previous router as the origin and the immediate next router as the destination. Thus, no router ever knows both the true origin and destination of the packet. This makes these services more secure than centralized anonymizing services (where a central point of knowledge exists).[16] Sites such as Chatroulette, Omegle, and Tinder (which pair up random users for a conversation) capitalized on a fascination with anonymity. Apps like Yik Yak, Secret and Whisper let people share things anonymously or quasi-anonymously, whereas Random let the user explore the web anonymously. Some email providers, like Tuta, also offer the ability to create anonymous email accounts which do not require any personal information from the account holder.[17] Other sites, however, including Facebook and Google+, ask users to sign in with their legal names.
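The layered-encryption idea behind these services can be sketched in a few lines of Python. This is a conceptual toy only: a repeating-key XOR stands in for the real public-key cryptography used by Tor and I2P, and the function and key names are invented for illustration. What it shows is the essential property: the sender applies one layer per hop, and each relay can peel off exactly one layer, so no single relay sees both the plaintext and the sender.

```python
# Conceptual sketch of onion routing's layered encryption. A toy XOR
# "cipher" stands in for real cryptography; do not use this for security.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric toy cipher: XOR with a repeating key (self-inverse).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, route_keys: list) -> bytes:
    # The sender encrypts for the last hop first, so the first relay's
    # layer ends up on the outside of the "onion".
    packet = message
    for key in reversed(route_keys):
        packet = xor_cipher(packet, key)
    return packet

def relay(packet: bytes, key: bytes) -> bytes:
    # Each relay removes only its own layer; it never sees the plaintext
    # unless it is the final hop.
    return xor_cipher(packet, key)

keys = [b"entry", b"middle", b"exit"]   # one key per hop (hypothetical route)
onion = wrap(b"hello world", keys)
for k in keys:                          # packet traverses the route in order
    onion = relay(onion, k)
print(onion)  # b'hello world'
```

Only after the final hop's layer is removed does the original message reappear; any intermediate relay holds an opaque blob plus the address of the next hop.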
In the case of Google+, this requirement led to a controversy known as the nymwars.[18] The prevalence of cyberbullying is often attributed to relative Internet anonymity, due to the fact that potential offenders are able to mask their identities and prevent themselves from being caught. A principal in a high school stated that comments made on these anonymous sites are "especially vicious and hurtful since there is no way to trace their source and it can be disseminated widely."[19] Cyberbullying, as opposed to general bullying, is still a widely debated area of Internet freedom in several states.[20] Though Internet anonymity can provide a harmful environment through which people can hurt others, anonymity can allow for a much safer and more relaxed internet experience. In a study conducted at Carnegie Mellon University, 15 out of 44 participants stated that they choose to be anonymous online because of a prior negative experience during which they did not maintain an anonymous presence.[21] Such experiences include stalking, release of private information by an opposing school political group, or being tricked into traveling to another country for a job that did not exist. Participants in this study stated that they were able to avoid their previous problems by using false identification online.[citation needed] David Chaum has been called the Godfather of anonymity, and he has a claim to be one of the great visionaries of contemporary science. In the early 1980s, while a computer scientist at Berkeley, Chaum predicted the world in which computer networks would make mass surveillance a possibility. As Dr. Joss Wright explains: "David Chaum was very ahead of his time. He predicted in the early 1980s concerns that would arise on the internet 15 or 20 years later."[22] There are some people, though, who consider anonymity on the Internet a danger for our society as a whole.
David Davenport, an assistant professor in the Computer Engineering Department of Bilkent University in Ankara, Turkey, considers that by allowing anonymous Net communication, the fabric of our society is put at risk.[23] "Accountability requires those responsible for any misconduct be identified and brought to justice. However, if people remain anonymous, by definition, they cannot be identified, making it impossible to hold them accountable," he says.[24] As A. Michael Froomkin says: "The regulation of anonymous and pseudonymous communications promises to be one of the most important and contentious Internet-related issues of the next decade".[25][26] Anonymity and pseudonymity can be used for good and bad purposes, and anonymity can in many cases be desirable for one person and not desirable for another person. A company may, for example, not like an employee to divulge information about improper practices within the company, but society as a whole may find it important that such improper practices are publicly exposed. Good purposes of anonymity and pseudonymity:[citation needed] There has always, however, also been a negative side of anonymity: The border between illegal and legal but offensive use is not very sharp, and varies depending on the law in each country.[32] Anonymous (used as a mass noun) is a loosely associated international network of activist and hacktivist entities. A website nominally associated with the group describes it as "an internet gathering" with "a very loose and decentralized command structure that operates on ideas rather than directives".[33] The group became known for a series of well-publicized publicity stunts and distributed denial-of-service (DDoS) attacks on government, religious, and corporate websites. An image commonly associated with Anonymous is the "man without a head", which represents leaderless organization and anonymity.[34] Anonymity is perceived as a right by many, especially anonymity in internet communications.
The partial right to anonymity is legally protected to various degrees in different jurisdictions. The tradition of anonymous speech is older than the United States. Founders Alexander Hamilton, James Madison, and John Jay wrote The Federalist Papers under the pseudonym "Publius", and "the Federal Farmer" spoke up in rebuttal. The US Supreme Court has repeatedly[35][36][37] recognized rights to speak anonymously derived from the First Amendment. The pressure on anonymous communication has grown substantially after the 2001 terrorist attack on the World Trade Center and the subsequent new political climate. Although it is still difficult to oversee their exact implications, measures such as the US Patriot Act, the European Cybercrime Convention and the European Union rules on data retention are only a few of the signs that the exercise of the right to the anonymous exchange of information is under substantial pressure.[41] The above-mentioned 1995 Supreme Court ruling in McIntyre v. Ohio Elections Commission reads:[42] "(...) protections for anonymous speech are vital to democratic discourse. Allowing dissenters to shield their identities frees them to express critical minority views . . . Anonymity is a shield from the tyranny of the majority. . . . It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society." However, anonymous online speech is not without limits. This was clearly demonstrated in a 2008 case in which an anonymous poster on a law-school discussion board stated that two women should be raped, comments that may extend beyond free speech protections.[43] In the case, a Connecticut federal court had to apply a standard to decide whether the poster's identity should be revealed.
There are several tests, however, that the court could apply when considering this issue.[44][45] The right to internet anonymity is also covered by European legislation that recognizes the fundamental rights to data protection, freedom of expression, and freedom of the press. The European Union Charter of Fundamental Rights recognizes in Article 8 (Title II: "Freedoms")[46] the right of everyone to protection of personal data concerning him.[47] The right to privacy is now essentially the individual's right to have and to maintain control over information about him. One of the most controversial international legal acts regarding this subject is the Anti-Counterfeiting Trade Agreement (ACTA). As of February 2015, the treaty had been signed (but not ratified by all) by 31 states as well as the European Union. Japan was, on 4 October 2012, the first to ratify the treaty. It creates an international regime for imposing civil and criminal penalties on Internet counterfeiting and copyright infringement. Although ACTA is intentionally vague, leaving signatories to draw precise rules themselves, critics say it could mean innocent travellers having their laptops searched for unlicensed music, or being jailed for carrying a generic drug. Infringers could be liable for the total loss of potential sales (implying that everyone who buys a counterfeit product would have bought the real thing). It applies to unintentional use of copyright material. It puts the onus on website owners to ensure they comply with laws across several territories. It has been negotiated secretively and outside established international trade bodies, despite EU criticisms.[48] The history of anonymous expression in political dissent is both long and with important effect, as in the Letters of Junius or Voltaire's Candide, or scurrilous as in pasquinades. In the tradition of anonymous British political criticism, The Federalist Papers were anonymously authored by three of America's Founding Fathers.
Without the public discourse on the controversial contents of the U.S. Constitution, ratification would likely have taken much longer as individuals worked through the issues. The United States Declaration of Independence, however, was not anonymous. If it had been unsigned, it might well have been less effective. John Perry Barlow, Joichi Ito, and other U.S. bloggers express very strong support for anonymous editing as one of the basic requirements of open politics as conducted on the Internet.[49] Anonymity is directly related to the concepts of obscurantism and pseudonymity, where an artist or a group attempts to remain anonymous for various reasons, such as adding an element of mystique to themselves or their work, attempting to avoid what is known as the "cult of personality" or hero worship (in which the charisma, good looks, wealth or other unrelated or mildly related aspects of the person are the main reason for interest in their work, rather than the work itself), or to break into a field or area of interest normally dominated by males (as did the famous science fiction author James Tiptree, Jr, who was actually a woman named Alice Bradley Sheldon, and likely JT LeRoy). Some seem to want to avoid the "limelight" of popularity and to live private lives, such as Thomas Pynchon, J. D. Salinger, De Onbekende Beeldhouwer (an anonymous sculptor whose exhibited work in Amsterdam attracted strong attention in the 1980s and 1990s[50]), and DJ duo Daft Punk (1993–2021). For street artist Banksy, "anonymity is vital to him because graffiti is illegal".[51] Anonymity has been used in music by avant-garde ensemble The Residents, Jandek (until 2004), costumed comedy rock band The Radioactive Chicken Heads, and DJs Deadmau5 (1998–present) and Marshmello (2015–present). This is frequently applied in fiction, from The Lone Ranger, Superman, and Batman, where a hidden identity is assumed.
Suppose that only Alice, Bob, and Carol have keys to a bank safe and that, one day, the contents of the safe go missing (without the lock being violated). Without additional information, we cannot know for sure whether it was Alice, Bob or Carol who emptied the safe: each member of {Alice, Bob, Carol} is a possible perpetrator. As long as none of them can be identified as the perpetrator with certainty, the perpetrator remains anonymous within this three-member anonymity set, and the attribution of guilt to any one of them must remain undecided. If Carol has a definite alibi for the time of the perpetration, then we may deduce that it must have been either Alice or Bob who emptied the safe. In this particular case, the perpetrator is not completely anonymous anymore: the anonymity set has shrunk from three members to two, and both Alice and Bob now know "who did it" with certainty.
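The shrinking of the anonymity set can be made concrete with a tiny sketch. This is an illustrative toy, not a standard metric: with no further evidence, suspicion is simply spread uniformly over the remaining suspects, and removing a suspect (Carol's alibi) raises the probability attached to each of the others.

```python
# Toy model of an anonymity set: a uniform prior over remaining suspects.

def suspicion(anonymity_set):
    # Each remaining suspect is equally likely to be the perpetrator.
    return {who: 1 / len(anonymity_set) for who in anonymity_set}

suspects = {"Alice", "Bob", "Carol"}
print(suspicion(suspects))        # every suspect at 1/3

suspects -= {"Carol"}             # Carol's alibi removes her from the set
print(suspicion(suspects))        # Alice and Bob now each at 1/2
```

The smaller the anonymity set, the weaker the anonymity; a set of size one is no anonymity at all.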
https://en.wikipedia.org/wiki/Anonymity
Computer Go is the field of artificial intelligence (AI) dedicated to creating a computer program that plays the traditional board game Go. The field is sharply divided into two eras. Before 2015, the programs of the era were weak. The best efforts of the 1980s and 1990s produced only AIs that could be defeated by beginners, and AIs of the early 2000s were intermediate level at best. Professionals could defeat these programs even given handicaps of 10+ stones in favor of the AI. Many of the algorithms such as alpha-beta minimax that performed well as AIs for checkers and chess fell apart on Go's 19x19 board, as there were too many branching possibilities to consider. Creation of a human professional quality program with the techniques and hardware of the time was out of reach. Some AI researchers speculated that the problem was unsolvable without creation of human-like AI. The application of Monte Carlo tree search to Go algorithms provided a notable improvement in the late 2000s, with programs finally able to achieve a low-dan level: that of an advanced amateur. High-dan amateurs and professionals could still exploit these programs' weaknesses and win consistently, but computer performance had advanced past the intermediate (single-digit kyu) level. The tantalizing unmet goal of defeating the best human players without a handicap, long thought unreachable, brought a burst of renewed interest. The key insight proved to be an application of machine learning and deep learning. DeepMind, a Google acquisition dedicated to AI research, produced AlphaGo in 2015 and announced it to the world in 2016. AlphaGo defeated Lee Sedol, a 9 dan professional, in a no-handicap match in 2016, then defeated Ke Jie in 2017, who had continuously held the world No. 1 ranking for two years. Just as checkers had fallen to machines in 1995 and chess in 1997, computer programs finally conquered humanity's greatest Go champions in 2016–2017.
DeepMind did not release AlphaGo for public use, but various programs have been built since, based on the journal articles DeepMind released describing AlphaGo and its variants. Professional Go players see the game as requiring intuition and creative and strategic thinking.[1][2] It has long been considered a difficult challenge in the field of artificial intelligence (AI) and is considerably more difficult to solve than chess.[3] Many in the field considered Go to require more elements that mimic human thought than chess.[4] Mathematician I. J. Good wrote in 1965:[5] Go on a computer? – In order to programme a computer to play a reasonable game of Go, rather than merely a legal game – it is necessary to formalise the principles of good strategy, or to design a learning programme. The principles are more qualitative and mysterious than in chess, and depend more on judgment. So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess. Prior to 2015, the best Go programs only managed to reach amateur dan level.[6][7] On the small 9×9 board, the computer fared better, and some programs managed to win a fraction of their 9×9 games against professional players. Prior to AlphaGo, some researchers had claimed that computers would never defeat top humans at Go.[8] The first Go program was written by Albert Lindsey Zobrist in 1968 as part of his thesis on pattern recognition.[9] It introduced an influence function to estimate territory and Zobrist hashing to detect ko. In April 1981, Jonathan K Millen published an article in Byte discussing Wally, a Go program with a 15x15 board that fit within the KIM-1 microcomputer's 1K RAM.[10] Bruce F.
Webster published an article in the magazine in November 1984 discussing a Go program he had written for the Apple Macintosh, including the MacFORTH source.[11] Programs for Go were weak; a 1983 article estimated that they were at best equivalent to 20 kyu, the rating of a naive novice player, and often restricted themselves to smaller boards.[12] AIs that played on the Internet Go Server (IGS) on 19x19 boards had around 20–15 kyu strength in 2003, after substantial improvements in hardware.[13] In 1998, very strong players were able to beat computer programs while giving handicaps of 25–30 stones, an enormous handicap that few human players would ever take. There was a case in the 1994 World Computer Go Championship where the winning program, Go Intellect, lost all three games against the youth players while receiving a 15-stone handicap.[14] In general, players who understood and exploited a program's weaknesses could win even through large handicaps.[15] In 2006 (with an article published in 2007), Rémi Coulom produced a new algorithm he called Monte Carlo tree search.[16] In it, a game tree is created as usual of potential futures that branch with every move. However, computers "score" a terminal leaf of the tree by repeated random playouts (similar to Monte Carlo strategies for other problems). The advantage is that such random playouts can be done very quickly. The intuitive objection – that random playouts do not correspond to the actual worth of a position – turned out not to be as fatal to the procedure as expected; the "tree search" side of the algorithm corrected well enough for finding reasonable future game trees to explore. Programs based on this method, such as MoGo and Fuego, saw better performance than the classic AIs that came before them. The best programs could do especially well on the small 9x9 board, which had fewer possibilities to explore. In 2009, the first such programs appeared which could reach and hold low dan-level ranks on the KGS Go Server on the 19x19 board.
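The core idea of scoring a position by fast random playouts can be sketched without any Go machinery at all. The toy below uses a Nim-like game (players alternately take 1 or 2 coins; whoever takes the last coin wins) rather than Go, purely to keep the playout loop short; the function names are invented for illustration. The same principle — average the outcomes of many cheap random games from a position — is what Coulom applied to Go.

```python
import random

# Monte Carlo playout scoring on a toy take-1-or-2 coin game.

def random_playout(pile: int, to_move: int) -> int:
    # Play uniformly random legal moves to the end; return the winner (0 or 1).
    while pile > 0:
        pile -= random.choice([1, 2]) if pile >= 2 else 1
        if pile == 0:
            return to_move       # taker of the last coin wins
        to_move = 1 - to_move
    return to_move

def estimate_win_rate(pile: int, player: int, n: int = 5000) -> float:
    # Fraction of n random playouts won by `player` from this position.
    return sum(random_playout(pile, player) == player for _ in range(n)) / n

random.seed(0)
# With 2 coins, player 0 can always win by taking both, but random playouts
# only find that move half the time, so the estimate hovers near 0.5.
print(estimate_win_rate(2, 0))
```

This also illustrates the "intuitive objection" in the text: raw random playouts misjudge positions that need one precise move, which is exactly what the tree-search layer on top corrects for.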
In 2010, at the 2010 European Go Congress in Finland, MogoTW played 19x19 Go against Catalin Taranu (5p). MogoTW received a seven-stone handicap and won.[17] In 2011, Zen reached 5 dan on the server KGS, playing games of 15 seconds per move. The account which reached that rank uses a cluster version of Zen running on a 26-core machine.[18] In 2012, Zen beat Takemiya Masaki (9p) by 11 points at five stones handicap, followed by a 20-point win at four stones handicap.[19] In 2013, Crazy Stone beat Yoshio Ishida (9p) in a 19×19 game at four stones handicap.[20] The 2014 Codecentric Go Challenge, a best-of-five match in an even 19x19 game, was played between Crazy Stone and Franz-Jozef Dickhut (6d). No stronger player had ever before agreed to play a serious competition against a go program on even terms. Franz-Jozef Dickhut won, though Crazy Stone won the first match by 1.5 points.[21] AlphaGo, developed by Google DeepMind, was a significant advance in computer strength compared to previous Go programs. It used techniques that combined deep learning and Monte Carlo tree search.[22] In October 2015, it defeated Fan Hui, the European Go champion, five times out of five in tournament conditions.[23] In March 2016, AlphaGo beat Lee Sedol in the first three of five matches.[24] This was the first time that a 9-dan master had played a professional game against a computer without handicap.[25] Lee won the fourth match, describing his win as "invaluable".[26] AlphaGo won the final match two days later.[27][28] With this victory, AlphaGo became the first program to beat a 9 dan human professional in a game without handicaps on a full-sized board.
In May 2017, AlphaGo beat Ke Jie, who at the time was ranked top in the world,[29][30] in a three-game match during the Future of Go Summit.[31] In October 2017, DeepMind revealed a new version of AlphaGo, trained only through self-play, that had surpassed all previous versions, beating the Ke Jie version in 89 out of 100 games.[32] After the basic principles of AlphaGo were published in the journal Nature, other teams have been able to produce high-level programs. Work on Go AI since has largely consisted of emulating the techniques used to build AlphaGo, which proved so much stronger than everything else. By 2017, both Zen and Tencent's project Fine Art were capable of defeating very high-level professionals some of the time. The open source Leela Zero engine was created as well. For a long time, it was a widely held opinion that computer Go posed a problem fundamentally different from computer chess. Many considered a strong Go-playing program something that could be achieved only in the far future, as a result of fundamental advances in general artificial intelligence technology. Those who thought the problem feasible believed that domain knowledge would be required to be effective against human experts. Therefore, a large part of the computer Go development effort was during these times focused on ways of representing human-like expert knowledge and combining this with local search to answer questions of a tactical nature. The result of this were programs that handled many specific situations well but which had very pronounced weaknesses in their overall handling of the game. Also, these classical programs gained almost nothing from increases in available computing power. Progress in the field was generally slow. The large board (19×19, 361 intersections) is often noted as one of the primary reasons why a strong program is hard to create.
The large board size prevents an alpha-beta searcher from achieving deep look-ahead without significant search extensions or pruning heuristics. In 2002, a computer program called MIGOS (MIni GO Solver) completely solved the game of Go for the 5×5 board. Black wins, taking the whole board.[33] Continuing the comparison to chess, Go moves are not as limited by the rules of the game. For the first move in chess, the player has twenty choices. Go players begin with a choice of 55 distinct legal moves, accounting for symmetry. This number rises quickly as symmetry is broken, and soon almost all of the 361 points of the board must be evaluated. One of the most basic tasks in a game is to assess a board position: which side is favored, and by how much? In chess, many future positions in a tree are direct wins for one side, and boards have a reasonable heuristic for evaluation in simple material counting, as well as certain positional factors such as pawn structure. A future where one side has lost their queen for no benefit clearly favors the other side. These types of positional evaluation rules cannot efficiently be applied to Go. The value of a Go position depends on a complex analysis to determine whether or not a group is alive, which stones can be connected to one another, and heuristics around the extent to which a strong position has influence, or the extent to which a weak position can be attacked. A stone placed might not have immediate influence, but after many moves could become highly important in retrospect as other areas of the board take shape. Poor evaluation of board states will cause the AI to work toward positions it incorrectly believes favor it, but actually do not. One of the main concerns for a Go player is which groups of stones can be kept alive and which can be captured. This general class of problems is known as life and death. Knowledge-based AI systems sometimes attempted to understand the life and death status of groups on the board.
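The "55 distinct first moves" figure can be checked directly: it is the number of orbits of the 361 intersections under the board's eight symmetries (rotations and reflections), which Burnside's lemma gives as the average number of points each symmetry fixes. A short sketch:

```python
# Verify the count of symmetry-distinct first moves on a 19x19 Go board
# using Burnside's lemma: orbits = average number of fixed points over
# the 8 symmetries of the square.

def fixed_points(transform, n=19):
    # Count intersections mapped to themselves by one symmetry.
    return sum(transform(x, y, n) == (x, y)
               for x in range(n) for y in range(n))

symmetries = [
    lambda x, y, n: (x, y),                  # identity (fixes all 361)
    lambda x, y, n: (y, n - 1 - x),          # rotate 90  (fixes center only)
    lambda x, y, n: (n - 1 - x, n - 1 - y),  # rotate 180 (fixes center only)
    lambda x, y, n: (n - 1 - y, x),          # rotate 270 (fixes center only)
    lambda x, y, n: (n - 1 - x, y),          # horizontal flip (fixes a line)
    lambda x, y, n: (x, n - 1 - y),          # vertical flip   (fixes a line)
    lambda x, y, n: (y, x),                  # main-diagonal flip
    lambda x, y, n: (n - 1 - y, n - 1 - x),  # anti-diagonal flip
]

orbits = sum(fixed_points(t) for t in symmetries) // len(symmetries)
print(orbits)  # 55
```

The arithmetic is (361 + 1 + 1 + 1 + 19·4) / 8 = 440 / 8 = 55, matching the figure in the text.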
The most direct approach is to perform a tree search on the moves which potentially affect the stones in question, and then to record the status of the stones at the end of the main line of play. However, within time and memory constraints, it is not generally possible to determine with complete accuracy which moves could affect the 'life' of a group of stones. This implies that some heuristic must be applied to select which moves to consider. The net effect is that for any given program, there is a trade-off between playing speed and life and death reading abilities. An issue that all Go programs must tackle is how to represent the current state of the game. The most direct way of representing a board is as a one- or two-dimensional array, where elements in the array represent points on the board, and can take on a value corresponding to a white stone, a black stone, or an empty intersection. Additional data is needed to store how many stones have been captured, whose turn it is, and which intersections are illegal due to the Ko rule. Machine learning programs generally stop at this simplest form and let the learned model come to its own understanding of the meaning of the board, typically using Monte Carlo playouts to "score" a board as good or bad for a player. "Classic" AI programs that attempted to directly model a human's strategy might go further, however, layering on data such as stones believed to be dead, stones that are unconditionally alive, stones in a seki state of mutual life, and so forth in their representation of the state of the game. Historically, symbolic artificial intelligence techniques have been used to approach the problem of Go AI. Neural networks began to be tried as an alternative approach in the 2000s decade, as they required immense computing power that was expensive-to-impossible to reach in earlier decades.
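The representation described above — a 2-D array of points plus the extra state the array alone cannot carry — can be sketched as follows. The class and field names are illustrative, not taken from any particular engine, and capture and ko-detection logic is omitted to keep the sketch minimal.

```python
# Minimal sketch of a Go game-state representation: a 2-D array of point
# colors plus captures, side to move, and the ko point. Capture logic is
# deliberately omitted.

EMPTY, BLACK, WHITE = 0, 1, 2

class GoState:
    def __init__(self, size: int = 19):
        self.size = size
        self.board = [[EMPTY] * size for _ in range(size)]  # point colors
        self.captures = {BLACK: 0, WHITE: 0}  # stones each side has taken
        self.to_move = BLACK                  # whose turn it is
        self.ko_point = None                  # point illegal due to ko, if any

    def play(self, x: int, y: int) -> None:
        # Place a stone for the side to move (no capture handling here).
        if self.board[y][x] != EMPTY or (x, y) == self.ko_point:
            raise ValueError("illegal move")
        self.board[y][x] = self.to_move
        self.to_move = WHITE if self.to_move == BLACK else BLACK

state = GoState()
state.play(3, 3)
print(state.board[3][3], state.to_move)  # 1 2
```

A real engine would add capture resolution, suicide checks, and (as the next paragraphs discuss) a hash of the position for transposition tables.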
These approaches attempt to mitigate the problems of the game of Go having a high branching factor and numerous other difficulties. The only choice a program needs to make is where to place its next stone. However, this decision is made difficult by the wide range of impacts a single stone can have across the entire board, and the complex interactions various stones' groups can have with each other. Various architectures have arisen for handling this problem. Popular techniques and design philosophies include: One traditional AI technique for creating game playing software is to use a minimax tree search. This involves playing out all hypothetical moves on the board up to a certain point, then using an evaluation function to estimate the value of that position for the current player. The move which leads to the best hypothetical board is selected, and the process is repeated each turn. While tree searches have been very effective in computer chess, they have seen less success in computer Go programs. This is partly because it has traditionally been difficult to create an effective evaluation function for a Go board, and partly because the large number of possible moves each side can make leads to a high branching factor. This makes this technique very computationally expensive. Because of this, many programs which use search trees extensively can only play on the smaller 9×9 board, rather than full 19×19 ones. There are several techniques which can greatly improve the performance of search trees in terms of both speed and memory. Pruning techniques such as alpha–beta pruning, Principal Variation Search, and MTD(f) can reduce the effective branching factor without loss of strength. In tactical areas such as life and death, Go is particularly amenable to caching techniques such as transposition tables. These can reduce the amount of repeated effort, especially when combined with an iterative deepening approach.
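The minimax-with-alpha-beta scheme described above can be sketched generically. The example below runs on a hand-built toy game tree (nested lists as internal nodes, integers as leaf evaluations) rather than a Go position — the section's point being precisely that on Go-sized branching factors this search becomes impractical:

```python
# Generic alpha-beta minimax on a toy game tree. Internal nodes are lists
# of children; leaves are integer evaluation-function results.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, int):              # leaf: evaluation function result
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cutoff: opponent avoids this
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                  # alpha cutoff
            break
    return value

tree = [[3, 5], [2, 9], [0, 7]]            # depth-2 tree, maximizer to move
print(alphabeta(tree))  # 3
```

The cutoffs skip branches that cannot change the result — the "reduce the effective branching factor without loss of strength" property the text attributes to alpha-beta pruning.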
In order to quickly store a full-sized Go board in a transposition table, a hashing technique for mathematically summarizing the board is generally necessary. Zobrist hashing is very popular in Go programs because it has low collision rates, and can be iteratively updated at each move with just two XORs, rather than being calculated from scratch. Even using these performance-enhancing techniques, full tree searches on a full-sized board are still prohibitively slow. Searches can be sped up by using large amounts of domain-specific pruning techniques, such as not considering moves where your opponent is already strong, and selective extensions, like always considering moves next to groups of stones which are about to be captured. However, both of these options introduce a significant risk of not considering a vital move which would have changed the course of the game. Results of computer competitions show that pattern matching techniques for choosing a handful of appropriate moves combined with fast localized tactical searches (explained above) were once sufficient to produce a competitive program. For example, GNU Go was competitive until 2008. Human novices often learn from the game records of old games played by master players. AI work in the 1990s often involved attempting to "teach" the AI human-style heuristics of Go knowledge. In 1996, Tim Klinger and David Mechner acknowledged the beginner-level strength of the best AIs and argued that "it is our belief that with better tools for representing and maintaining Go knowledge, it will be possible to develop stronger Go programs."[34] They proposed two ways: recognizing common configurations of stones and their positions, and concentrating on local battles. In 2001, one paper concluded that "Go programs are still lacking in both quality and quantity of knowledge," and that fixing this would improve Go AI performance.[35] In theory, the use of expert knowledge would improve Go software.
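Zobrist hashing as described is simple to sketch: assign one random bitstring to every (point, color) pair, and define the board hash as the XOR of the entries for the stones present. Placing or removing a stone then updates the hash with a single XOR, which is where the incremental-update property comes from. The variable names below are illustrative.

```python
import random

# Zobrist hashing sketch: a random 64-bit value per (point, color); the
# board hash is the XOR of entries for every stone on the board.

random.seed(42)
SIZE, COLORS = 19, 2            # 19x19 board, colors 0 (black) and 1 (white)
table = [[[random.getrandbits(64) for _ in range(COLORS)]
          for _ in range(SIZE)] for _ in range(SIZE)]

def full_hash(stones):
    # stones: {(x, y): color} -- recompute the hash from scratch.
    h = 0
    for (x, y), color in stones.items():
        h ^= table[x][y][color]
    return h

stones = {(3, 3): 0, (15, 15): 1}
h = full_hash(stones)

# Incremental update: a white stone appearing at (2, 3) XORs one entry in.
h ^= table[2][3][1]
stones[(2, 3)] = 1
assert h == full_hash(stones)   # incremental result == from-scratch result
print(hex(h))
```

Because XOR is its own inverse, capturing a stone XORs the same entry back out, so a move that places one stone and captures another still costs only a couple of XORs, as the text states.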
Hundreds of guidelines and rules of thumb for strong play have been formulated by both high-level amateurs and professionals. The programmer's task is to take these heuristics, formalize them into computer code, and utilize pattern matching and pattern recognition algorithms to recognize when these rules apply. It is also important to be able to "score" these heuristics, so that when they offer conflicting advice, the system has ways to determine which heuristic is more important and applicable to the situation. Most of the relatively successful results come from programmers' individual skills at Go and their personal conjectures about Go, but not from formal mathematical assertions; they are trying to make the computer mimic the way they play Go. Competitive programs around 2001 could contain 50–100 modules that dealt with different aspects and strategies of the game, such as joseki.[35] Some examples of programs which have relied heavily on expert knowledge are Handtalk (later known as Goemate), The Many Faces of Go, Go Intellect, and Go++, each of which has at some point been considered the world's best Go program. However, these methods ultimately had diminishing returns, and never really advanced past an intermediate level at best on a full-sized board. One particular problem was overall game strategy. Even if an expert system recognizes a pattern and knows how to play a local skirmish, it may miss a looming deeper strategic problem. The result is a program whose strength is less than the sum of its parts; while moves may be good on an individual tactical basis, the program can be tricked and maneuvered into ceding too much in exchange, and find itself in an overall losing position. As the 2001 survey put it, "just one bad move can ruin a good game. Program performance over a full game can be much lower than master level."[35] One major alternative to using hand-coded knowledge and searches is the use of Monte Carlo methods.
This is done by generating a list of potential moves, and for each move playing out thousands of games at random on the resulting board. The move which leads to the best set of random games for the current player is chosen as the best move. No potentially fallible knowledge-based system is required. However, because the moves used for evaluation are generated at random, it is possible that a move which would be excellent except for one specific opponent response would be mistakenly evaluated as a good move. The result is programs which are strong in an overall strategic sense, but imperfect tactically.[citation needed] This problem can be mitigated by adding some domain knowledge in the move generation and a greater level of search depth on top of the random evolution. Some programs which use Monte-Carlo techniques are Fuego,[36] The Many Faces of Go v12,[37] Leela,[38] MoGo,[39] Crazy Stone, MyGoFriend,[40] and Zen. In 2006, a new search technique, upper confidence bounds applied to trees (UCT),[41] was developed and applied to many 9x9 Monte-Carlo Go programs with excellent results. UCT uses the results of the playouts collected so far to guide the search along the more successful lines of play, while still allowing alternative lines to be explored. The UCT technique along with many other optimizations for playing on the larger 19x19 board has led MoGo to become one of the strongest research programs. Successful early applications of UCT methods to 19x19 Go include MoGo, Crazy Stone, and Mango.[42] MoGo won the 2007 Computer Olympiad and won one (out of three) blitz games against Guo Juan, 5th Dan Pro, in the much less complex 9x9 Go. The Many Faces of Go[43] won the 2008 Computer Olympiad after adding UCT search to its traditional knowledge-based engine. Monte-Carlo based Go engines have a reputation of being much more willing to play tenuki, moves elsewhere on the board, rather than continue a local fight, than human players.
This was often perceived as a weakness early in these programs' existence.[44] That said, this tendency has persisted in AlphaGo's playstyle with dominant results, so this may be more of a "quirk" than a "weakness."[45] The skill level of knowledge-based systems is closely linked to the knowledge of their programmers and associated domain experts. This limitation has made it difficult to program truly strong AIs. A different path is to use machine learning techniques. In these, the only thing that the programmers need to program are the rules and simple scoring algorithms of how to analyze the worth of a position. The software then, in theory, automatically generates its own sense of patterns, heuristics, and strategies. This is generally done by allowing a neural network or genetic algorithm to either review a large database of professional games, or play many games against itself or other people or programs. These algorithms are then able to utilize this data as a means of improving their performance. Machine learning techniques can also be used in a less ambitious context to tune specific parameters of programs that rely mainly on other techniques. For example, Crazy Stone learns move generation patterns from several hundred sample games, using a generalization of the Elo rating system.[46] The most famous example of this approach is AlphaGo, which proved far more effective than previous AIs. In its first version, it had one layer that analyzed millions of existing positions to determine likely moves to prioritize as worthy of further analysis, and another layer that tried to optimize its own winning chances using the suggested likely moves from the first layer. AlphaGo used Monte Carlo tree search to score the resulting positions. A later version of AlphaGo, AlphaGo Zero, eschewed learning from existing Go games, and instead learnt only from playing itself repeatedly. Other earlier programs using neural nets include NeuroGo and WinHonte.
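The UCT selection rule described earlier, which balances a move's observed win rate against an exploration bonus for under-sampled moves, can be sketched as follows; the playout statistics, exploration constant, and move names here are invented for illustration:

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """Per-child UCT score: exploitation term plus exploration bonus."""
    if visits == 0:
        return float("inf")          # untried moves are explored first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children, parent_visits):
    """Pick the child move with the highest UCB score.

    `children` maps a move to its (wins, visits) playout statistics."""
    return max(children, key=lambda m: ucb1(*children[m], parent_visits))

stats = {"A": (7, 10), "B": (4, 10), "C": (0, 0)}
assert select(stats, 20) == "C"      # the unvisited move is tried first
stats["C"] = (1, 5)
assert select(stats, 25) == "A"      # now the best win rate dominates
```

In a full engine this rule guides the descent through the search tree; a random playout is then run from the selected leaf and its win/loss result is propagated back up to update the statistics along the path.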
Computer Go research results are being applied to other similar fields such as cognitive science, pattern recognition and machine learning.[47] Combinatorial game theory, a branch of applied mathematics, is a topic relevant to computer Go.[35] John H. Conway suggested applying surreal numbers to analysis of the endgame in Go. This idea has been further developed by Elwyn R. Berlekamp and David Wolfe in their book Mathematical Go.[48] Go endgames have been proven to be PSPACE-hard if the absolute best move must be calculated on an arbitrary mostly filled board. Certain complicated situations such as Triple Ko, Quadruple Ko, Molasses Ko, and Moonshine Life make this problem difficult.[49] (In practice, strong Monte Carlo algorithms can still handle normal Go endgame situations well enough, and the most complicated classes of life-and-death endgame problems are unlikely to come up in a high-level game.)[50] Various difficult combinatorial problems (any NP-hard problem) can be converted to Go-like problems on a sufficiently large board; however, the same is true for other abstract board games, including chess and minesweeper, when suitably generalized to a board of arbitrary size. NP-complete problems do not tend in their general case to be easier for unaided humans than for suitably programmed computers: unaided humans are much worse than computers at solving, for example, instances of the subset sum problem.[51][52] Several annual competitions take place between Go computer programs, including Go events at the Computer Olympiad. Regular, less formal competitions between programs used to occur on the KGS Go Server[60] (monthly) and the Computer Go Server[61] (continuous). Many programs are available that allow computer Go engines to play against each other; they almost always communicate via the Go Text Protocol (GTP). The first computer Go competition was sponsored by Acornsoft,[62] and the first regular ones by USENIX. They ran from 1984 to 1988.
These competitions introduced Nemesis, the first competitive Go program from Bruce Wilcox, and G2.5 by David Fotland, which would later evolve into Cosmos and The Many Faces of Go. One of the early drivers of computer Go research was the Ing Prize, a relatively large money award sponsored by Taiwanese banker Ing Chang-ki, offered annually between 1985 and 2000 at the World Computer Go Congress (or Ing Cup). The winner of this tournament was allowed to challenge young players at a handicap in a short match. If the computer won the match, the prize was awarded and a new prize announced: a larger prize for beating the players at a lesser handicap. The series of Ing prizes was set to expire either 1) in the year 2000 or 2) when a program could beat a 1-dan professional at no handicap for 40,000,000 NT dollars. The last winner was Handtalk in 1997, claiming 250,000 NT dollars for winning an 11-stone handicap match against three 11–13-year-old amateur 2–6 dans. At the time the prize expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a nine-stone handicap match.[63] Many other large regional Go tournaments ("congresses") had an attached computer Go event. The European Go Congress has sponsored a computer tournament since 1987, and the USENIX event evolved into the US/North American Computer Go Championship, held annually from 1988 to 2000 at the US Go Congress. Japan started sponsoring computer Go competitions in 1995. The FOST Cup was held annually from 1995 to 1999 in Tokyo. That tournament was supplanted by the Gifu Challenge, which was held annually from 2003 to 2006 in Ogaki, Gifu. The Computer Go UEC Cup has been held annually since 2007. When two computers play a game of Go against each other, the ideal is to treat the game in a manner identical to two humans playing while avoiding any intervention from actual humans. However, this can be difficult during end game scoring.
The main problem is that Go playing software, which usually communicates using the standardized Go Text Protocol (GTP), will not always agree with respect to the alive or dead status of stones. While there is no general way for two different programs to "talk it out" and resolve the conflict, this problem is avoided for the most part by using Chinese, Tromp-Taylor, or American Go Association (AGA) rules, in which continued play (without penalty) is required until there is no more disagreement on the status of any stones on the board. In practice, such as on the KGS Go Server, the server can mediate a dispute by sending a special GTP command to the two client programs indicating they should continue placing stones until there is no question about the status of any particular group (all dead stones have been captured). The CGOS Go Server usually sees programs resign before a game has even reached the scoring phase, but nevertheless supports a modified version of Tromp-Taylor rules requiring a full play out. These rule sets mean that a program which was in a winning position at the end of the game under Japanese rules (when both players have passed) could theoretically lose because of poor play in the resolution phase, but this is very unlikely and considered a normal part of the game under all of the area rule sets. The main drawback to the above system is that some rule sets (such as the traditional Japanese rules) penalize the players for making these extra moves, precluding the use of additional playout for two computers. Nevertheless, most modern Go programs support Japanese rules against humans. Historically, another method for resolving this problem was to have an expert human judge the final board. However, this introduces subjectivity into the results and the risk that the expert would miss something the program saw.
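As a rough illustration of the protocol layer involved, the snippet below formats and parses GTP-style text lines. `final_status_list` is a standard GTP version 2 command; the exact cleanup command a server sends (for example, KGS's `kgs-genmove_cleanup` extension) varies by server, and the engine output shown is invented:

```python
def gtp_command(cmd_id, name, *args):
    """Format a GTP request line: "[id] command_name arguments"."""
    parts = [str(cmd_id), name, *args]
    return " ".join(parts) + "\n"

def parse_response(line):
    """GTP success responses start with '='; errors start with '?'."""
    status, _, payload = line.strip().partition(" ")
    return status.startswith("="), payload

# Ask an engine which stones it considers dead:
assert gtp_command(1, "final_status_list", "dead") == "1 final_status_list dead\n"

# A hypothetical engine reply listing three dead stones:
ok, payload = parse_response("=1 D4 E5 Q16")
assert ok and payload == "D4 E5 Q16"
```

When two engines return different dead-stone lists, the mediation described above amounts to the server issuing further move-generation commands until both engines' `final_status_list` answers agree.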
https://en.wikipedia.org/wiki/Computer_Go
The Capability Maturity Model (CMM) is a development model created in 1986 after a study of data collected from organizations that contracted with the U.S. Department of Defense, which funded the research. The term "maturity" relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes. The model's aim is to improve existing software development processes, but it can also be applied to other processes. In 2006, the Software Engineering Institute at Carnegie Mellon University developed the Capability Maturity Model Integration, which has largely superseded the CMM and addresses some of its drawbacks.[1] The Capability Maturity Model was originally developed as a tool for objectively assessing the ability of government contractors' processes to implement a contracted software project. The model is based on the process maturity framework first described in IEEE Software[2] and, later, in the 1989 book Managing the Software Process by Watts Humphrey. It was later published as an article in 1993[3] and as a book by the same authors in 1994.[4] Though the model comes from the field of software development, it is also used as a model to aid in business processes generally, and has also been used extensively worldwide in government offices, commerce, and industry.[5][6] In the 1980s, the use of computers grew more widespread, more flexible and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. Many processes for software development were in their infancy, with few standard or "best practice" approaches defined. As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its early years, and the ambitions for project scale and complexity exceeded the market capability to deliver adequate products within a planned budget.
Individuals such asEdward Yourdon,[7]Larry Constantine,Gerald Weinberg,[8]Tom DeMarco,[9]andDavid Parnasbegan to publish articles and books with research results in an attempt to professionalize the software-development processes.[5][10] In the 1980s, several US military projects involving software subcontractors ran over-budget and were completed far later than planned, if at all. In an effort to determine why this was occurring, theUnited States Air Forcefunded a study at the Software Engineering Institute (SEI). The first application of a staged maturity model to IT was not by CMU/SEI, but rather byRichard L. Nolan, who, in 1973 published thestages of growth modelfor IT organizations.[11] Watts Humphreybegan developing his process maturity concepts during the later stages of his 27-year career at IBM.[12] Active development of the model by the US Department of Defense Software Engineering Institute (SEI) began in 1986 when Humphrey joined theSoftware Engineering Institutelocated at Carnegie Mellon University inPittsburgh, Pennsylvaniaafter retiring from IBM. At the request of the U.S. Air Force he began formalizing his Process Maturity Framework to aid the U.S. Department of Defense in evaluating the capability of software contractors as part of awarding contracts. The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. Humphrey based this framework on the earlierQuality Management Maturity Griddeveloped byPhilip B. Crosbyin his book "Quality is Free".[13]Humphrey's approach differed because of his unique insight that organizations mature their processes in stages based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently. 
The CMM has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance. Watts Humphrey's Capability Maturity Model (CMM) was published in 1988[14] and as a book in 1989, in Managing the Software Process.[15] Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute. The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being published in July 1993.[3] The CMM was published as a book[4] in 1994 by the same authors, Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis. The CMM model's application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple models for software development processes; thus the CMMI model has superseded the CMM model, though the CMM model continues to be a general theoretical process capability model used in the public domain.[16][citation needed][17] In 2016, the responsibility for CMMI was transferred to the Information Systems Audit and Control Association (ISACA). ISACA subsequently released CMMI v2.0 in 2021. It was upgraded again to CMMI v3.0 in 2023. CMMI now places a greater emphasis on the process architecture, which is typically realized as a process diagram. Copies of CMMI are available now only by subscription. The CMMI was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project.
Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity ofprocess(e.g.,IT service managementprocesses) in IS/IT (and other) organizations. Amaturity modelcan be viewed as a set of structured levels that describe how well the behaviors, practices and processes of an organization can reliably and sustainably produce required outcomes. A maturity model can be used as a benchmark for comparison and as an aid to understanding - for example, for comparative assessment of different organizations where there is something in common that can be used as a basis for comparison. In the case of the CMM, for example, the basis for comparison would be the organizations' software development processes. The model involves five aspects: There are five levels defined along the continuum of the model and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief".[18] Within each of these maturity levels are Key Process Areas which characterise that level, and for each such area there are five factors: goals, commitment, ability, measurement, and verification. These are not necessarily unique to CMMI, representing — as they do — the stages that organizations must go through on the way to becoming mature. The model provides a theoretical continuum along which process maturity can be developed incrementally from one level to the next. Skipping levels is not allowed/feasible. Between 2008 and 2019, about 12% of appraisals given were at maturity levels 4 and 5.[19][20] The model was originally intended to evaluate the ability of government contractors to perform a software project. 
It has been used for and may be suited to that purpose, but critics[who?]pointed out that process maturity according to the CMM was not necessarily mandatory for successful software development. The software process framework documented is intended to guide those wishing to assess an organization's or project's consistency with the Key Process Areas. For each maturity level there are five checklist types:
https://en.wikipedia.org/wiki/Capability_Maturity_Model
Roll-on/roll-off (RORO or ro-ro) ships are cargo ships designed to carry wheeled cargo, such as cars, motorcycles, trucks, semi-trailer trucks, buses, trailers, and railroad cars, that are driven on and off the ship on their own wheels or using a platform vehicle, such as a self-propelled modular transporter. This is in contrast to lift-on/lift-off (LoLo) vessels, which use a crane to load and unload cargo. RORO vessels have either built-in or shore-based ramps or ferry slips that allow the cargo to be efficiently rolled on and off the vessel when in port. While smaller ferries that operate across rivers and other short distances often have built-in ramps, the term RORO is generally reserved for large seagoing vessels. The ramps and doors may be located in the stern, bow, or sides, or any combination thereof. Types of RORO vessels include ferries, cruise ferries, cargo ships, barges, and RORO service for air/railway deliveries. New automobiles that are transported by ship are often moved on a large type of RORO called a pure car carrier (PCC) or pure car/truck carrier (PCTC). Elsewhere in the shipping industry, cargo is normally measured by tonnage or by the tonne, but RORO cargo is typically measured in lanes in metres (LIMs). This is calculated by multiplying the cargo length in metres by the number of decks and by its width in lanes (lane width differs from vessel to vessel, and there are several industry standards). On PCCs, cargo capacity is often measured in RT or RT43 units (based on the 1966 Toyota Corona, the first mass-produced car to be shipped in specialised car carriers and used as the basis of RORO vessel size; 1 RT is approximately 4 m of lane space required to store a 1.5 m wide Toyota Corona) or in car-equivalent units (CEU). The largest RORO passenger ferry is MS Color Magic, a 75,100 GT cruise ferry that entered service in September 2007 for Color Line.
Built in Finland by Aker Finnyards, it is 223.70 m (733 ft 11 in) long and 35 m (114 ft 10 in) wide, and can carry 550 cars, or 1,270 lane meters of cargo.[1] The RORO passenger ferry with the greatest car-carrying capacity is Ulysses (named after a novel by James Joyce), owned by Irish Ferries. Ulysses entered service on 25 March 2001 and operates between Dublin and Holyhead. The 50,938 GT ship is 209.02 m (685 ft 9 in) long and 31.84 m (104 ft 6 in) wide, and can carry 1,342 cars/4,101 lane meters of cargo.[2] The first cargo ships specially fitted for the transport of large quantities of cars came into service in the early 1960s. These ships still had their own loading gear and so-called hanging decks inside. They were, for example, chartered by the German Volkswagen AG to transport vehicles to the U.S. and Canada. During the 1970s, the market for exporting and importing cars increased dramatically, and the number and types of ROROs grew correspondingly. In 1970 Japan's K Line built the Toyota Maru No. 10, Japan's first pure car carrier, and in 1973 built the European Highway, the largest pure car carrier (PCC) at that time, which carried 4,200 automobiles. Today's pure car carriers and their close cousins, the pure car/truck carriers (PCTC), are distinctive ships with a box-like superstructure running the entire length and breadth of the hull, fully enclosing the cargo. They typically have a stern ramp and a side ramp for dual loading of thousands of vehicles (such as cars, trucks, heavy machinery, tracked units, Mafi roll trailers, and loose statics), and extensive automatic fire control systems. The PCTC has liftable decks to increase vertical clearance, as well as heavier decks for "high-and-heavy" cargo. A 6,500-unit car ship, with 12 decks, can have three decks which can take cargo up to 150 short tons (136 t; 134 long tons), with liftable panels to increase clearance from 1.7 to 6.7 m (5 ft 7 in to 22 ft 0 in) on some decks.
Lifting decks to accommodate higher cargo reduces the total capacity. These vessels can achieve a cruising speed of 16 knots (18 mph; 30 km/h) at eco-speed, while at full speed they can achieve more than 19 knots (22 mph; 35 km/h). As of 7 August 2024, the largest LCTC was the Höegh Aurora, the inaugural vessel of a planned class of twelve, each with a capacity of 9,100 CEU.[3] Meanwhile, the Marine Design & Research Institute of China (MARIC) is developing a new vessel class with a capacity of 12,800 CEU. The design received Approval in Principle (AiP) from Lloyd's Register in June 2024.[4] The car carrier Auriga Leader, belonging to Nippon Yusen Kaisha, built in 2008 with a capacity of 6,200 cars, is the world's first partially solar-powered ship.[5] The seagoing RORO car ferry, with large external doors close to the waterline and open vehicle decks with few internal bulkheads, has a reputation for being a high-risk design, to the point where the acronym is sometimes derisively expanded to "roll on/roll over".[6] An improperly secured loading door can cause a ship to take on water and sink, as happened in 1987 with MS Herald of Free Enterprise. Water sloshing on the vehicle deck can set up a free surface effect, making the ship unstable and causing it to capsize. Free surface water on the vehicle deck was determined by the court of inquiry to be the immediate cause of the 1968 capsize of the TEV Wahine in New Zealand.[7] It also contributed to the wreck of MS Estonia. Despite these inherent risks, the very high freeboard raises the seaworthiness of these vessels. For example, the car carrier MV Cougar Ace listed 60 degrees to its port side in 2006, but did not sink, since its high enclosed sides prevented water from entering. In late January 2016 MV Modern Express was listing off France after cargo shifted on the ship.
Salvage crews secured the vessel and it was hauled into the port of Bilbao, Spain.[8] At first, wheeled vehicles carried as cargo on oceangoing ships were treated like any other cargo. Automobiles had their fuel tanks emptied and their batteries disconnected before being hoisted into the ship's hold, where they were chocked and secured. This process was tedious and difficult, and vehicles were subject to damage and could not be used for routine travel. An early roll-on/roll-off service was atrain ferry, started in 1833 by theMonkland and Kirkintilloch Railway, which operated a wagon ferry on theForth and Clyde CanalinScotland.[9][page needed] The first modern train ferry wasLeviathan, built in 1849. TheEdinburgh, Leith and Newhaven Railwaywas formed in 1842 and the company wished to extend theEast Coast Main Linefurther north toDundeeandAberdeen. As bridge technology was not yet capable enough to provide adequate support for the crossing over theFirth of Forth, which was roughly five miles across, a different solution had to be found, primarily for the transport of goods, where efficiency was key. The company hired the up-and-coming civil engineerThomas Bouchwho argued for a train ferry with a roll-on/roll-off mechanism to maximise the efficiency of the system. Ferries were to be custom-built, with railway lines and matching harbour facilities at both ends to allow the rolling stock to easily drive on and off.[10]To compensate for the changingtides, adjustable ramps were positioned at the harbours and the gantry structure height was varied by moving it along the slipway. 
The wagons were loaded on and off with the use ofstationary steam engines.[10][9][page needed] Although others had had similar ideas, Bouch was the first to put them into effect, and did so with an attention to detail (such as design of theferry slip) which led a subsequent President of theInstitution of Civil Engineers[11]to settle any dispute over priority of invention with the observation that "there was little merit in a simple conception of this kind, compared with a work practically carried out in all its details, and brought to perfection."[12] The company was persuaded to install this train ferry service for the transportation of goods wagons across theFirth of ForthfromBurntislandinFifetoGranton.[13]The ferry itself was built byThomas Grainger, a partner of the firm Grainger and Miller. The service commenced on 3 February 1850.[14]It was called "The Floating Railway"[15]and intended as a temporary measure until the railway could build a bridge, but this wasnot opened until 1890, its construction delayed in part by repercussions from the catastrophic failure of Thomas Bouch'sTay Rail Bridge.[13] Train-ferry services were used extensively duringWorld War I. From 10 February 1918, high volumes of railway rolling stock, artillery and supplies for the Front were shipped to France from the "secret port" ofRichborough, near Sandwich on the South Coast of England. This involved three train-ferries to be built, each with four sets of railway line on the main deck to allow for up to 54 railway wagons to be shunted directly on and off the ferry. These train-ferries could also be used to transport motor vehicles along with railway rolling stock. Later that month a second train-ferry was established from thePort of Southamptonon the South East Coast. 
In the first month of operations at Richborough, 5,000 tons were transported across the Channel, by the end of 1918 it was nearly 261,000 tons.[16] There were many advantages of the use of train-ferries over conventional shipping in World War I. It was much easier to move the large, heavy artillery and tanks that this kind of modern warfare required using train-ferries as opposed to repeated loading and unloading of cargo. By manufacturers loading tanks, guns and other heavy items for shipping to the front directly on to railway wagons, which could be shunted on to a train-ferry in England and then shunted directly on to the French Railway Network, with direct connections to the Front Lines, many man hours of unnecessary labour were avoided. An analysis done at the time found that to transport 1,000 tons of war material from the point of manufacture to the front by conventional means involved the use of 1,500 labourers, whereas when using train-ferries that number decreased to around 100 labourers. This was of utmost importance, as by 1918, theBritish Railway companieswere experiencing a severe shortage of labour with hundreds of thousands of skilled and unskilled labourers away fighting at the front. The increase of heavy traffic because of the war effort meant that economies and efficiency in transport had to be made wherever possible.[16] After the signing of the Armistice on 11 November 1918, train ferries were used extensively for the return of material from the Front. Indeed, according to war office statistics, a greater tonnage of material was transported by train ferry from Richborough in 1919 than in 1918. As the train ferries had space for motor transport as well as railway rolling stock, thousands of lorries, motor cars and "B Type" buses used these ferries to return to England. DuringWorld War II,landing ships(LST,"Landing Ship, Tank") were the first purpose-built seagoing ships enabling road vehicles to roll directly on and off. 
The Britishevacuation from Dunkirkin 1940 demonstrated to theAdmiraltythat the Allies needed relatively large, seagoing ships capable of shore-to-shore delivery oftanksand other vehicles inamphibious assaultsupon the continent of Europe. As an interim measure, three 4000 to 4800 GRT tankers, built to pass over the restrictive bars ofLake Maracaibo,Venezuela, were selected for conversion because of their shallow draft. Bow doors and ramps were added to these ships, which became the first tank landing ships.[17] The first purpose-built LST design wasHMSBoxer. It was a scaled down design from ideas penned by Churchill. To carry 13Churchillinfantry tanks, 27 vehicles and nearly 200 men (in addition to the crew) at a speed of 18 knots, it could not have the shallow draught that would have made for easy unloading. As a result, each of the three (Boxer,Bruiser, andThruster) ordered in March 1941 had a very long ramp stowed behind the bow doors.[18] In November 1941, a small delegation from the British Admiralty arrived in the United States to pool ideas with theUnited States Navy'sBureau of Shipswith regard to development of ships and also including the possibility of building furtherBoxers in the US.[18]During this meeting, it was decided that the Bureau of Ships would design these vessels. As with the standing agreement these would be built by the US so British shipyards could concentrate on building vessels for theRoyal Navy. The specification called for vessels capable of crossing the Atlantic and the original title given to them was "Atlantic Tank Landing Craft" (Atlantic (T.L.C.)). Calling a vessel 300 ft (91 m) long a "craft" was considered a misnomer and the type was re-christened "Landing Ship, Tank (2)", or "LST (2)". The LST(2) design incorporated elements of the first British LCTs from their designer, Sir Rowland Baker, who was part of the British delegation. 
This included sufficient buoyancy in the ships' sidewalls that they would float even with the tank deck flooded.[18]The LST(2) gave up the speed of HMSBoxerat only 10 knots (19 km/h; 12 mph) but had a similar load while drawing only 3 ft (0.91 m) forward when beaching. In three separate acts dated 6 February 1942, 26 May 1943, and 17 December 1943, Congress provided the authority for the construction of LSTs along with a host of other auxiliaries,destroyer escorts, and assortedlanding craft. The enormous building program quickly gathered momentum. Such a high priority was assigned to the construction of LSTs that the previously laid keel of anaircraft carrierwas hastily removed to make room for several LSTs to be built in her place. The keel of the first LST was laid down on 10 June 1942 atNewport News, Virginia, and the first standardized LSTs were floated out of their building dock in October. Twenty-three were in commission by the end of 1942. At the end of the first world war vehicles were brought back from France toRichborough Port[19]drive-on-drive-off using the train ferry. During the war British servicemen recognised the great potential of landing ships and craft. The idea was simple; if you could drive tanks, guns and lorries directly onto a ship and then drive them off at the other end directly onto a beach, then theoretically you could use the same landing craft to carry out the same operation in the civilian commercial market, providing there were reasonable port facilities. From this idea grew the worldwide roll-on/roll-offferryindustry of today. In the period between the wars Lt. ColonelFrank Bustardformed theAtlantic Steam Navigation Company, with a view to cheap transatlantic travel; this never materialised, but during the war he observed trials onBrighton Sandsof an LST in 1943 when its peacetime capabilities were obvious. In the spring of 1946 the company approached the Admiralty with a request to purchase three of these vessels. 
The Admiralty was unwilling to sell, but after negotiations agreed to let the ASN have the use of three vessels on bareboat charter at a rate of £13 6s 8d per day. These vessels were LSTs 3519, 3534, and 3512. They were renamed Empire Baltic, Empire Cedric, and Empire Celtic, perpetuating the names of White Star Line ships in combination with the "Empire" naming of vessels in government service during the war. On the morning of 11 September 1946 the first voyage of the Atlantic Steam Navigation Company took place when Empire Baltic sailed from Tilbury to Rotterdam with a full load of 64 vehicles for the Dutch Government. The original three LSTs were joined in 1948 by another vessel, LST 3041, renamed Empire Doric, after the ASN was able to convince commercial operators to support the new route between Preston and the Northern Ireland port of Larne. The first sailing on this new route was on 21 May 1948 by Empire Cedric. After the inaugural sailing, Empire Cedric continued on the Northern Ireland service, initially offering a twice-weekly service. Empire Cedric was the first vessel of the ASN fleet to hold a passenger certificate and was allowed to carry fifty passengers. Thus Empire Cedric became the first vessel in the world to operate as a commercial/passenger roll-on/roll-off ferry, and the ASN became the first commercial company to offer this type of service. The first RORO service crossing the English Channel began from Dover in 1953.[20] In 1954, the British Transport Commission (BTC) took over the ASN under the Labour Government's nationalization policy. In 1955 another two LSTs were chartered into the existing fleet, Empire Cymric and Empire Nordic, bringing the fleet strength to seven. The Hamburg service was terminated in 1955, and a new service was opened between Antwerp and Tilbury. The fleet of seven ships was to be split up, with the usual three ships based at Tilbury and the others maintaining the Preston to Northern Ireland service.
During late 1956, the entire fleet of the ASN was taken over for use in the Mediterranean during the Suez Crisis, and the drive-on/drive-off services were not re-established until January 1957. At this point the ASN was made responsible for the management of twelve Admiralty LST(3)s brought out of reserve as a result of the Suez Crisis, too late to see service. The first roll-on/roll-off vessel purpose-built to transport loaded semi-trailer trucks was Searoad of Hyannis, which began operation in 1956. While modest in capacity, it could transport three semi-trailers between Hyannis in Massachusetts and Nantucket Island, even in ice conditions.[21] In 1957, the US military issued a contract to the Sun Shipbuilding and Dry Dock Company in Chester, Pennsylvania, for the construction of a new type of motorized vehicle carrier. The ship, USNS Comet, had a stern ramp as well as interior ramps, which allowed cars to drive directly from the dock, onto the ship, and into place, speeding up loading and unloading dramatically. Comet also had an adjustable chocking system for locking cars onto the decks and a ventilation system to remove exhaust gases that accumulate during vehicle loading. During the 1982 Falklands War, SS Atlantic Conveyor was requisitioned as an emergency aircraft and helicopter transport for British Hawker Siddeley Harrier STOVL fighter planes; one Harrier was kept fueled, armed, and ready to launch vertically for emergency air protection against long-range Argentine aircraft. Atlantic Conveyor was sunk by Argentine Exocet missiles after offloading the Harriers to proper aircraft carriers, but the vehicles and helicopters still aboard were lost.[22] After the war, a concept called the shipborne containerized air-defense system (SCADS) proposed a modular system to quickly convert a large RORO into an emergency aircraft carrier with ski jump, fueling systems, radar, defensive missiles, munitions, crew quarters, and work spaces.
The entire system could be installed in about 48 hours on a container ship or RORO when needed, supporting operations of up to a month without resupply, and could quickly be removed and stored again when the conflict was over.[23] The Soviets, flying Yakovlev Yak-38 fighters, also tested operations using the civilian RORO ships Agostinio Neto and Nikolai Cherkasov.[24]
https://en.wikipedia.org/wiki/Roll-on/roll-off
In computers, a serial decimal numeric representation is one in which ten bits are reserved for each digit, with a different bit turned on depending on which of the ten possible digits is intended. ENIAC and CALDIC used this representation.[1]
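The one-bit-per-digit scheme can be sketched in a few lines. This is an illustrative model of the representation only, not a reconstruction of ENIAC's circuitry; the function names are chosen for this sketch:

```python
# Illustrative sketch of a serial-decimal digit: ten bits per digit,
# with exactly one bit set to indicate which of the ten values is meant.
def encode_digit(d):
    """Return a 10-bit one-hot list for decimal digit d (bit d is on)."""
    if not 0 <= d <= 9:
        raise ValueError("digit must be 0-9")
    return [1 if i == d else 0 for i in range(10)]

def decode_digit(bits):
    """Recover the digit from its one-hot representation."""
    assert sum(bits) == 1, "exactly one of the ten bits must be set"
    return bits.index(1)

print(encode_digit(3))                 # [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(decode_digit(encode_digit(7)))   # 7
```

Note how wasteful the format is compared with binary-coded decimal: ten bits carry one decimal digit, where four would suffice.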
https://en.wikipedia.org/wiki/Serial_decimal
In mathematics, a minimal counterexample is the smallest example which falsifies a claim, and a proof by minimal counterexample is a method of proof which combines the use of a minimal counterexample with the methods of proof by induction and proof by contradiction.[1][2] More specifically, in trying to prove a proposition P, one first assumes by contradiction that it is false, and that therefore there must be at least one counterexample. With respect to some idea of size (which may need to be chosen carefully), one then concludes that there is such a counterexample C that is minimal. In regard to the argument, C is generally something quite hypothetical (since the truth of P excludes the possibility of C), but it may be possible to argue that if C existed, then it would have some definite properties which, after applying some reasoning similar to that in an inductive proof, would lead to a contradiction, thereby showing that the proposition P is indeed true.[3] If the form of the contradiction is that we can derive a further counterexample D that is smaller than C in the sense of the working hypothesis of minimality, then this technique is traditionally called proof by infinite descent; in that case, there may be multiple and more complex ways to structure the argument of the proof. The assumption that if there is a counterexample, there is a minimal counterexample, is based on a well-ordering of some kind. The usual ordering on the natural numbers is clearly possible, by the most usual formulation of mathematical induction; but the scope of the method can include well-ordered induction of any kind. The minimal counterexample method has been much used in the classification of finite simple groups. The Feit–Thompson theorem, that finite simple groups that are not cyclic groups have even order, was proved based on the hypothesis of some, and therefore some minimal, simple group G of odd order.
Every proper subgroup of G could then be assumed to be a solvable group, meaning that much theory of such subgroups could be applied.[4] Euclid's proof of the fundamental theorem of arithmetic is a simple proof which uses a minimal counterexample.[5][6] Courant and Robbins used the term minimal criminal for a minimal counterexample in the context of the four color theorem.[7]
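A small worked example may make the method concrete. The following standard proof (an illustration chosen here, not taken from the cited sources) shows the shape of the argument: assume a counterexample, take a least one, and derive a contradiction.

```latex
\textbf{Claim.} Every integer $n \ge 2$ has a prime divisor.

\textbf{Proof (by minimal counterexample).} Suppose not. By the
well-ordering of the natural numbers there is a \emph{least}
counterexample $C \ge 2$, i.e.\ a smallest integer with no prime
divisor. $C$ cannot be prime, since every prime divides itself; hence
$C = ab$ with $1 < a < C$ and $1 < b < C$. Because $C$ is minimal, the
smaller integer $a$ does have a prime divisor $p$. But $p \mid a$ and
$a \mid C$ imply $p \mid C$, contradicting the choice of $C$. Hence no
counterexample exists, and the claim holds. $\qed$
```

Here the "idea of size" is the usual ordering of the natural numbers, and the contradiction arises exactly as described above: the hypothetical minimal counterexample C turns out to have a property (a prime divisor inherited from a smaller number) that its definition forbids.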
https://en.wikipedia.org/wiki/Minimal_counterexample
A calculation is a deliberate mathematical process that transforms a plurality of inputs into a singular or plurality of outputs, known also as a result or results. The term is used in a variety of senses, from the very definite arithmetical calculation of using an algorithm, to the vague heuristics of calculating a strategy in a competition, or calculating the chance of a successful relationship between two people. For example, multiplying 7 by 6 is a simple algorithmic calculation. Extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation. Statistical estimations of the likely election results from opinion polls also involve algorithmic calculations, but produce ranges of possibilities rather than exact answers. To calculate means to determine mathematically in the case of a number or amount, or in the case of an abstract problem to deduce the answer using logic, reason or common sense.[1] The English word derives from the Latin calculus, which originally meant a pebble (from Latin calx), for instance the small stones used as counters on an abacus (Latin: abacus, Greek: ἄβαξ, romanized: abax). The abacus was an instrument used by Greeks and Romans for arithmetic calculations, preceding the slide rule and the electronic calculator, and consisted of perforated pebbles sliding on iron bars.
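As an example of the "more complex algorithmic calculation" mentioned above, square roots can be extracted by Newton's method. This is one classical algorithm among several, sketched here for illustration:

```python
# Extracting a square root by Newton's method: repeatedly average the
# current guess with x/guess until guess*guess is close enough to x.
def newton_sqrt(x, tolerance=1e-12):
    """Approximate the square root of a non-negative number x."""
    if x < 0:
        raise ValueError("x must be non-negative")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    while abs(guess * guess - x) > tolerance * x:
        guess = (guess + x / guess) / 2  # Newton update for f(g) = g*g - x
    return guess

print(newton_sqrt(2))  # approximately 1.41421356...
```

Unlike multiplying 7 by 6, which finishes in one step, this calculation is iterative: the algorithm refines an approximation until it meets a chosen tolerance.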
https://en.wikipedia.org/wiki/Calculation
Construct validity concerns how well a set of indicators represents or reflects a concept that is not directly measurable.[1][2][3] Construct validation is the accumulation of evidence to support the interpretation of what a measure reflects.[1][4][5][6] Modern validity theory defines construct validity as the overarching concern of validity research, subsuming all other types of validity evidence,[7][8] such as content validity and criterion validity.[9][10] Construct validity is the appropriateness of inferences made on the basis of observations or measurements (often test scores), specifically whether a test can reasonably be considered to reflect the intended construct. Constructs are abstractions that are deliberately created by researchers in order to conceptualize the latent variable, which is correlated with scores on a given measure (although it is not directly observable). Construct validity examines the question: does the measure behave like the theory says a measure of that construct should behave? Construct validity is essential to the perceived overall validity of the test, and is particularly important in the social sciences, psychology, psychometrics and language studies. Psychologists such as Samuel Messick (1998) have pushed for a unified view of construct validity "...as an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores..."[11] While Messick's views are popularized in educational measurement and originated in a career spent explaining validity in the context of the testing industry, a definition more in line with foundational psychological research, supported by data-driven empirical studies that emphasize statistical and causal reasoning, was given by Borsboom et al. (2004).[12] Key to construct validity are the theoretical ideas behind the trait under consideration, i.e.
the concepts that organize how aspects of personality, intelligence, etc. are viewed.[13] Paul Meehl states that, "The best construct is the one around which we can build the greatest number of inferences, in the most direct fashion."[1] Scale purification, i.e. "the process of eliminating items from multi-item scales" (Wieland et al., 2017), can influence construct validity. A framework presented by Wieland et al. (2017) highlights that both statistical and judgmental criteria need to be taken into consideration when making scale purification decisions.[14] Throughout the 1940s scientists had been trying to come up with ways to validate experiments prior to publishing them. The result was a plethora of different validities (intrinsic validity, face validity, logical validity, empirical validity, etc.), which made it difficult to tell which ones were actually the same and which were not useful at all. Until the middle of the 1950s, there were very few universally accepted methods to validate psychological experiments, mainly because no one had figured out exactly which qualities of the experiments should be examined before publishing. Between 1950 and 1954 the APA Committee on Psychological Tests met and discussed the issues surrounding the validation of psychological experiments.[1] Around this time the term construct validity was first coined by Paul Meehl and Lee Cronbach in their seminal article "Construct Validity in Psychological Tests". They noted that the idea of construct validity was not new at that point; rather, it was a combination of many different types of validity dealing with theoretical concepts, and they proposed three steps to evaluate it. Many psychologists noted that an important role of construct validation in psychometrics was that it placed more emphasis on theory as opposed to validation.
This emphasis was designed to address a core requirement that validation include some demonstration that the test measures the theoretical construct it purports to measure. Construct validity has three aspects or components: the substantive component, the structural component, and the external component.[15] They are closely related to three stages in the test construction process: constitution of the pool of items, analysis and selection of the internal structure of the pool of items, and correlation of test scores with criteria and other variables. In the 1970s there was growing debate between theorists who began to see construct validity as the dominant model, pushing towards a more unified theory of validity, and those who continued to work from multiple validity frameworks.[16] Many psychologists and education researchers saw "predictive, concurrent, and content validities as essentially ad hoc; construct validity was the whole of validity from a scientific point of view".[15] The 1974 version of The Standards for Educational and Psychological Testing recognized the inter-relatedness of the three different aspects of validity: "These aspects of validity can be discussed independently, but only for convenience. They are interrelated operationally and logically; only rarely is one of them alone important in a particular situation". In 1989 Messick presented a new conceptualization of construct validity as a unified and multi-faceted concept.[17] Under this framework, all forms of validity are connected to, and dependent on, the quality of the construct. He noted that a unified theory was not his own idea, but rather the culmination of debate and discussion within the scientific community over the preceding decades. Messick's unified theory of construct validity comprises six aspects.[18] How construct validity should properly be viewed is still a subject of debate for validity theorists.
The core of the difference lies in an epistemological difference between positivist and postpositivist theorists. Evaluation of construct validity requires that the correlations of the measure be examined in regard to variables that are known to be related to the construct (purportedly measured by the instrument being evaluated, or for which there are theoretical grounds for expecting a relationship). This is consistent with the multitrait-multimethod matrix (MTMM) of examining construct validity described in Campbell and Fiske's landmark paper (1959).[19] There are other methods to evaluate construct validity besides the MTMM: it can be evaluated through different forms of factor analysis, structural equation modeling (SEM), and other statistical evaluations.[20][21] It is important to note that a single study does not prove construct validity; rather, it is a continuous process of evaluation, reevaluation, refinement, and development. Correlations that fit the expected pattern contribute evidence of construct validity. Construct validity is a judgment based on the accumulation of correlations from numerous studies using the instrument being evaluated.[22] Most researchers attempt to test construct validity before the main research. To do this, pilot studies may be utilized: small-scale preliminary studies aimed at testing the feasibility of a full-scale test. These pilot studies establish the strength of the research and allow any necessary adjustments to be made. Another method is the known-groups technique, which involves administering the measurement instrument to groups expected to differ due to known characteristics. Hypothesized relationship testing involves logical analysis based on theory or prior research.[6] Intervention studies are yet another method of evaluating construct validity: studies in which a group with low scores in the construct is tested, taught the construct, and then re-measured can demonstrate a test's construct validity.
If there is a significant difference between pre-test and post-test scores, as analyzed by statistical tests, then this may demonstrate good construct validity.[23] Convergent and discriminant validity are the two subtypes of validity that make up construct validity. Convergent validity refers to the degree to which two measures of constructs that theoretically should be related are, in fact, related. In contrast, discriminant validity tests whether concepts or measurements that are supposed to be unrelated are, in fact, unrelated.[19] Take, for example, a construct of general happiness. If a measure of general happiness had convergent validity, then constructs similar to happiness (satisfaction, contentment, cheerfulness, etc.) should relate positively to the measure of general happiness. If this measure has discriminant validity, then constructs that are not supposed to relate positively to general happiness (sadness, depression, despair, etc.) should not relate to the measure of general happiness. Measures can have one of the subtypes of construct validity and not the other. Using the example of general happiness, a researcher could create an inventory where there is a very high positive correlation between general happiness and contentment, but if there is also a significant positive correlation between happiness and depression, then the measure's construct validity is called into question: the test has convergent validity but not discriminant validity. Lee Cronbach and Paul Meehl (1955)[1] proposed that the development of a nomological net was essential to the measurement of a test's construct validity. A nomological network defines a construct by illustrating its relation to other constructs and behaviors. It is a representation of the concepts (constructs) of interest in a study, their observable manifestations, and the interrelationships among them.
It examines whether the relationships between similar constructs are consistent with the relationships between the observed measures of those constructs. A thorough observation of constructs' relationships to each other can also generate new constructs. For example, intelligence and working memory are considered highly related constructs. Through the observation of their underlying components, psychologists developed new theoretical constructs such as controlled attention[24] and short-term loading.[25] Creating a nomological net can also make the observation and measurement of existing constructs more efficient by pinpointing errors.[1] Researchers have found that the bumps on the human skull studied by phrenology are not indicators of intelligence, but the volume of the brain is. Removing the theory of phrenology from the nomological net of intelligence and adding the theory of brain mass evolution makes constructs of intelligence more efficient and more powerful. The weaving of all of these interrelated concepts and their observable traits creates a "net" that supports their theoretical concept. For example, in the nomological network for academic achievement, we would expect observable traits of academic achievement (i.e. GPA, SAT, and ACT scores) to relate to the observable traits for studiousness (hours spent studying, attentiveness in class, detail of notes). If they do not, then there is a problem with measurement (of academic achievement or studiousness), or with the purported theory of achievement. If they are indicators of one another, then the nomological network, and therefore the constructed theory, of academic achievement is strengthened. Although the nomological network proposes a theory of how to strengthen constructs, it does not tell us how we can assess construct validity in a study.
The multitrait-multimethod matrix (MTMM) is an approach to examining construct validity developed by Campbell and Fiske (1959).[19] This model examines convergence (evidence that different measurement methods of a construct give similar results) and discriminability (the ability to differentiate the construct from other related constructs). It considers six aspects: the evaluation of convergent validity, the evaluation of discriminant (divergent) validity, trait-method units, multitrait-multimethods, truly different methodologies, and trait characteristics. This design allows investigators to test for "convergence across different measures...of the same 'thing'...and for divergence between measures...of related but conceptually distinct 'things'".[2][26] Apparent construct validity can be misleading due to a range of problems in hypothesis formulation and experimental design. An in-depth exploration of the threats to construct validity is presented in Trochim.[31]
https://en.wikipedia.org/wiki/Construct_validity
Microcredit for water supply and sanitation is the application of microcredit to provide loans to small enterprises and households in order to increase access to an improved water source and sanitation in developing countries. Most investments in water supply and sanitation infrastructure are financed by the public sector, but investment levels have been insufficient to achieve universal access, and commercial credit to public utilities has been limited by low tariffs and insufficient cost-recovery. Microcredits are a complementary or alternative approach to allow the poor to gain access to water supply and sanitation.[1][2] Funding is allocated either to small-scale independent water providers who generate an income stream from selling water, or to households in order to finance house connections, plumbing installations, or on-site sanitation such as latrines. Many microfinance institutions have only limited experience with financing investments in water supply and sanitation.[3] While there have been many pilot projects in both urban and rural areas, only a small number of these have been expanded.[4][5] A water connection can significantly lower a family's water expenditures if it previously had to rely on water vendors, allowing the cost savings to repay the credit. The time previously required to physically fetch water can be put to income-generating purposes, and investments in sanitation provide health benefits that can also translate into increased income.[6] There are three broad types of microcredit products in the water sector.[3] Microcredits can be targeted specifically at water and sanitation, or general-purpose microcredits may be used for this purpose. Such use is typically to finance household water and sewerage connections, bathrooms, toilets, pit latrines, rainwater harvesting tanks or water purifiers. The loans are generally US$30–250, with a tenure of less than three years.
Microfinance institutions, such as Grameen Bank, the Vietnam Bank for Social Policies, and numerous microfinance institutions in India and Kenya, offer credits to individuals for water and sanitation facilities. Non-government organisations (NGOs) that are not microfinance institutions, such as Dustha Shasthya Kendra (DSK) in Bangladesh or Community Integrated Development Initiatives in Uganda, also provide credits for water supply and sanitation. The potential market size is considered huge in both rural and urban areas, and some of these water and sanitation schemes have achieved a significant scale. Nevertheless, compared to the microfinance institutions' overall size, they still play a minor role.[3] In 1999, all microfinance institutions in Bangladesh and more recently in Vietnam had reached only about 9 percent and 2.4 percent of rural households respectively.[citation needed][needs update] In either country, water and sanitation amount to less than two percent of the microfinance institutions' total portfolio.[citation needed] However, borrowers for water supply and sanitation comprised 30 percent of total borrowers for Grameen Bank and 10 percent of total borrowers from the Vietnam Bank for Social Policies.[citation needed] For instance, the water and sanitation portfolio of the Indian microfinance institution SEWA Bank comprised 15 percent of all loans provided in the city of Ahmedabad over a period of five years.[citation needed] The US-based NGO Water.org, through its WaterCredit initiative, had since 2003 supported microfinance institutions and NGOs in India, Bangladesh, Kenya and Uganda in providing microcredit for water supply and sanitation. As of 2011, it had helped its 13 partner organisations to make 51,000 credits.[needs update] The organisation claimed a 97% repayment rate and stated that 90% of its borrowers were women.[7] WaterCredit did not subsidise interest rates and typically did not make microcredits directly.
Instead, it connected microfinance institutions with water and sanitation NGOs to develop water and sanitation microcredits, including through market assessments and capacity-building. Only in exceptional cases did it provide guarantees, standing letters of credit or the initial capital to establish a revolving fund managed by an NGO that was not previously engaged in microcredit.[6] Since 2003, Bank Rakyat Indonesia has financed water connections with the water utility PDAM through microcredits with support from the USAID Environmental Services Program. According to an impact assessment conducted in 2005, the program helped the utility to increase its customer base by 40%, which reduced its costs per cubic meter of water sold by 42% and reduced its non-revenue water from 56.5% in 2002 to 36% at the end of 2004.[8] In 1999, the World Bank, in cooperation with the governments of Australia, Finland and Denmark, supported the creation of a Sanitation Revolving Fund with an initial working capital of US$3 million. The project was carried out in the cities of Danang, Haiphong, and Quang Ninh. The aim was to provide small loans (US$145) to low-income households for targeted sanitation investments such as septic tanks, urine-diverting/composting latrines or sewer connections. Participating households had to join a savings and credit group of 12 to 20 people, who were required to live near each other to ensure community control. The loans had a catalyst effect on household investment: with loans covering approximately two-thirds of investment costs, households had to find complementary sources of finance (from family and friends). In contrast to a centralised, supply-driven approach, where government institutions design a project with little community consultation and no capacity-building for the community, this approach was strictly demand-driven and thus required the Sanitation Revolving Fund to develop awareness-raising campaigns for sanitation.
Managed by the microfinance-experienced Women's Union of Vietnam, the Sanitation Revolving Fund gave 200,000 households the opportunity to finance and build sanitation facilities over a period of seven years. With a leverage effect of up to 25 times the amount of public spending on household investment and repayment rates of almost 100 percent, the fund is seen as a best-practice example by its financiers. In 2009 it was being considered for scaling up with further support from the World Bank and the Vietnam Bank for Social Policies.[9][needs update] Small and medium enterprise (SME) loans are used for investments by community groups, for private providers in greenfield contexts, or for rehabilitation measures of water supply and sanitation. Supplied by mature microfinance institutions, these loans are seen as suitable for other suppliers in the value chain, such as pit latrine emptiers and tanker suppliers. With the right conditions, such as a solid policy environment and clear institutional relationships, there is a market potential for small-scale water supply projects. In comparison to retail loans at the household level, experience with loan products for SMEs is fairly limited; these loan programs remain mostly at the pilot level. However, the design of some recent projects using microcredits for community-based service providers in some African countries (such as those of the K-Rep Bank in Kenya and in Togo) shows a sustainable expansion potential. In the case of Kenya's K-Rep Bank, the Water and Sanitation Program, which facilitated the project, is already exploring a country-wide scaling up.[citation needed] Kenya has numerous community-managed small-water enterprises. The Water and Sanitation Program (WSP) has launched an initiative to use microcredits to promote these enterprises. As part of this initiative, the commercial microfinance bank K-Rep Bank provided loans to 21 community-managed water projects.
The Global Partnership on Output-based Aid (GPOBA) supported the programme by providing partial subsidies. Each project is pre-financed with a credit of up to 80 percent of the project costs (averaging US$80,000). After an independent verification process certifying successful completion, part of the loan is refinanced by a 40 percent output-based aid subsidy. The remaining loan repayments have to be generated from water revenues. In addition, technical-assistance grants are provided to assist with project development. In Togo, CREPA (Centre Régional pour l'Eau Potable et l'Assainissement à Faible Coût) had encouraged the liberalisation of water services in 2001. As a consequence, six domestic microfinance institutions were preparing microcredit schemes for a shallow borehole (US$3,000) or a rainwater-harvesting tank (US$1,000). The loans were originally dedicated to households acting as small private providers, selling water in bulk or in buckets; however, the funds were disbursed directly to the private (drilling) companies. In the period from 2001 to 2006, roughly 1,200 water points were built and have been used for small-business activities by the households which participated in the programme.[10][11][needs update] This type of credit has not been used widely.
https://en.wikipedia.org/wiki/Microcredit_for_water_supply_and_sanitation
In logic, a functionally complete set of logical connectives or Boolean operators is one that can be used to express all possible truth tables by combining members of the set into a Boolean expression.[1][2] A well-known complete set of connectives is {AND, NOT}. Each of the singleton sets {NAND} and {NOR} is functionally complete. However, the set {AND, OR} is incomplete, due to its inability to express NOT. A gate (or set of gates) that is functionally complete can also be called a universal gate (or a universal set of gates). In a context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.[3] From the point of view of digital electronics, functional completeness means that every possible logic gate can be realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from either only binary NAND gates, or only binary NOR gates. Modern texts on logic typically take as primitive some subset of the connectives: conjunction (∧), disjunction (∨), negation (¬), material conditional (→), and possibly the biconditional (↔). Further connectives can be defined, if so desired, by defining them in terms of these primitives. For example, NOR (the negation of the disjunction, sometimes denoted ↓) can be expressed as the conjunction of two negations: A ↓ B := ¬A ∧ ¬B. Similarly, the negation of the conjunction, NAND (sometimes denoted ↑), can be defined in terms of disjunction and negation: A ↑ B := ¬A ∨ ¬B. Every binary connective can be defined in terms of {¬, ∧, ∨, →, ↔}, which means that this set is functionally complete.
However, it contains redundancy: this set is not a minimal functionally complete set, because the conditional and biconditional can be defined in terms of the other connectives, as A → B := ¬A ∨ B and A ↔ B := (A → B) ∧ (B → A). It follows that the smaller set {¬, ∧, ∨} is also functionally complete. (Its functional completeness is also proved by the Disjunctive Normal Form Theorem.)[4] But this is still not minimal, as ∨ can be defined as A ∨ B := ¬(¬A ∧ ¬B). Alternatively, ∧ may be defined in terms of ∨ in a similar manner, or ∨ may be defined in terms of →: A ∨ B := ¬A → B. No further simplifications are possible. Hence, every two-element set of connectives containing ¬ and one of {∧, ∨, →} is a minimal functionally complete subset of {¬, ∧, ∨, →, ↔}. Given the Boolean domain B = {0, 1}, a set F of Boolean functions f_i : B^(n_i) → B is functionally complete if the clone on B generated by the basic functions f_i contains all functions f : B^n → B, for all strictly positive integers n ≥ 1. In other words, the set is functionally complete if every Boolean function that takes at least one variable can be expressed in terms of the functions f_i. Since every Boolean function of at least one variable can be expressed in terms of binary Boolean functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms of the functions in F. A more natural condition would be that the clone generated by F consist of all functions f : B^n → B, for all integers n ≥ 0. However, the examples given above are not functionally complete in this stronger sense, because it is not possible to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary function. With this stronger definition, the smallest functionally complete sets would have 2 elements.
Another natural condition would be that the clone generated by F together with the two nullary constant functions be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition is strictly weaker than functional completeness.[5][6][7] Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of the following sets of connectives: the truth-preserving connectives, the falsity-preserving connectives, the monotone connectives, the self-dual connectives, and the affine connectives. Post gave a complete description of the lattice of all clones (sets of operations closed under composition and containing all projections) on the two-element set {T, F}, nowadays called Post's lattice, which implies the above result as a simple corollary: the five mentioned sets of connectives are exactly the maximal nontrivial clones.[8] When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer function[9] or sometimes a sole sufficient operator. There are no unary operators with this property. NAND and NOR, which are dual to each other, are the only two binary Sheffer functions. These were discovered, but not published, by Charles Sanders Peirce around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913.[10] In digital electronics terminology, the binary NAND gate (↑) and the binary NOR gate (↓) are the only binary universal logic gates. The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:[11] There are no minimal functionally complete sets of more than three at-most-binary logical connectives.[11] In order to keep the lists above readable, operators that ignore one or more inputs have been omitted. For example, an operator that ignores the first input and outputs the negation of the second can be replaced by a unary negation.
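Post's criterion makes the Sheffer-function claim mechanically checkable: a single connective is complete on its own exactly when it lies in none of the five maximal clones. A sketch under that criterion (the predicate names are our own):

```python
from itertools import product

# A binary connective is a 4-tuple truth table indexed by 2*x + y.
def preserves_0(f): return f[0] == 0           # f(0, 0) = 0
def preserves_1(f): return f[3] == 1           # f(1, 1) = 1

def monotone(f):
    # Input pairs ordered bitwise: (0,0)≤(0,1), (0,0)≤(1,0), etc.
    return all(f[a] <= f[b] for a, b in [(0, 1), (0, 2), (1, 3), (2, 3)])

def self_dual(f):
    # f(¬x, ¬y) = ¬f(x, y) means entries i and 3-i always differ.
    return all(f[i] != f[3 - i] for i in range(4))

def affine(f):
    # f is affine iff f(x, y) = c ⊕ ax ⊕ by over GF(2).
    c, a, b = f[0], f[0] ^ f[2], f[0] ^ f[1]
    return all(f[2 * x + y] == c ^ (a & x) ^ (b & y)
               for x, y in product((0, 1), repeat=2))

def is_sheffer(f):
    # Post's criterion: complete alone iff in none of the five clones.
    return not (preserves_0(f) or preserves_1(f) or monotone(f)
                or self_dual(f) or affine(f))

sheffer = [f for f in product((0, 1), repeat=4) if is_sheffer(f)]
print(sheffer)   # only NOR (1,0,0,0) and NAND (1,1,1,0)
```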
Note that an electronic circuit or a software function can be optimized by reuse, to reduce the number of gates. For instance, the "A ∧ B" operation, when expressed by ↑ gates, is implemented with the reuse of "A ↑ B": A ∧ B = (A ↑ B) ↑ (A ↑ B). Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains. For example, a set of reversible gates is called functionally complete if it can express every reversible operator. The 3-input Fredkin gate is a functionally complete reversible gate by itself, a sole sufficient operator. There are many other three-input universal logic gates, such as the Toffoli gate. In quantum computing, the Hadamard gate and the T gate are universal, albeit with a slightly more restrictive definition than that of functional completeness. There is an isomorphism between the algebra of sets and the Boolean algebra; that is, they have the same structure. Then, if we map Boolean operators onto set operators, the "translated" text above is also valid for sets: there are many minimal complete sets of set-theory operators that can generate any other set relations. The more popular minimal complete operator sets are {¬, ∩} and {¬, ∪}. If the universal set is forbidden, set operators are restricted to being falsity (Ø) preserving, and cannot be equivalent to a functionally complete Boolean algebra.
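The gate-reuse point can be made concrete: computing A ↑ B once and feeding it to both inputs of a second NAND yields A ∧ B with two gates instead of three. A small illustrative sketch:

```python
def nand(a, b):
    """Binary NAND on bits: ¬(a ∧ b)."""
    return 1 - (a & b)

def and_with_reuse(a, b):
    t = nand(a, b)      # compute A ↑ B once...
    return nand(t, t)   # ...and reuse it: A ∧ B = (A↑B) ↑ (A↑B), 2 gates total

# Exhaustive check against the AND truth table.
for a in (0, 1):
    for b in (0, 1):
        assert and_with_reuse(a, b) == (a & b)
print("A ∧ B realized with 2 NAND gates via reuse")
```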
https://en.wikipedia.org/wiki/Sole_sufficient_operator
In mathematics, in the branch of complex analysis, a holomorphic function on an open subset of the complex plane is called univalent if it is injective.[1][2] The function f : z ↦ 2z + z² is univalent in the open unit disc, as f(z) = f(w) implies that f(z) − f(w) = (z − w)(z + w + 2) = 0. As the second factor is non-zero in the open unit disc, z = w, so f is injective. One can prove that if G and Ω are two open connected sets in the complex plane, and f : G → Ω is a univalent function such that f(G) = Ω (that is, f is surjective), then the derivative of f is never zero, f is invertible, and its inverse f⁻¹ is also holomorphic. Moreover, by the chain rule one has (f⁻¹)′(f(z)) = 1/f′(z) for all z in G. For real analytic functions, unlike for complex analytic (that is, holomorphic) functions, these statements fail to hold. For example, consider the function given by f(x) = x³. This function is clearly injective, but its derivative is 0 at x = 0, and its inverse is not analytic, or even differentiable, on the whole interval (−1, 1). Consequently, if we enlarge the domain to an open subset G of the complex plane, it must fail to be injective; and this is the case, since (for example) f(εω) = f(ε) (where ω is a primitive cube root of unity and ε is a positive real number smaller than the radius of G as a neighbourhood of 0). This article incorporates material from univalent analytic function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
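The injectivity argument for f(z) = 2z + z² can be illustrated numerically (a sanity check on random samples, not a proof): the second factor z + w + 2 in the factorization of f(z) − f(w) stays bounded away from zero on the open unit disc, since |z + w + 2| ≥ 2 − |z| − |w| > 0 there.

```python
import cmath
import itertools
import random

def f(z):
    # The univalent example from the text: f(z) = 2z + z^2.
    return 2 * z + z * z

# Sample points inside the open unit disc.
random.seed(1)
pts = [cmath.rect(random.uniform(0, 0.999), random.uniform(0, 2 * cmath.pi))
       for _ in range(300)]

# f(z) - f(w) = (z - w)(z + w + 2); check the second factor never vanishes.
min_factor = min(abs(z + w + 2) for z, w in itertools.combinations(pts, 2))
print(min_factor > 0)   # True: the factor z + w + 2 stays away from zero
```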
https://en.wikipedia.org/wiki/Univalent_function
Great Western Railway telegraphic codes were a commercial telegraph code used to shorten the telegraphic messages sent between the stations and offices of the railway. The codes listed below are taken from the 1939 edition of the Telegraph Message Codebook[1] unless stated otherwise. The Great Western Railway (GWR) pioneered telegraph communication over the 13 miles (21 km) from Paddington to West Drayton on 9 April 1839 using the Cooke and Wheatstone telegraph equipment. Although this early system fell into disuse after a few years, from 1850 a new contract with the Electric Telegraph Company saw double-needle telegraphs working at most stations on the line; these were replaced by single-needle machines from 1860.[2] Although used primarily as a safety device to regulate the passage of trains, the telegraph was also used to pass messages between the staff. In order to do this quickly and accurately, a number of code words were used to replace complicated or regularly used phrases. The codes were changed from time to time to reflect current needs. By 1922 most railways in the country had agreed on standard code words, although the GWR had an extended list of codes that could only be used within its own network. In 1943 all railways were brought into a single system of codes and the GWR special codes were discontinued.[3] Note: many of these codes could have an extra letter to identify variations, such as Mink A (a 16 ft (4.9 m) ventilated van) or Mink G (a 21 ft (6.4 m) ordinary van). Most of these codes were painted onto the wagons for easy identification. Note: many of these codes could have an extra letter to identify variations, such as Scorpion C (a 45 ft (14 m) carriage truck) or Scorpion D (a 21 ft (6.4 m) carriage truck).
The 1939 Telegraph Message Codebook contains in excess of 900 code words (around half of which were standard codes also used by other railways), yet very few were the familiar codes seen painted on the side of goods wagons.[1] By using these codes, long and complex sentences could be sent using just a few words. Some examples of the codes representing phrases include:
https://en.wikipedia.org/wiki/Great_Western_Railway_telegraphic_codes
In seismology, the Gutenberg–Richter law[1] (GR law) expresses the relationship between the magnitude and total number of earthquakes of at least that magnitude in any given region and time period: log₁₀ N = a − bM, or N = 10^(a − bM), where N is the number of events having a magnitude ≥ M, and a and b are constants. Since magnitude is logarithmic, this is an instance of the Pareto distribution. The Gutenberg–Richter law is also widely used for acoustic emission analysis due to a close resemblance of the acoustic emission phenomenon to seismogenesis. The relationship between earthquake magnitude and frequency was first proposed by Charles Francis Richter and Beno Gutenberg in a 1944 paper studying earthquakes in California,[2][3] and generalised in a worldwide study in 1949.[4] This relationship between event magnitude and frequency of occurrence is remarkably common, although the values of a and b may vary significantly from region to region or over time. The parameter b (commonly referred to as the "b-value") is commonly close to 1.0 in seismically active regions. This means that for a given frequency of magnitude 4.0 or larger events there will be 10 times as many magnitude 3.0 or larger quakes and 100 times as many magnitude 2.0 or larger quakes. There is some variation of b-values in the approximate range of 0.5 to 2 depending on the source environment of the region.[5] A notable example of this is during earthquake swarms, when b can become as high as 2.5, indicating a very high proportion of small earthquakes to large ones. There is debate concerning the interpretation of some observed spatial and temporal variations of b-values. The most frequently cited factors to explain these variations are: the stress applied to the material,[6] the depth,[7] the focal mechanism,[8] the strength heterogeneity of the material,[9] and the proximity of macro-failure.
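The 10×/100× scaling described above follows directly from the law. A quick sketch with illustrative constants a = 5 and b = 1 (these values are made up for the example):

```python
# Gutenberg–Richter law: log10 N(M) = a - b*M, so N(M) = 10**(a - b*M),
# where N(M) is the number of events with magnitude >= M.  With b = 1,
# each unit drop in magnitude multiplies the event count by 10.
a, b = 5.0, 1.0

def N(M):
    return 10 ** (a - b * M)

print(N(4.0))            # 10.0 events of magnitude >= 4
print(N(3.0) / N(4.0))   # 10.0x as many events of magnitude >= 3
print(N(2.0) / N(4.0))   # 100.0x as many events of magnitude >= 2
```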
The b-value decrease observed prior to the failure of samples deformed in the laboratory[10] has led to the suggestion that this is a precursor to major macro-failure.[11] Statistical physics provides a theoretical framework for explaining both the steadiness of the Gutenberg–Richter law for large catalogs and its evolution when macro-failure is approached, but application to earthquake forecasting is currently out of reach.[12] Alternatively, a b-value significantly different from 1.0 may suggest a problem with the data set, e.g. that it is incomplete or contains errors in calculating magnitude. There is an apparent b-value decrease for smaller magnitude event ranges in all empirical catalogues of earthquakes. This effect is described as "roll-off" of the b-value, a description due to the plot of the logarithmic version of the GR law becoming flatter at the low-magnitude end of the plot. This may in large part be caused by incompleteness of any data set due to the inability to detect and characterize small events. That is, many low-magnitude earthquakes are not catalogued because fewer stations detect and record them due to decreasing instrumental signal-to-noise levels. Some modern models of earthquake dynamics, however, predict a physical roll-off in the earthquake size distribution.[13] The a-value represents the total seismicity rate of the region. This is more easily seen when the GR law is expressed in terms of the total number of events: N = 10^a · 10^(−bM), where 10^a is the total number of events (above M = 0). Since 10^a is the total number of events, 10^(−bM) must be the probability of those events. Modern attempts to understand the law involve theories of self-organized criticality or self-similarity. New models show a generalization of the original Gutenberg–Richter model. Among these is the one released by Oscar Sotolongo-Costa and A. Posadas in 2004,[14] of which R.
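In practice, b-values are commonly fitted by maximum likelihood; one standard choice is the Aki estimator, b = log₁₀(e)/(mean(M) − M_c), for a catalogue complete above magnitude M_c. A sketch on a synthetic catalogue (the catalogue size, completeness magnitude, and true b are made up for illustration):

```python
import math
import random

def b_value(magnitudes, m_c):
    """Maximum-likelihood (Aki) b-value estimate for a continuous
    magnitude catalogue complete above m_c."""
    mags = [m for m in magnitudes if m >= m_c]
    return math.log10(math.e) / (sum(mags) / len(mags) - m_c)

# Synthetic catalogue drawn from the GR law with a true b of 1.0:
# magnitudes above m_c are exponentially distributed with rate b*ln(10).
random.seed(42)
m_c, b_true = 2.0, 1.0
catalogue = [m_c + random.expovariate(b_true * math.log(10))
             for _ in range(50_000)]

print(b_value(catalogue, m_c))   # close to the true value of 1.0
```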
Silva et al. presented the following modified form in 2006,[15] where N is the total number of events, a is a proportionality constant, and q represents the non-extensivity parameter introduced by Constantino Tsallis to characterize systems not explained by the Boltzmann–Gibbs statistical form for equilibrium physical systems. It is possible to see in an article published by N. V. Sarlis, E. S. Skordas, and P. A. Varotsos[16] that above some magnitude threshold this equation reduces to the original Gutenberg–Richter form. In addition, another generalization was obtained from the solution of the generalized logistic equation.[17] In this model, values of the parameter b were found for events recorded in the Central Atlantic, the Canary Islands, the Magellan Mountains and the Sea of Japan. The generalized logistic equation has been applied to acoustic emission in concrete by N. Burud and J. M. Chandra Kishen.[18] Burud showed that the b-value obtained from the generalized logistic equation increases monotonically with damage, and referred to it as a damage-compliant b-value. A new generalization was published using Bayesian statistical techniques,[19] from which an alternative form for the parameter b of Gutenberg–Richter is presented. The model was applied to intense earthquakes that occurred in Chile from 2010 to 2016.
https://en.wikipedia.org/wiki/Gutenberg%E2%80%93Richter_law
Discounted maximum loss, also known as worst-case risk measure, is the present value of the worst-case scenario for a financial portfolio. In investment, in order to protect the value of an investment, one must consider all possible alternatives to the initial investment. How one does this comes down to personal preference; however, the worst possible alternative is generally considered to be the benchmark against which all other options are measured. The present value of this worst possible outcome is the discounted maximum loss. Given a finite state space S, let X be a portfolio with profit X_s for s ∈ S. If X_{1:S}, ..., X_{S:S} is the order statistic, the discounted maximum loss is simply −δX_{1:S}, where δ is the discount factor. Given a general probability space (Ω, F, P), let X be a portfolio with discounted return δX(ω) for state ω ∈ Ω. Then the discounted maximum loss can be written as −ess.inf δX = −δ sup{x ∈ ℝ : P(X ≥ x) = 1}, where ess.inf denotes the essential infimum.[1] As an example, assume that a portfolio is currently worth 100, and the discount factor is 0.8 (corresponding to an interest rate of 25%). In this case the maximum loss is from 100 down to a worst outcome of 20, i.e. a loss of 80, so the discounted maximum loss is simply 80 × 0.8 = 64.
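The numerical example can be traced in a few lines. The intermediate outcome values other than the worst case of 20 are hypothetical; only the current value, the discount factor, and the worst outcome come from the text:

```python
# Worked example from the text: portfolio worth 100 today, discount
# factor 0.8 (a 25% interest rate), worst possible future value 20.
outcomes = [140, 100, 80, 20]   # hypothetical future portfolio values
current_value = 100
delta = 0.8

max_loss = current_value - min(outcomes)   # 100 - 20 = 80
discounted_max_loss = delta * max_loss     # 0.8 * 80 = 64
print(discounted_max_loss)                 # 64.0
```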
https://en.wikipedia.org/wiki/Discounted_maximum_loss
In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), Zhegalkin normal form, or Reed–Muller expansion is a way of writing propositional logic formulas in one of three subforms: Formulas written in ANF are also known as Zhegalkin polynomials and Positive Polarity (or Parity) Reed–Muller expressions (PPRM).[1] ANF is a canonical form, which means that two logically equivalent formulas will convert to the same ANF, easily showing whether two formulas are equivalent for automated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names; conjunctive and disjunctive normal forms also require recording whether each variable is negated or not. Negation normal form is unsuitable for determining equivalence, since on negation normal forms, equivalence does not imply equality: a ∨ ¬a is not reduced to the same thing as 1, even though they are logically equivalent. Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear-feedback shift registers): a linear function is one that is a sum of single literals. Properties of nonlinear-feedback shift registers can also be deduced from certain properties of the feedback function in ANF. There are straightforward ways to perform the standard Boolean operations on ANF inputs in order to get ANF results. XOR (logical exclusive disjunction) is performed directly. NOT (logical negation) is XORing with 1.[2] AND (logical conjunction) is distributed algebraically.[3] OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b)[4] (easier when both operands have purely true terms) or a ⊕ b ⊕ ab[5] (easier otherwise). Each variable in a formula is already in pure ANF, so one only needs to perform the formula's Boolean operations as shown above to get the entire formula into ANF.
ANF is sometimes described in an equivalent way: there are only four functions with one argument, and to represent a function with multiple arguments one can use the following equality: f(x₁, ..., xₙ) = g(x₂, ..., xₙ) ⊕ x₁·h(x₂, ..., xₙ), where g(x₂, ..., xₙ) = f(0, x₂, ..., xₙ) and h(x₂, ..., xₙ) = f(0, x₂, ..., xₙ) ⊕ f(1, x₂, ..., xₙ). Since both g and h have fewer arguments than f, it follows that using this process recursively we will finish with functions with one variable. For example, let us construct the ANF of f(x, y) = x ∨ y (logical or):
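The recursive construction above is equivalent to the binary Möbius transform: the ANF coefficient of the monomial with variable support S is the XOR of f over all inputs lying bitwise below S. A sketch (the helper name is our own):

```python
from itertools import product

def anf_coefficients(f, n):
    """Zhegalkin/ANF coefficients of an n-variable Boolean function via
    the Möbius transform over GF(2): the coefficient of the monomial
    with support s is the XOR of f over all inputs t with t ⊆ s."""
    coeff = {}
    for s in product((0, 1), repeat=n):
        total = 0
        for t in product((0, 1), repeat=n):
            if all(ti <= si for ti, si in zip(t, s)):   # t ⊆ s bitwise
                total ^= f(*t)
        coeff[s] = total
    return coeff

# ANF of logical OR:  x ∨ y  =  x ⊕ y ⊕ xy
print(anf_coefficients(lambda x, y: x | y, 2))
# {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```

The three nonzero coefficients correspond to the monomials y, x, and xy, recovering the expansion x ⊕ y ⊕ xy derived in the text.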
https://en.wikipedia.org/wiki/Algebraic_normal_form
In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.[2][Note 1] The first definition is the basis for the abbreviation NP: "nondeterministic, polynomial time". These two definitions are equivalent because the algorithm based on the Turing machine consists of two phases, the first of which consists of a guess about the solution, which is generated in a nondeterministic way, while the second phase consists of a deterministic algorithm that verifies whether the guess is a solution to the problem.[3] The complexity class P (all problems solvable, deterministically, in polynomial time) is contained in NP (problems where solutions can be verified in polynomial time), because if a problem is solvable in polynomial time, then a solution is also verifiable in polynomial time by simply solving the problem. It is widely believed, but not proven, that P is smaller than NP; in other words, that decision problems exist that cannot be solved in polynomial time even though their solutions can be checked in polynomial time. The hardest problems in NP are called NP-complete problems. An algorithm solving such a problem in polynomial time is also able to solve any other NP problem in polynomial time. If P were in fact equal to NP, then a polynomial-time algorithm would exist for solving NP-complete problems, and by corollary, all NP problems.[4] The complexity class NP is related to the complexity class co-NP, for which the answer "no" can be verified in polynomial time.
Whether or not NP = co-NP is another outstanding question in complexity theory.[5] The complexity class NP can be defined in terms of NTIME as follows: NP = ⋃_{k∈ℕ} NTIME(n^k), where NTIME(n^k) is the set of decision problems that can be solved by a nondeterministic Turing machine in O(n^k) time. Equivalently, NP can be defined using deterministic Turing machines as verifiers. A language L is in NP if and only if there exist polynomials p and q, and a deterministic Turing machine M, such that for all x and y, M runs in time p(|x|) on input (x, y); for all x in L, there exists a string y of length q(|x|) such that M(x, y) = 1; and for all x not in L and all strings y of length q(|x|), M(x, y) = 0. Many computer science problems are contained in NP, like decision versions of many search and optimization problems. In order to explain the verifier-based definition of NP, consider the subset sum problem: assume that we are given some integers, {−7, −3, −2, 5, 8}, and we wish to know whether some of these integers sum up to zero. Here the answer is "yes", since the integers {−3, −2, 5} correspond to the sum (−3) + (−2) + 5 = 0. To answer whether some of the integers add to zero we can create an algorithm that obtains all the possible subsets. As the number of integers that we feed into the algorithm becomes larger, both the number of subsets and the computation time grow exponentially. But notice that if we are given a particular subset, we can efficiently verify whether the subset sum is zero, by summing the integers of the subset. If the sum is zero, that subset is a proof or witness that the answer is "yes". An algorithm that verifies whether a given subset has sum zero is a verifier. Clearly, summing the integers of a subset can be done in polynomial time, and the subset sum problem is therefore in NP. The above example can be generalized for any decision problem. Given any instance I of problem Π and witness W, if there exists a verifier V so that given the ordered pair (I, W) as input, V returns "yes" in polynomial time if the witness proves that the answer is "yes", and "no" in polynomial time otherwise, then Π is in NP.
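The subset-sum verifier described above is easy to write down; a sketch (the function name is our own):

```python
def verify_subset_sum(instance, witness):
    """Polynomial-time verifier: accept iff the witness is a non-empty
    sub-multiset of the instance whose elements sum to zero."""
    remaining = list(instance)
    for x in witness:
        if x not in remaining:
            return False          # witness uses a number not in the instance
        remaining.remove(x)
    return len(witness) > 0 and sum(witness) == 0

instance = [-7, -3, -2, 5, 8]
print(verify_subset_sum(instance, [-3, -2, 5]))   # True: (-3)+(-2)+5 = 0
print(verify_subset_sum(instance, [-7, 8]))       # False: sums to 1
```

Each check here is a single pass over the input, so the verifier runs in polynomial time, which is exactly what membership in NP requires.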
The "no"-answer version of this problem is stated as: "given a finite set of integers, does every non-empty subset have a nonzero sum?". The verifier-based definition of NP does not require an efficient verifier for the "no"-answers. The class of problems with such verifiers for the "no"-answers is called co-NP. In fact, it is an open question whether all problems in NP also have verifiers for the "no"-answers and thus are in co-NP. In some literature the verifier is called the "certifier", and the witness the "certificate".[2] Equivalent to the verifier-based definition is the following characterization: NP is the class of decision problems solvable by a nondeterministic Turing machine that runs in polynomial time. That is to say, a decision problem Π is in NP whenever Π is recognized by some polynomial-time nondeterministic Turing machine M with an existential acceptance condition, meaning that w ∈ Π if and only if some computation path of M(w) leads to an accepting state. This definition is equivalent to the verifier-based definition because a nondeterministic Turing machine could solve an NP problem in polynomial time by nondeterministically selecting a certificate and running the verifier on the certificate. Similarly, if such a machine exists, then a polynomial-time verifier can naturally be constructed from it. In this light, we can define co-NP dually as the class of decision problems recognizable by polynomial-time nondeterministic Turing machines with an existential rejection condition. Since an existential rejection condition is exactly the same thing as a universal acceptance condition, we can understand the NP vs. co-NP question as asking whether the existential and universal acceptance conditions have the same expressive power for the class of polynomial-time nondeterministic Turing machines. NP is closed under union, intersection, concatenation, Kleene star and reversal.
It is not known whether NP is closed under complement (this question is the so-called "NP versus co-NP" question). Because of the many important problems in this class, there have been extensive efforts to find polynomial-time algorithms for problems in NP. However, there remain a large number of problems in NP that defy such attempts, seeming to require super-polynomial time. Whether these problems are not decidable in polynomial time is one of the greatest open questions in computer science (see the P versus NP ("P = NP") problem for an in-depth discussion). An important notion in this context is the set of NP-complete decision problems, which is a subset of NP and might be informally described as the "hardest" problems in NP. If there is a polynomial-time algorithm for even one of them, then there is a polynomial-time algorithm for all the problems in NP. Because of this, and because dedicated research has failed to find a polynomial algorithm for any NP-complete problem, once a problem has been proven to be NP-complete, this is widely regarded as a sign that a polynomial algorithm for this problem is unlikely to exist. However, in practical uses, instead of spending computational resources looking for an optimal solution, a good enough (but potentially suboptimal) solution may often be found in polynomial time. Also, the real-life applications of some problems are easier than their theoretical equivalents. The two definitions of NP (as the class of problems solvable by a nondeterministic Turing machine (TM) in polynomial time, and as the class of problems verifiable by a deterministic Turing machine in polynomial time) are equivalent. The proof is described by many textbooks, for example, Sipser's Introduction to the Theory of Computation, section 7.3. To show this, first suppose we have a deterministic verifier.
A non-deterministic machine can simply nondeterministically run the verifier on all possible proof strings (this requires only polynomially many steps because it can nondeterministically choose the next character in the proof string in each step, and the length of the proof string must be polynomially bounded). If any proof is valid, some path will accept; if no proof is valid, the string is not in the language and it will reject. Conversely, suppose we have a non-deterministic TM called A accepting a given language L. At each of its polynomially many steps, the machine's computation tree branches in at most a finite number of directions. There must be at least one accepting path, and the string describing this path is the proof supplied to the verifier. The verifier can then deterministically simulate A, following only the accepting path, and verifying that it accepts at the end. If A rejects the input, there is no accepting path, and the verifier will always reject. NP contains all problems in P, since one can verify any instance of the problem by simply ignoring the proof and solving it. NP is contained in PSPACE; to show this, it suffices to construct a PSPACE machine that loops over all proof strings and feeds each one to a polynomial-time verifier. Since a polynomial-time machine can only read polynomially many bits, it cannot use more than polynomial space, nor can it read a proof string occupying more than polynomial space (so we do not have to consider proofs longer than this). NP is also contained in EXPTIME, since the same algorithm operates in exponential time. co-NP contains those problems that have a simple proof for "no" instances, sometimes called counterexamples. For example, primality testing trivially lies in co-NP, since one can refute the primality of an integer by merely supplying a nontrivial factor. NP and co-NP together form the first level in the polynomial hierarchy, higher only than P. NP is defined using only deterministic machines.
If we permit the verifier to be probabilistic (this, however, is not necessarily a BPP machine[6]), we get the class MA, solvable using an Arthur–Merlin protocol with no communication from Arthur to Merlin. The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP, or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP and PH ⊆ BPP.[7] NP is a class of decision problems; the analogous class of function problems is FNP. The only known strict inclusions come from the time hierarchy theorem and the space hierarchy theorem, and respectively they are NP ⊊ NEXPTIME and NP ⊊ EXPSPACE. In terms of descriptive complexity theory, NP corresponds precisely to the set of languages definable by existential second-order logic (Fagin's theorem). NP can be seen as a very simple type of interactive proof system, where the prover comes up with the proof certificate and the verifier is a deterministic polynomial-time machine that checks it. It is complete because the right proof string will make it accept if there is one, and it is sound because the verifier cannot accept if there is no acceptable proof string. A major result of complexity theory is that NP can be characterized as the problems solvable by probabilistically checkable proofs where the verifier uses O(log n) random bits and examines only a constant number of bits of the proof string (the class PCP(log n, 1)). More informally, this means that the NP verifier described above can be replaced with one that just "spot-checks" a few places in the proof string, and using a limited number of coin flips can determine the correct answer with high probability. This allows several results about the hardness of approximation algorithms to be proven. NP contains all problems in P, denoted P ⊆ NP.
Given a certificate for a problem in P, we can ignore the certificate and just solve the problem in polynomial time. The decision problem version of the integer factorization problem is in NP: given integers n and k, is there a factor f with 1 < f < k and f dividing n?[8] Every NP-complete problem is in NP, for example the Boolean satisfiability problem (SAT), where we want to know whether or not a certain formula in propositional logic with Boolean variables is true for some value of the variables.[9] The decision version of the travelling salesman problem is in NP. Given an input matrix of distances between n cities, the problem is to determine if there is a route visiting all cities with total distance less than k. A proof can simply be a list of the cities. Then verification can clearly be done in polynomial time: it simply adds the matrix entries corresponding to the paths between the cities. A nondeterministic Turing machine can find such a route as follows: one can think of each guess as "forking" a new copy of the Turing machine to follow each of the possible paths forward, and if at least one machine finds a route of distance less than k, that machine accepts the input. (Equivalently, this can be thought of as a single Turing machine that always guesses correctly.) A binary search on the range of possible distances can convert the decision version of Travelling Salesman to the optimization version, by calling the decision version repeatedly (a polynomial number of times).[10][8] The subgraph isomorphism problem of determining whether graph G contains a subgraph that is isomorphic to graph H is also in NP.[11]
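The travelling-salesman verifier sketched above checks a claimed route against the distance matrix in polynomial time. A small sketch (the cities and distances are made up):

```python
def verify_tour(dist, route, k):
    """Polynomial-time verifier for the TSP decision problem: accept iff
    `route` visits every city exactly once and the closed tour has total
    distance less than k."""
    n = len(dist)
    if sorted(route) != list(range(n)):
        return False   # not a permutation of all cities
    total = sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))
    return total < k

# Four hypothetical cities with symmetric distances.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

print(verify_tour(dist, [0, 1, 3, 2], 25))   # True: tour length 2+4+3+9 = 18 < 25
print(verify_tour(dist, [0, 2, 1, 3], 25))   # False: 9+6+4+10 = 29, not < 25
```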
https://en.wikipedia.org/wiki/Nondeterministic_polynomial_time
Metacompilation is a computation which involves metasystem transitions (MST) from a computing machine M to a metamachine M' which controls, analyzes and imitates the work of M. Semantics-based program transformation, such as partial evaluation and supercompilation (SCP), is metacomputation. Metasystem transitions may be repeated, as when a program transformer gets transformed itself. In this manner MST hierarchies of any height can be formed. The Fox[clarification needed] paper reviews one strain of research which was started in Russia by Valentin Turchin's REFAL system in the late 1960s and early 1970s and became known for the development of supercompilation as a distinct method of program transformation. After a brief description of the history of this research line, the paper concentrates on those results and problems where supercompilation is combined with repeated metasystem transitions.
https://en.wikipedia.org/wiki/Metacompilation
In differential geometry, the four-gradient (or 4-gradient) ∂ is the four-vector analogue of the gradient ∇⃗ from vector calculus. In special relativity and in quantum mechanics, the four-gradient is used to define the properties and relations between the various physical four-vectors and tensors. This article uses the (+ − − −) metric signature. SR and GR are abbreviations for special relativity and general relativity respectively. c indicates the speed of light in vacuum. η_{μν} = diag[1, −1, −1, −1] is the flat spacetime metric of SR. There are alternate ways of writing four-vector expressions in physics: a Latin tensor index ranges in {1, 2, 3} and represents a 3-space vector, e.g. A^i = (a¹, a², a³) = a⃗; a Greek tensor index ranges in {0, 1, 2, 3} and represents a 4-vector, e.g. A^μ = (a⁰, a¹, a², a³) = A. In SR physics, one typically uses a concise blend, e.g. A = (a⁰, a⃗), where a⁰ represents the temporal component and a⃗ represents the spatial 3-component. Tensors in SR are typically 4D (m, n)-tensors, with m upper indices and n lower indices, with the 4D indicating 4 dimensions = the number of values each index can take.
The tensor contraction used in theMinkowski metriccan go to either side (seeEinstein notation):[1]: 56, 151–152, 158–161A⋅B=AμημνBν=AνBν=AμBμ=∑μ=03aμbμ=a0b0−∑i=13aibi=a0b0−a→⋅b→{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }\eta _{\mu \nu }B^{\nu }=A_{\nu }B^{\nu }=A^{\mu }B_{\mu }=\sum _{\mu =0}^{3}a^{\mu }b_{\mu }=a^{0}b^{0}-\sum _{i=1}^{3}a^{i}b^{i}=a^{0}b^{0}-{\vec {\mathbf {a} }}\cdot {\vec {\mathbf {b} }}} The 4-gradient covariant components compactly written infour-vectorandRicci calculusnotation are:[2][3]: 16∂∂Xμ=(∂0,∂1,∂2,∂3)=(∂0,∂i)=(1c∂∂t,∇→)=(∂tc,∇→)=(∂tc,∂x,∂y,∂z)=∂μ=,μ{\displaystyle {\dfrac {\partial }{\partial X^{\mu }}}=\left(\partial _{0},\partial _{1},\partial _{2},\partial _{3}\right)=\left(\partial _{0},\partial _{i}\right)=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},\partial _{x},\partial _{y},\partial _{z}\right)=\partial _{\mu }={}_{,\mu }} Thecommain the last part above,μ{\displaystyle {}_{,\mu }}implies thepartial differentiationwith respect to 4-positionXμ{\displaystyle X^{\mu }}. The contravariant components are:[2][3]: 16∂=∂α=ηαβ∂β=(∂0,∂1,∂2,∂3)=(∂0,∂i)=(1c∂∂t,−∇→)=(∂tc,−∇→)=(∂tc,−∂x,−∂y,−∂z){\displaystyle {\boldsymbol {\partial }}=\partial ^{\alpha }=\eta ^{\alpha \beta }\partial _{\beta }=\left(\partial ^{0},\partial ^{1},\partial ^{2},\partial ^{3}\right)=\left(\partial ^{0},\partial ^{i}\right)=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},-{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},-\partial _{x},-\partial _{y},-\partial _{z}\right)} Alternative symbols to∂α{\displaystyle \partial _{\alpha }}are◻{\displaystyle \Box }andD(although◻{\displaystyle \Box }can also signify∂μ∂μ{\displaystyle \partial ^{\mu }\partial _{\mu }}as thed'Alembert operator). 
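The contraction rule above is easy to check numerically. A minimal sketch in Python (numpy), where the `minkowski_dot` helper and the example vectors are purely illustrative:

```python
import numpy as np

# Minimal sketch of the Minkowski contraction A·B = A^μ η_{μν} B^ν in the
# (+,-,-,-) signature; `minkowski_dot` and the example vectors are illustrative.
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # flat spacetime metric of SR

def minkowski_dot(A, B):
    """Contract two contravariant 4-vectors with the metric eta."""
    return A @ eta @ B

A = np.array([2.0, 1.0, 0.0, 3.0])
B = np.array([5.0, 4.0, 2.0, 1.0])

# equals a0*b0 - a⃗·b⃗ = 10 - 7 = 3
assert np.isclose(minkowski_dot(A, B), A[0] * B[0] - A[1:] @ B[1:])
```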
In GR, one must use the more generalmetric tensorgαβ{\displaystyle g^{\alpha \beta }}and the tensorcovariant derivative∇μ=;μ{\displaystyle \nabla _{\mu }={}_{;\mu }}(not to be confused with the vector 3-gradient∇→{\displaystyle {\vec {\nabla }}}). The covariant derivative∇ν{\displaystyle \nabla _{\nu }}incorporates the 4-gradient∂ν{\displaystyle \partial _{\nu }}plusspacetimecurvatureeffects via theChristoffel symbolsΓμσν{\displaystyle \Gamma ^{\mu }{}_{\sigma \nu }} Thestrong equivalence principlecan be stated as:[4]: 184 "Any physical law which can be expressed in tensor notation in SR has exactly the same form in a locally inertial frame of a curved spacetime." The 4-gradient commas (,) in SR are simply changed to covariant derivative semi-colons (;) in GR, with the connection between the two usingChristoffel symbols. This is known in relativity physics as the "comma to semi-colon rule". So, for example, ifTμν,μ=0{\displaystyle T^{\mu \nu }{}_{,\mu }=0}in SR, thenTμν;μ=0{\displaystyle T^{\mu \nu }{}_{;\mu }=0}in GR. 
On a (1,0)-tensor or 4-vector this would be:[4]: 136–139∇βVα=∂βVα+VμΓαμβVα;β=Vα,β+VμΓαμβ{\displaystyle {\begin{aligned}\nabla _{\beta }V^{\alpha }&=\partial _{\beta }V^{\alpha }+V^{\mu }\Gamma ^{\alpha }{}_{\mu \beta }\\[0.1ex]V^{\alpha }{}_{;\beta }&=V^{\alpha }{}_{,\beta }+V^{\mu }\Gamma ^{\alpha }{}_{\mu \beta }\end{aligned}}} On a (2,0)-tensor this would be:∇νTμν=∂νTμν+ΓμσνTσν+ΓνσνTμσTμν;ν=Tμν,ν+ΓμσνTσν+ΓνσνTμσ{\displaystyle {\begin{aligned}\nabla _{\nu }T^{\mu \nu }&=\partial _{\nu }T^{\mu \nu }+\Gamma ^{\mu }{}_{\sigma \nu }T^{\sigma \nu }+\Gamma ^{\nu }{}_{\sigma \nu }T^{\mu \sigma }\\T^{\mu \nu }{}_{;\nu }&=T^{\mu \nu }{}_{,\nu }+\Gamma ^{\mu }{}_{\sigma \nu }T^{\sigma \nu }+\Gamma ^{\nu }{}_{\sigma \nu }T^{\mu \sigma }\end{aligned}}} The 4-gradient is used in a number of different ways inspecial relativity(SR): Throughout this article the formulas are all correct for the flat spacetimeMinkowski coordinatesof SR, but have to be modified for the more general curved space coordinates ofgeneral relativity(GR). Divergenceis avector operatorthat produces a signed scalar field giving the quantity of avector field'ssourceat each point. Note that in this metric signature [+,−,−,−] the 4-Gradient has a negative spatial component. It gets canceled when taking the 4D dot product since the Minkowski Metric is Diagonal[+1,−1,−1,−1]. 
The 4-divergence of the4-positionXμ=(ct,x→){\displaystyle X^{\mu }=\left(ct,{\vec {\mathbf {x} }}\right)}gives thedimensionofspacetime:∂⋅X=∂μημνXν=∂νXν=(∂tc,−∇→)⋅(ct,x→)=∂tc(ct)+∇→⋅x→=(∂tt)+(∂xx+∂yy+∂zz)=(1)+(3)=4{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {X} =\partial ^{\mu }\eta _{\mu \nu }X^{\nu }=\partial _{\nu }X^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot (ct,{\vec {x}})={\frac {\partial _{t}}{c}}(ct)+{\vec {\nabla }}\cdot {\vec {x}}=(\partial _{t}t)+(\partial _{x}x+\partial _{y}y+\partial _{z}z)=(1)+(3)=4} The 4-divergence of the4-current densityJμ=(ρc,j→)=ρoUμ=ρoγ(c,u→)=(ρc,ρu→){\displaystyle J^{\mu }=\left(\rho c,{\vec {\mathbf {j} }}\right)=\rho _{o}U^{\mu }=\rho _{o}\gamma \left(c,{\vec {\mathbf {u} }}\right)=\left(\rho c,\rho {\vec {\mathbf {u} }}\right)}gives aconservation law– theconservation of charge:[1]: 103–107∂⋅J=∂μημνJν=∂νJν=(∂tc,−∇→)⋅(ρc,j→)=∂tc(ρc)+∇→⋅j→=∂tρ+∇→⋅j→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {J} =\partial ^{\mu }\eta _{\mu \nu }J^{\nu }=\partial _{\nu }J^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot (\rho c,{\vec {j}})={\frac {\partial _{t}}{c}}(\rho c)+{\vec {\nabla }}\cdot {\vec {j}}=\partial _{t}\rho +{\vec {\nabla }}\cdot {\vec {j}}=0} This means that the time rate of change of the charge density must equal the negative spatial divergence of the current density∂tρ=−∇→⋅j→{\displaystyle \partial _{t}\rho =-{\vec {\nabla }}\cdot {\vec {j}}}. In other words, the charge inside a box cannot just change arbitrarily, it must enter and leave the box via a current. This is acontinuity equation. 
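The first of these 4-divergence computations, ∂·X = 4, can be replayed symbolically. A sketch using Python's sympy, with the covariant gradient written out component by component:

```python
import sympy as sp

# Symbolic check (sympy) that ∂·X = ∂_ν X^ν = 4 for X^ν = (ct, x, y, z),
# with covariant components ∂_ν = (∂_t/c, ∂_x, ∂_y, ∂_z).
t, x, y, z, c = sp.symbols('t x y z c', real=True, positive=True)
X = [c * t, x, y, z]          # contravariant 4-position
coords = [t, x, y, z]
div = sp.diff(X[0], t) / c + sum(sp.diff(X[i], coords[i]) for i in (1, 2, 3))
assert div == 4               # (1) + (3) = the dimension of spacetime
```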
The 4-divergence of the4-number flux(4-dust)Nμ=(nc,n→)=noUμ=noγ(c,u→)=(nc,nu→){\displaystyle N^{\mu }=\left(nc,{\vec {\mathbf {n} }}\right)=n_{o}U^{\mu }=n_{o}\gamma \left(c,{\vec {\mathbf {u} }}\right)=\left(nc,n{\vec {\mathbf {u} }}\right)}is used in particle conservation:[4]: 90–110∂⋅N=∂μημνNν=∂νNν=(∂tc,−∇→)⋅(nc,nu→)=∂tc(nc)+∇→⋅nu→=∂tn+∇→⋅nu→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {N} =\partial ^{\mu }\eta _{\mu \nu }N^{\nu }=\partial _{\nu }N^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot \left(nc,n{\vec {\mathbf {u} }}\right)={\frac {\partial _{t}}{c}}\left(nc\right)+{\vec {\nabla }}\cdot n{\vec {\mathbf {u} }}=\partial _{t}n+{\vec {\nabla }}\cdot n{\vec {\mathbf {u} }}=0} This is aconservation lawfor the particle number density, typically something like baryon number density. The 4-divergence of theelectromagnetic 4-potentialAμ=(ϕc,a→){\textstyle A^{\mu }=\left({\frac {\phi }{c}},{\vec {\mathbf {a} }}\right)}is used in theLorenz gauge condition:[1]: 105–107∂⋅A=∂μημνAν=∂νAν=(∂tc,−∇→)⋅(ϕc,a→)=∂tc(ϕc)+∇→⋅a→=∂tϕc2+∇→⋅a→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {A} =\partial ^{\mu }\eta _{\mu \nu }A^{\nu }=\partial _{\nu }A^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot \left({\frac {\phi }{c}},{\vec {a}}\right)={\frac {\partial _{t}}{c}}\left({\frac {\phi }{c}}\right)+{\vec {\nabla }}\cdot {\vec {a}}={\frac {\partial _{t}\phi }{c^{2}}}+{\vec {\nabla }}\cdot {\vec {a}}=0} This is the equivalent of aconservation lawfor the EM 4-potential. The 4-divergence of the transverse traceless 4D (2,0)-tensorhTTμν{\displaystyle h_{TT}^{\mu \nu }}representing gravitational radiation in the weak-field limit (i.e. freely propagating far from the source). The transverse condition∂⋅hTTμν=∂μhTTμν=0{\displaystyle {\boldsymbol {\partial }}\cdot h_{TT}^{\mu \nu }=\partial _{\mu }h_{TT}^{\mu \nu }=0}is the equivalent of a conservation equation for freely propagating gravitational waves. 
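The particle-number continuity equation ∂_t n + ∇·(n u⃗) = 0 can be illustrated in one spatial dimension: a density profile drifting rigidly at constant speed conserves particle number identically. A sympy sketch, where the profile f is an arbitrary (illustrative) function:

```python
import sympy as sp

# One-dimensional illustration of the continuity equation ∂_t n + ∂_x(n u) = 0:
# a density profile f drifting rigidly at constant speed v conserves particle
# number identically. The profile f is an arbitrary (illustrative) function.
t, x, v = sp.symbols('t x v', real=True)
f = sp.Function('f')
n = f(x - v * t)      # drifting density
flux = v * n          # current density n·u with u = v
conservation = sp.diff(n, t) + sp.diff(flux, x)
assert sp.simplify(conservation) == 0
```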
The 4-divergence of thestress–energy tensorTμν{\displaystyle T^{\mu \nu }}as the conservedNoether currentassociated withspacetimetranslations, gives four conservation laws in SR:[4]: 101–106 Theconservation of energy(temporal direction) and theconservation of linear momentum(3 separate spatial directions).∂⋅Tμν=∂νTμν=Tμν,ν=0μ=(0,0,0,0){\displaystyle {\boldsymbol {\partial }}\cdot T^{\mu \nu }=\partial _{\nu }T^{\mu \nu }=T^{\mu \nu }{}_{,\nu }=0^{\mu }=(0,0,0,0)} It is often written as:∂νTμν=Tμν,ν=0{\displaystyle \partial _{\nu }T^{\mu \nu }=T^{\mu \nu }{}_{,\nu }=0}where it is understood that the single zero is actually a 4-vector zero0μ=(0,0,0,0){\displaystyle 0^{\mu }=(0,0,0,0)}. When the conservation of the stress–energy tensor(∂νTμν=0μ{\displaystyle \partial _{\nu }T^{\mu \nu }=0^{\mu }})for aperfect fluidis combined with the conservation of particle number density (∂⋅N=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {N} =0}), both utilizing the 4-gradient, one can derive therelativistic Euler equations, which influid mechanicsandastrophysicsare a generalization of theEuler equationsthat account for the effects ofspecial relativity. These equations reduce to the classical Euler equations if the fluid 3-space velocity ismuch lessthan the speed of light, the pressure is much less than theenergy density, and the latter is dominated by the rest mass density. In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show thatangular momentum(relativistic angular momentum) is also conserved:∂ν(xαTμν−xμTαν)=(xαTμν−xμTαν),ν=0αμ{\displaystyle \partial _{\nu }\left(x^{\alpha }T^{\mu \nu }-x^{\mu }T^{\alpha \nu }\right)=\left(x^{\alpha }T^{\mu \nu }-x^{\mu }T^{\alpha \nu }\right)_{,\nu }=0^{\alpha \mu }}where this zero is actually a (2,0)-tensor zero. TheJacobian matrixis thematrixof all first-orderpartial derivativesof avector-valued function. 
The 4-gradient∂μ{\displaystyle \partial ^{\mu }}acting on the4-positionXν{\displaystyle X^{\nu }}gives the SRMinkowski spacemetricημν{\displaystyle \eta ^{\mu \nu }}:[3]: 16∂[X]=∂μ[Xν]=Xν,μ=(∂tc,−∇→)[(ct,x→)]=(∂tc,−∂x,−∂y,−∂z)[(ct,x,y,z)],=[∂tcct∂tcx∂tcy∂tcz−∂xct−∂xx−∂xy−∂xz−∂yct−∂yx−∂yy−∂yz−∂zct−∂zx−∂zy−∂zz]=[10000−10000−10000−1]=diag⁡[1,−1,−1,−1]=ημν.{\displaystyle {\begin{aligned}{\boldsymbol {\partial }}[\mathbf {X} ]=\partial ^{\mu }[X^{\nu }]=X^{\nu _{,}\mu }&=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\left[\left(ct,{\vec {x}}\right)\right]=\left({\frac {\partial _{t}}{c}},-\partial _{x},-\partial _{y},-\partial _{z}\right)[(ct,x,y,z)],\\[3pt]&={\begin{bmatrix}{\frac {\partial _{t}}{c}}ct&{\frac {\partial _{t}}{c}}x&{\frac {\partial _{t}}{c}}y&{\frac {\partial _{t}}{c}}z\\-\partial _{x}ct&-\partial _{x}x&-\partial _{x}y&-\partial _{x}z\\-\partial _{y}ct&-\partial _{y}x&-\partial _{y}y&-\partial _{y}z\\-\partial _{z}ct&-\partial _{z}x&-\partial _{z}y&-\partial _{z}z\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}}\\[3pt]&=\operatorname {diag} [1,-1,-1,-1]=\eta ^{\mu \nu }.\end{aligned}}} For the Minkowski metric, the components[ημμ]=1/[ημμ]{\displaystyle \left[\eta ^{\mu \mu }\right]=1/\left[\eta _{\mu \mu }\right]}(μ{\displaystyle \mu }not summed), with non-diagonal components all zero. For the Cartesian Minkowski Metric, this givesημν=ημν=diag⁡[1,−1,−1,−1]{\displaystyle \eta ^{\mu \nu }=\eta _{\mu \nu }=\operatorname {diag} [1,-1,-1,-1]}. Generally,ημν=δμν=diag⁡[1,1,1,1]{\displaystyle \eta _{\mu }^{\nu }=\delta _{\mu }^{\nu }=\operatorname {diag} [1,1,1,1]}, whereδμν{\displaystyle \delta _{\mu }^{\nu }}is the 4DKronecker delta. 
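This Jacobian computation can be reproduced symbolically. A sympy sketch building the 4×4 matrix ∂^μ[X^ν] entry by entry:

```python
import sympy as sp

# Sympy sketch of ∂^μ[X^ν]: the contravariant gradient (∂_t/c, -∂_x, -∂_y, -∂_z)
# applied to X^ν = (ct, x, y, z) reproduces diag(1, -1, -1, -1) = η^{μν}.
t, x, y, z, c = sp.symbols('t x y z c', real=True, positive=True)
X = [c * t, x, y, z]
d_up = [lambda f: sp.diff(f, t) / c,
        lambda f: -sp.diff(f, x),
        lambda f: -sp.diff(f, y),
        lambda f: -sp.diff(f, z)]
M = sp.Matrix(4, 4, lambda mu, nu: d_up[mu](X[nu]))
assert M == sp.diag(1, -1, -1, -1)
```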
The Lorentz transformation is written in tensor form as[4]: 69Xμ′=Λνμ′Xν{\displaystyle X^{\mu '}=\Lambda _{\nu }^{~\mu '}X^{\nu }}and sinceΛνμ′{\displaystyle \Lambda _{\nu }^{\mu '}}are just constants, then∂Xμ′∂Xν=Λνμ′{\displaystyle {\dfrac {\partial X^{\mu '}}{\partial X^{\nu }}}=\Lambda _{\nu }^{\mu '}} Thus, by definition of the 4-gradient∂ν[Xμ′]=(∂∂Xν)[Xμ′]=∂Xμ′∂Xν=Λνμ′{\displaystyle \partial _{\nu }\left[X^{\mu '}\right]=\left({\dfrac {\partial }{\partial X^{\nu }}}\right)\left[X^{\mu '}\right]={\dfrac {\partial X^{\mu '}}{\partial X^{\nu }}}=\Lambda _{\nu }^{\mu '}} This identity is fundamental. Components of the 4-gradient transform according to the inverse of the components of 4-vectors. So the 4-gradient is the "archetypal" one-form. The scalar product of4-velocityUμ{\displaystyle U^{\mu }}with the 4-gradient gives thetotal derivativewith respect toproper timeddτ{\displaystyle {\frac {d}{d\tau }}}:[1]: 58–59U⋅∂=Uμημν∂ν=γ(c,u→)⋅(∂tc,−∇→)=γ(c∂tc+u→⋅∇→)=γ(∂t+dxdt∂x+dydt∂y+dzdt∂z)=γddt=ddτddτ=dXμdXμddτ=dXμdτddXμ=Uμ∂μ=U⋅∂{\displaystyle {\begin{aligned}\mathbf {U} \cdot {\boldsymbol {\partial }}&=U^{\mu }\eta _{\mu \nu }\partial ^{\nu }=\gamma \left(c,{\vec {u}}\right)\cdot \left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)=\gamma \left(c{\frac {\partial _{t}}{c}}+{\vec {u}}\cdot {\vec {\nabla }}\right)=\gamma \left(\partial _{t}+{\frac {dx}{dt}}\partial _{x}+{\frac {dy}{dt}}\partial _{y}+{\frac {dz}{dt}}\partial _{z}\right)=\gamma {\frac {d}{dt}}={\frac {d}{d\tau }}\\{\frac {d}{d\tau }}&={\frac {dX^{\mu }}{dX^{\mu }}}{\frac {d}{d\tau }}={\frac {dX^{\mu }}{d\tau }}{\frac {d}{dX^{\mu }}}=U^{\mu }\partial _{\mu }=\mathbf {U} \cdot {\boldsymbol {\partial }}\end{aligned}}} The fact thatU⋅∂{\displaystyle \mathbf {U} \cdot {\boldsymbol {\partial }}}is aLorentz scalar invariantshows that thetotal derivativewith respect toproper timeddτ{\displaystyle {\frac {d}{d\tau }}}is likewise a Lorentz scalar invariant. 
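The 4-velocity's own invariant square, U·U = c², can be spot-checked numerically. A numpy sketch with an arbitrary illustrative 3-velocity satisfying |u⃗| < c:

```python
import numpy as np

# Numeric spot-check of U·U = c² for the 4-velocity U^μ = γ(c, u⃗); the
# 3-velocity below is an arbitrary illustrative choice with |u⃗| < c.
c = 299_792_458.0
u = np.array([0.30, -0.10, 0.55]) * c
gamma = 1.0 / np.sqrt(1.0 - (u @ u) / c**2)
U = gamma * np.array([c, *u])
eta = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.isclose(U @ eta @ U, c**2)
```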
So, for example, the4-velocityUμ{\displaystyle U^{\mu }}is the derivative of the4-positionXμ{\displaystyle X^{\mu }}with respect to proper time:ddτX=(U⋅∂)X=U⋅∂[X]=Uα⋅ημν=Uαηανημν=Uαδαμ=Uμ=U{\displaystyle {\frac {d}{d\tau }}\mathbf {X} =(\mathbf {U} \cdot {\boldsymbol {\partial }})\mathbf {X} =\mathbf {U} \cdot {\boldsymbol {\partial }}[\mathbf {X} ]=U^{\alpha }\cdot \eta ^{\mu \nu }=U^{\alpha }\eta _{\alpha \nu }\eta ^{\mu \nu }=U^{\alpha }\delta _{\alpha }^{\mu }=U^{\mu }=\mathbf {U} }orddτX=γddtX=γddt(ct,x→)=γ(ddtct,ddtx→)=γ(c,u→)=U{\displaystyle {\frac {d}{d\tau }}\mathbf {X} =\gamma {\frac {d}{dt}}\mathbf {X} =\gamma {\frac {d}{dt}}\left(ct,{\vec {x}}\right)=\gamma \left({\frac {d}{dt}}ct,{\frac {d}{dt}}{\vec {x}}\right)=\gamma \left(c,{\vec {u}}\right)=\mathbf {U} } Another example, the4-accelerationAμ{\displaystyle A^{\mu }}is the proper-time derivative of the4-velocityUμ{\displaystyle U^{\mu }}:ddτU=(U⋅∂)U=U⋅∂[U]=Uαηαμ∂μ[Uν]=Uαηαμ[∂tcγc∂tcγu→−∇→γc−∇→γu→]=Uα[∂tcγc00∇→γu→]=γ(c∂tcγc,u→⋅∇γu→)=γ(c∂tγ,ddt[γu→])=γ(cγ˙,γ˙u→+γu→˙)=A{\displaystyle {\begin{aligned}{\frac {d}{d\tau }}\mathbf {U} &=(\mathbf {U} \cdot {\boldsymbol {\partial }})\mathbf {U} =\mathbf {U} \cdot {\boldsymbol {\partial }}[\mathbf {U} ]=U^{\alpha }\eta _{\alpha \mu }\partial ^{\mu }\left[U^{\nu }\right]\\&=U^{\alpha }\eta _{\alpha \mu }{\begin{bmatrix}{\frac {\partial _{t}}{c}}\gamma c&{\frac {\partial _{t}}{c}}\gamma {\vec {u}}\\-{\vec {\nabla }}\gamma c&-{\vec {\nabla }}\gamma {\vec {u}}\end{bmatrix}}=U^{\alpha }{\begin{bmatrix}\ {\frac {\partial _{t}}{c}}\gamma c&0\\0&{\vec {\nabla }}\gamma {\vec {u}}\end{bmatrix}}\\[3pt]&=\gamma \left(c{\frac {\partial _{t}}{c}}\gamma c,{\vec {u}}\cdot \nabla \gamma {\vec {u}}\right)=\gamma \left(c\partial _{t}\gamma ,{\frac {d}{dt}}\left[\gamma {\vec {u}}\right]\right)=\gamma \left(c{\dot {\gamma }},{\dot {\gamma }}{\vec {u}}+\gamma {\dot {\vec {u}}}\right)=\mathbf {A} \end{aligned}}} orddτU=γddt(γc,γu→)=γ(ddt[γc],ddt[γu→])=γ(cγ˙,γ˙u→+γu→˙)=A{\displaystyle 
{\frac {d}{d\tau }}\mathbf {U} =\gamma {\frac {d}{dt}}(\gamma c,\gamma {\vec {u}})=\gamma \left({\frac {d}{dt}}[\gamma c],{\frac {d}{dt}}[\gamma {\vec {u}}]\right)=\gamma (c{\dot {\gamma }},{\dot {\gamma }}{\vec {u}}+\gamma {\dot {\vec {u}}})=\mathbf {A} } The Faradayelectromagnetic tensorFμν{\displaystyle F^{\mu \nu }}is a mathematical object that describes the electromagnetic field inspacetimeof a physical system.[1]: 101–128[5]:314[3]: 17–18[6]: 29–30[7]: 4 Applying the 4-gradient to make an antisymmetric tensor, one gets:Fμν=∂μAν−∂νAμ=[0−Ex/c−Ey/c−Ez/cEx/c0−BzByEy/cBz0−BxEz/c−ByBx0]{\displaystyle F^{\mu \nu }=\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }={\begin{bmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}}where: By applying the 4-gradient again, and defining the4-current densityasJβ=J=(cρ,j→){\displaystyle J^{\beta }=\mathbf {J} =\left(c\rho ,{\vec {\mathbf {j} }}\right)}one can derive the tensor form of theMaxwell equations:∂αFαβ=μoJβ{\displaystyle \partial _{\alpha }F^{\alpha \beta }=\mu _{o}J^{\beta }}∂γFαβ+∂αFβγ+∂βFγα=0αβγ{\displaystyle \partial _{\gamma }F_{\alpha \beta }+\partial _{\alpha }F_{\beta \gamma }+\partial _{\beta }F_{\gamma \alpha }=0_{\alpha \beta \gamma }}where the second line is a version of theBianchi identity(Jacobi identity). Awavevectoris avectorwhich helps describe awave. 
Like any vector, it has amagnitude and direction, both of which are important: Its magnitude is either thewavenumberorangular wavenumberof the wave (inversely proportional to thewavelength), and its direction is ordinarily the direction ofwave propagation The4-wavevectorKμ{\displaystyle K^{\mu }}is the 4-gradient of the negative phaseΦ{\displaystyle \Phi }(or the negative 4-gradient of the phase) of a wave in Minkowski Space:[6]: 387Kμ=K=(ωc,k→)=∂[−Φ]=−∂[Φ]{\displaystyle K^{\mu }=\mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)={\boldsymbol {\partial }}[-\Phi ]=-{\boldsymbol {\partial }}[\Phi ]} This is mathematically equivalent to the definition of thephaseof awave(or more specifically aplane wave):K⋅X=ωt−k→⋅x→=−Φ{\displaystyle \mathbf {K} \cdot \mathbf {X} =\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}=-\Phi } where 4-positionX=(ct,x→){\displaystyle \mathbf {X} =\left(ct,{\vec {\mathbf {x} }}\right)},ω{\displaystyle \omega }is the temporal angular frequency,k→{\displaystyle {\vec {\mathbf {k} }}}is the spatial 3-space wavevector, andΦ{\displaystyle \Phi }is the Lorentz scalar invariant phase. 
∂[K⋅X]=∂[ωt−k→⋅x→]=(∂tc,−∇)[ωt−k→⋅x→]=(∂tc[ωt−k→⋅x→],−∇[ωt−k→⋅x→])=(∂tc[ωt],−∇[−k→⋅x→])=(ωc,k→)=K{\displaystyle \partial [\mathbf {K} \cdot \mathbf {X} ]=\partial \left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]=\left({\frac {\partial _{t}}{c}},-\nabla \right)\left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]=\left({\frac {\partial _{t}}{c}}\left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right],-\nabla \left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]\right)=\left({\frac {\partial _{t}}{c}}[\omega t],-\nabla \left[-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]\right)=\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)=\mathbf {K} }with the assumption that the plane waveω{\displaystyle \omega }andk→{\displaystyle {\vec {\mathbf {k} }}}are not explicit functions oft{\displaystyle t}orx→{\displaystyle {\vec {\mathbf {x} }}}. The explicit form of an SR plane waveΨn(X){\displaystyle \Psi _{n}(\mathbf {X} )}can be written as:[7]: 9 Ψn(X)=Ane−i(Kn⋅X)=Anei(Φn){\displaystyle \Psi _{n}(\mathbf {X} )=A_{n}e^{-i(\mathbf {K_{n}} \cdot \mathbf {X} )}=A_{n}e^{i(\Phi _{n})}}whereAn{\displaystyle A_{n}}is a (possiblycomplex) amplitude. 
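The computation above can be replayed symbolically: the contravariant 4-gradient of the plane-wave phase K·X = ωt − k⃗·x⃗ recovers K^μ = (ω/c, k⃗). A sympy sketch with illustrative symbol names:

```python
import sympy as sp

# The contravariant 4-gradient (∂_t/c, -∇) applied to the plane-wave phase
# K·X = ωt − k⃗·x⃗ recovers K^μ = (ω/c, k⃗). Symbol names are illustrative.
t, x, y, z, c = sp.symbols('t x y z c', real=True, positive=True)
w, kx, ky, kz = sp.symbols('omega k_x k_y k_z', real=True)
phase = w * t - (kx * x + ky * y + kz * z)
grad_up = [sp.diff(phase, t) / c,
           -sp.diff(phase, x), -sp.diff(phase, y), -sp.diff(phase, z)]
assert grad_up == [w / c, kx, ky, kz]
```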
A general waveΨ(X){\displaystyle \Psi (\mathbf {X} )}would be thesuperpositionof multiple plane waves:Ψ(X)=∑n[Ψn(X)]=∑n[Ane−i(Kn⋅X)]=∑n[Anei(Φn)]{\displaystyle \Psi (\mathbf {X} )=\sum _{n}[\Psi _{n}(\mathbf {X} )]=\sum _{n}\left[A_{n}e^{-i(\mathbf {K_{n}} \cdot \mathbf {X} )}\right]=\sum _{n}\left[A_{n}e^{i(\Phi _{n})}\right]} Again using the 4-gradient,∂[Ψ(X)]=∂[Ae−i(K⋅X)]=−iK[Ae−i(K⋅X)]=−iK[Ψ(X)]{\displaystyle \partial [\Psi (\mathbf {X} )]=\partial \left[Ae^{-i(\mathbf {K} \cdot \mathbf {X} )}\right]=-i\mathbf {K} \left[Ae^{-i(\mathbf {K} \cdot \mathbf {X} )}\right]=-i\mathbf {K} [\Psi (\mathbf {X} )]}or∂=−iK{\displaystyle {\boldsymbol {\partial }}=-i\mathbf {K} }which is the 4-gradient version ofcomplex-valuedplane waves In special relativity, electromagnetism and wave theory, the d'Alembert operator, also called the d'Alembertian or the wave operator, is the Laplace operator of Minkowski space. The operator is named after French mathematician and physicist Jean le Rond d'Alembert. The square of∂{\displaystyle {\boldsymbol {\partial }}}is the 4-Laplacian, which is called thed'Alembert operator:[5]:300[3]: 17‒18[6]: 41[7]: 4 ∂⋅∂=∂μ⋅∂ν=∂μημν∂ν=∂ν∂ν=1c2∂2∂t2−∇→2=(∂tc)2−∇→2.{\displaystyle {\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}=\partial ^{\mu }\cdot \partial ^{\nu }=\partial ^{\mu }\eta _{\mu \nu }\partial ^{\nu }=\partial _{\nu }\partial ^{\nu }={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-{\vec {\nabla }}^{2}=\left({\frac {\partial _{t}}{c}}\right)^{2}-{\vec {\nabla }}^{2}.} As it is thedot productof two 4-vectors, the d'Alembertian is aLorentz invariantscalar. Occasionally, in analogy with the 3-dimensional notation, the symbols◻{\displaystyle \Box }and◻2{\displaystyle \Box ^{2}}are used for the 4-gradient and d'Alembertian respectively. More commonly however, the symbol◻{\displaystyle \Box }is reserved for the d'Alembertian. 
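As a concrete check, applying the d'Alembertian to a complex plane wave returns the wave multiplied by the scalar −(K·K). A sympy sketch (symbols illustrative):

```python
import sympy as sp

# The d'Alembertian (∂_t/c)² − ∇² applied to the complex plane wave
# ψ = exp(−i(ωt − k⃗·x⃗)) returns −(K·K)ψ, with K·K = ω²/c² − |k⃗|².
t, x, y, z, c = sp.symbols('t x y z c', real=True, positive=True)
w, kx, ky, kz = sp.symbols('omega k_x k_y k_z', real=True)
psi = sp.exp(-sp.I * (w * t - kx * x - ky * y - kz * z))
box_psi = (sp.diff(psi, t, 2) / c**2
           - sp.diff(psi, x, 2) - sp.diff(psi, y, 2) - sp.diff(psi, z, 2))
K_dot_K = w**2 / c**2 - (kx**2 + ky**2 + kz**2)
assert sp.simplify(box_psi + K_dot_K * psi) == 0
```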
Some examples of the 4-gradient as used in the d'Alembertian follow: In theKlein–Gordonrelativistic quantum wave equation for spin-0 particles (ex.Higgs boson):[(∂⋅∂)+(m0cℏ)2]ψ=[(∂t2c2−∇→2)+(m0cℏ)2]ψ=0{\displaystyle \left[({\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }})+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =\left[\left({\frac {\partial _{t}^{2}}{c^{2}}}-{\vec {\nabla }}^{2}\right)+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} In thewave equationfor theelectromagnetic field(usingLorenz gauge(∂⋅A)=(∂μAμ)=0{\displaystyle ({\boldsymbol {\partial }}\cdot \mathbf {A} )=\left(\partial _{\mu }A^{\mu }\right)=0}): where: In thewave equationof agravitational wave(using a similarLorenz gauge(∂μhTTμν)=0{\displaystyle \left(\partial _{\mu }h_{TT}^{\mu \nu }\right)=0})[6]: 274–322(∂⋅∂)hTTμν=0{\displaystyle ({\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }})h_{TT}^{\mu \nu }=0}wherehTTμν{\displaystyle h_{TT}^{\mu \nu }}is the transverse traceless 2-tensor representing gravitational radiation in the weak-field limit (i.e. freely propagating far from the source). Further conditions onhTTμν{\displaystyle h_{TT}^{\mu \nu }}are: In the 4-dimensional version ofGreen's function:(∂⋅∂)G[X−X′]=δ(4)[X−X′]{\displaystyle ({\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }})G\left[\mathbf {X} -\mathbf {X'} \right]=\delta ^{(4)}\left[\mathbf {X} -\mathbf {X'} \right]}where the 4DDelta functionis:δ(4)[X]=1(2π)4∫d4Ke−i(K⋅X){\displaystyle \delta ^{(4)}[\mathbf {X} ]={\frac {1}{(2\pi )^{4}}}\int d^{4}\mathbf {K} e^{-i(\mathbf {K} \cdot \mathbf {X} )}} Invector calculus, thedivergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a result that relates the flow (that is,flux) of avector fieldthrough asurfaceto the behavior of the vector field inside the surface. 
More precisely, the divergence theorem states that the outward flux of a vector field through a closed surface is equal to the volume integral of the divergence over the region inside the surface. Intuitively, it states that the sum of all sources minus the sum of all sinks gives the net flow out of a region. In vector calculus, and more generally differential geometry, Stokes' theorem (also called the generalized Stokes' theorem) is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. ∫Ωd4X(∂μVμ)=∮∂ΩdS(VμNμ){\displaystyle \int _{\Omega }d^{4}X\left(\partial _{\mu }V^{\mu }\right)=\oint _{\partial \Omega }dS\left(V^{\mu }N_{\mu }\right)} or ∫Ωd4X(∂⋅V)=∮∂ΩdS(V⋅N){\displaystyle \int _{\Omega }d^{4}X\left({\boldsymbol {\partial }}\cdot \mathbf {V} \right)=\oint _{\partial \Omega }dS\left(\mathbf {V} \cdot \mathbf {N} \right)} where The Hamilton–Jacobi equation (HJE) is a formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics. The Hamilton–Jacobi equation is particularly useful in identifying conserved quantities for mechanical systems, which may be possible even when the mechanical problem itself cannot be solved completely. The HJE is also the only formulation of mechanics in which the motion of a particle can be represented as a wave.
In this sense, the HJE fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the 18th century) of finding an analogy between the propagation of light and the motion of a particle The generalized relativistic momentumPT{\displaystyle \mathbf {P_{T}} }of a particle can be written as[1]: 93–96PT=P+qA{\displaystyle \mathbf {P_{T}} =\mathbf {P} +q\mathbf {A} }whereP=(Ec,p→){\displaystyle \mathbf {P} =\left({\frac {E}{c}},{\vec {\mathbf {p} }}\right)}andA=(ϕc,a→){\displaystyle \mathbf {A} =\left({\frac {\phi }{c}},{\vec {\mathbf {a} }}\right)} This is essentially the 4-total momentumPT=(ETc,pT→){\displaystyle \mathbf {P_{T}} =\left({\frac {E_{T}}{c}},{\vec {\mathbf {p_{T}} }}\right)}of the system; atest particlein afieldusing theminimal couplingrule. There is the inherent momentum of the particleP{\displaystyle \mathbf {P} }, plus momentum due to interaction with the EM 4-vector potentialA{\displaystyle \mathbf {A} }via the particle chargeq{\displaystyle q}. The relativisticHamilton–Jacobi equationis obtained by setting the total momentum equal to the negative 4-gradient of theactionS{\displaystyle S}.PT=−∂[S]=(ETc,pT→)=(Hc,pT→)=−∂[S]=−(∂tc,−∇→)[S]{\displaystyle \mathbf {P_{T}} =-{\boldsymbol {\partial }}[S]=\left({\frac {E_{T}}{c}},{\vec {\mathbf {p_{T}} }}\right)=\left({\frac {H}{c}},{\vec {\mathbf {p_{T}} }}\right)=-{\boldsymbol {\partial }}[S]=-\left({\frac {\partial _{t}}{c}},-{\vec {\boldsymbol {\nabla }}}\right)[S]} The temporal component gives:ET=H=−∂t[S]{\displaystyle E_{T}=H=-\partial _{t}[S]} The spatial components give:pT→=∇→[S]{\displaystyle {\vec {\mathbf {p_{T}} }}={\vec {\boldsymbol {\nabla }}}[S]} whereH{\displaystyle H}is the Hamiltonian. 
This is actually related to the 4-wavevector being equal the negative 4-gradient of the phase from above.Kμ=K=(ωc,k→)=−∂[Φ]{\displaystyle K^{\mu }=\mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)=-{\boldsymbol {\partial }}[\Phi ]} To get the HJE, one first uses the Lorentz scalar invariant rule on the 4-momentum:P⋅P=(m0c)2{\displaystyle \mathbf {P} \cdot \mathbf {P} =(m_{0}c)^{2}} But from theminimal couplingrule:P=PT−qA{\displaystyle \mathbf {P} =\mathbf {P_{T}} -q\mathbf {A} } So:(PT−qA)⋅(PT−qA)=(PT−qA)2=(m0c)2⇒(−∂[S]−qA)2=(m0c)2{\displaystyle {\begin{aligned}\left(\mathbf {P_{T}} -q\mathbf {A} \right)\cdot \left(\mathbf {P_{T}} -q\mathbf {A} \right)=\left(\mathbf {P_{T}} -q\mathbf {A} \right)^{2}&=\left(m_{0}c\right)^{2}\\\Rightarrow \left(-{\boldsymbol {\partial }}[S]-q\mathbf {A} \right)^{2}&=\left(m_{0}c\right)^{2}\end{aligned}}} Breaking into the temporal and spatial components:(−∂t[S]c−qϕc)2−(∇[S]−qa)2=(m0c)2⇒(∇[S]−qa)2−1c2(−∂t[S]−qϕ)2+(m0c)2=0⇒(∇[S]−qa)2−1c2(∂t[S]+qϕ)2+(m0c)2=0{\displaystyle {\begin{aligned}&&\left(-{\frac {\partial _{t}[S]}{c}}-{\frac {q\phi }{c}}\right)^{2}-({\boldsymbol {\nabla }}[S]-q\mathbf {a} )^{2}&=(m_{0}c)^{2}\\&\Rightarrow &({\boldsymbol {\nabla }}[S]-q\mathbf {a} )^{2}-{\frac {1}{c^{2}}}(-\partial _{t}[S]-q\phi )^{2}+(m_{0}c)^{2}&=0\\&\Rightarrow &({\boldsymbol {\nabla }}[S]-q\mathbf {a} )^{2}-{\frac {1}{c^{2}}}(\partial _{t}[S]+q\phi )^{2}+(m_{0}c)^{2}&=0\end{aligned}}} where the final is the relativisticHamilton–Jacobi equation. The 4-gradient is connected withquantum mechanics. 
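A free-particle (q = 0) instance of this relativistic Hamilton–Jacobi equation can be verified symbolically: with action S = px − Et and on-shell energy E = √(p²c² + m₀²c⁴), the equation holds identically. A 1-D sympy sketch with illustrative symbols:

```python
import sympy as sp

# Free-particle check (q = 0) of the relativistic Hamilton–Jacobi equation:
# with action S = px − Et and on-shell energy E = sqrt(p²c² + m₀²c⁴), the
# relation (∇S)² − (1/c²)(∂_t S)² + (m₀c)² = 0 holds identically (1-D sketch).
t, x, c, p, m0 = sp.symbols('t x c p m_0', real=True, positive=True)
E = sp.sqrt(p**2 * c**2 + m0**2 * c**4)
S = p * x - E * t
hje = sp.diff(S, x)**2 - sp.diff(S, t)**2 / c**2 + (m0 * c)**2
assert sp.simplify(hje) == 0
```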
The relation between the4-momentumP{\displaystyle \mathbf {P} }and the 4-gradient∂{\displaystyle {\boldsymbol {\partial }}}gives theSchrödinger QM relations.[7]: 3–5P=(Ec,p→)=iℏ∂=iℏ(∂tc,−∇→){\displaystyle \mathbf {P} =\left({\frac {E}{c}},{\vec {p}}\right)=i\hbar {\boldsymbol {\partial }}=i\hbar \left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)} The temporal component gives:E=iℏ∂t{\displaystyle E=i\hbar \partial _{t}} The spatial components give:p→=−iℏ∇→{\displaystyle {\vec {p}}=-i\hbar {\vec {\nabla }}} This can actually be composed of two separate steps. First:[1]: 82–84 P=(Ec,p→)=ℏK=ℏ(ωc,k→){\displaystyle \mathbf {P} =\left({\frac {E}{c}},{\vec {p}}\right)=\hbar \mathbf {K} =\hbar \left({\frac {\omega }{c}},{\vec {k}}\right)}which is the full 4-vector version of: The (temporal component)Planck–Einstein relationE=ℏω{\displaystyle E=\hbar \omega } The (spatial components)de Brogliematter waverelationp→=ℏk→{\displaystyle {\vec {p}}=\hbar {\vec {k}}} Second:[5]:300 K=(ωc,k→)=i∂=i(∂tc,−∇→){\displaystyle \mathbf {K} =\left({\frac {\omega }{c}},{\vec {k}}\right)=i{\boldsymbol {\partial }}=i\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)}which is just the 4-gradient version of thewave equationforcomplex-valuedplane waves The temporal component gives:ω=i∂t{\displaystyle \omega =i\partial _{t}} The spatial components give:k→=−i∇→{\displaystyle {\vec {k}}=-i{\vec {\nabla }}} In quantum mechanics (physics), thecanonical commutation relationis the fundamental relation between canonical conjugate quantities (quantities which are related by definition such that one is the Fourier transform of another). 
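The Schrödinger relations above can be checked on a 1-D plane wave, where the operators act as multiplication by ħω and ħk. A sympy sketch (symbols illustrative):

```python
import sympy as sp

# Operator check on a 1-D plane wave ψ = exp(i(kx − ωt)): E = iħ∂_t and
# p = −iħ∂_x act as multiplication by ħω and ħk respectively.
t, x = sp.symbols('t x', real=True)
w, k, hbar = sp.symbols('omega k hbar', real=True, positive=True)
psi = sp.exp(sp.I * (k * x - w * t))
E_psi = sp.I * hbar * sp.diff(psi, t)    # energy operator acting on ψ
p_psi = -sp.I * hbar * sp.diff(psi, x)   # momentum operator acting on ψ
assert sp.simplify(E_psi - hbar * w * psi) == 0
assert sp.simplify(p_psi - hbar * k * psi) == 0
```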
The 4-gradient is a component in several of the relativistic wave equations:[5]:300–309[3]: 25, 30–31, 55–69 In theKlein–Gordon relativistic quantum wave equationfor spin-0 particles (ex.Higgs boson):[7]: 5[(∂μ∂μ)+(m0cℏ)2]ψ=0{\displaystyle \left[\left(\partial ^{\mu }\partial _{\mu }\right)+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} In theDirac relativistic quantum wave equationfor spin-1/2 particles (ex.electrons):[7]: 130[iγμ∂μ−m0cℏ]ψ=0{\displaystyle \left[i\gamma ^{\mu }\partial _{\mu }-{\frac {m_{0}c}{\hbar }}\right]\psi =0} whereγμ{\displaystyle \gamma ^{\mu }}are theDirac gamma matricesandψ{\displaystyle \psi }is a relativisticwave function. ψ{\displaystyle \psi }isLorentz scalarfor the Klein–Gordon equation, and aspinorfor the Dirac equation. It is nice that the gamma matrices themselves refer back to the fundamental aspect of SR, the Minkowski metric:[7]: 130{γμ,γν}=γμγν+γνγμ=2ημνI4{\displaystyle \left\{\gamma ^{\mu },\gamma ^{\nu }\right\}=\gamma ^{\mu }\gamma ^{\nu }+\gamma ^{\nu }\gamma ^{\mu }=2\eta ^{\mu \nu }I_{4}} Conservation of 4-probability current density follows from the continuity equation:[7]: 6∂⋅J=∂tρ+∇→⋅j→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {J} =\partial _{t}\rho +{\vec {\boldsymbol {\nabla }}}\cdot {\vec {\mathbf {j} }}=0} The4-probability current densityhas the relativistically covariant expression:[7]: 6Jprobμ=iℏ2m0(ψ∗∂μψ−ψ∂μψ∗){\displaystyle J_{\text{prob}}^{\mu }={\frac {i\hbar }{2m_{0}}}\left(\psi ^{*}\partial ^{\mu }\psi -\psi \partial ^{\mu }\psi ^{*}\right)} The4-charge current densityis just the charge (q) times the 4-probability current density:[7]: 8Jchargeμ=iℏq2m0(ψ∗∂μψ−ψ∂μψ∗){\displaystyle J_{\text{charge}}^{\mu }={\frac {i\hbar q}{2m_{0}}}\left(\psi ^{*}\partial ^{\mu }\psi -\psi \partial ^{\mu }\psi ^{*}\right)} Relativistic wave equationsuse 4-vectors in order to be covariant.[3][7] Start with the standard SR 4-vectors:[1] Note the following simple relations from the previous sections, where 
each 4-vector is related to another by aLorentz scalar: Now, just apply the standard Lorentz scalar product rule to each one:U⋅U=c2P⋅P=(m0c)2K⋅K=(m0cℏ)2∂⋅∂=(−im0cℏ)2=−(m0cℏ)2{\displaystyle {\begin{aligned}\mathbf {U} \cdot \mathbf {U} &=c^{2}\\\mathbf {P} \cdot \mathbf {P} &=(m_{0}c)^{2}\\\mathbf {K} \cdot \mathbf {K} &=\left({\frac {m_{0}c}{\hbar }}\right)^{2}\\{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}&=\left({\frac {-im_{0}c}{\hbar }}\right)^{2}=-\left({\frac {m_{0}c}{\hbar }}\right)^{2}\end{aligned}}} The last equation (with the 4-gradient scalar product) is a fundamental quantum relation. When applied to a Lorentz scalar fieldψ{\displaystyle \psi }, one gets the Klein–Gordon equation, the most basic of the quantumrelativistic wave equations:[7]: 5–8[∂⋅∂+(m0cℏ)2]ψ=0{\displaystyle \left[{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} TheSchrödinger equationis the low-velocitylimiting case(|v| ≪c) of theKlein–Gordon equation.[7]: 7–8 If the quantum relation is applied to a 4-vector fieldAμ{\displaystyle A^{\mu }}instead of a Lorentz scalar fieldψ{\displaystyle \psi }, then one gets theProca equation:[7]: 361[∂⋅∂+(m0cℏ)2]Aμ=0μ{\displaystyle \left[{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]A^{\mu }=0^{\mu }} If the rest mass term is set to zero (light-like particles), then this gives the freeMaxwell equation:[∂⋅∂]Aμ=0μ{\displaystyle [{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}]A^{\mu }=0^{\mu }} More complicated forms and interactions can be derived by using theminimal couplingrule: In modernelementaryparticle physics, one can define agauge covariant derivativewhich utilizes the extra RQM fields (internal particle spaces) now known to exist. 
The version known from classical EM (in natural units) is:[3]: 39

{\displaystyle D^{\mu }=\partial ^{\mu }-igA^{\mu }}

The full covariant derivative for the fundamental interactions of the Standard Model that we are presently aware of (in natural units) is:[3]: 35–53

{\displaystyle D^{\mu }=\partial ^{\mu }-ig_{1}{\frac {1}{2}}YB^{\mu }-ig_{2}{\frac {1}{2}}\tau _{i}\cdot W_{i}^{\mu }-ig_{3}{\frac {1}{2}}\lambda _{a}\cdot G_{a}^{\mu }}

or

{\displaystyle \mathbf {D} ={\boldsymbol {\partial }}-ig_{1}{\frac {1}{2}}Y\mathbf {B} -ig_{2}{\frac {1}{2}}{\boldsymbol {\tau }}_{i}\cdot \mathbf {W} _{i}-ig_{3}{\frac {1}{2}}{\boldsymbol {\lambda }}_{a}\cdot \mathbf {G} _{a}}

where the scalar product summations (⋅) here refer to the internal spaces, not the tensor indices. The coupling constants (g1, g2, g3) are arbitrary numbers that must be determined by experiment. It is worth emphasizing that for the non-abelian transformations, once the gi are fixed for one representation, they are known for all representations. These internal particle spaces have been discovered empirically.[3]: 47

In three dimensions, the gradient operator maps a scalar field to a vector field such that the line integral between any two points in the vector field is equal to the difference between the scalar field at these two points. Based on this, it may appear, incorrectly, that the natural extension of the gradient to 4 dimensions should be:

{\displaystyle \partial ^{\alpha }{\overset {?}{=}}\left({\frac {\partial }{\partial t}},{\vec {\nabla }}\right),}

which is incorrect. A line integral involves the application of the vector dot product, and when this is extended to 4-dimensional spacetime, a change of sign is introduced to either the spatial coordinates or the time coordinate, depending on the convention used.
This is due to the non-Euclidean nature of spacetime. In this article, we place a negative sign on the spatial coordinates (the time-positive metric convention η^{μν} = diag[1, −1, −1, −1]). The factor of (1/c) keeps the correct unit dimensionality, [length]^−1, for all components of the 4-vector, and the (−1) keeps the 4-gradient Lorentz covariant. Adding these two corrections to the above expression gives the correct definition of the 4-gradient:[1]: 55–56 [3]: 16

{\displaystyle \partial ^{\alpha }=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},-{\vec {\nabla }}\right)}

Regarding the use of scalars, 4-vectors and tensors in physics, various authors use slightly different notations for the same equations. For instance, some use m for the invariant rest mass, while others use m0 for the invariant rest mass and m for the relativistic mass. Many authors set the factors c, ħ and G to dimensionless unity; others keep some or all of the constants. Some authors use v for velocity, others use u. Some use K as a 4-wavevector (to pick an arbitrary example); others use k, or K, or kμ, or k_μ, or Kν, or N, etc. Some write the 4-wavevector as (ω/c, k), some as (k, ω/c), or (k0, k), or (k0, k1, k2, k3), or (k1, k2, k3, k4), or (kt, kx, ky, kz), or (k1, k2, k3, ik4).
Some make sure that the dimensional units match across the 4-vector; others do not. Some refer to the temporal component in the 4-vector's name; others refer to the spatial component. Some mix conventions throughout a book, sometimes using one and later the other. Some use the metric (+ − − −), others use the metric (− + + +). Some don't use 4-vectors at all, but do everything in the old style with the energy E and the 3-space vector p. All of these are just notational styles, some clearer and more concise than others. The physics is the same as long as one uses a consistent style throughout the whole derivation.[7]: 2–4
https://en.wikipedia.org/wiki/Four-gradient
A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or mathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks.

In the context of biology, a neural network is a population of biological neurons chemically connected to each other by synapses. A given neuron can be connected to hundreds of thousands of synapses.[1] Each neuron sends and receives electrochemical signals called action potentials to its connected neighbors. A neuron can serve an excitatory role, amplifying and propagating the signals it receives, or an inhibitory role, suppressing signals instead.[1]

Populations of interconnected neurons that are smaller than neural networks are called neural circuits. Very large interconnected networks are called large scale brain networks, and many of these together form brains and nervous systems. Signals generated by neural networks in the brain eventually travel through the nervous system and across neuromuscular junctions to muscle cells, where they cause contraction and thereby motion.[2]

In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines,[3] today they are almost always implemented in software. Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).[4] The "signal" input to each neuron is a number, specifically a linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons.
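The layered computation described above can be sketched in a few lines of NumPy. The layer sizes, the random weights, and the choice of activation function (here the logistic sigmoid) are illustrative assumptions, not part of any particular network:

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Pass input x through a list of (weights, biases) layers.

    Each neuron's input is a linear combination of the previous
    layer's outputs; its output is the activation of that sum.
    """
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
# A tiny 3-input / 4-hidden / 2-output network with random weights
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
y = forward(np.array([1.0, -0.5, 0.2]), layers)
print(y.shape)  # (2,)
```

Training would then adjust the entries of each `W` and `b` to reduce the error of `y` against known targets.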
A network is trained by modifying these weights through empirical risk minimization or backpropagation in order to fit some preexisting dataset.[5]

The term deep neural network refers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers. Neural networks are used to solve problems in artificial intelligence, and have thereby found applications in many disciplines, including predictive modeling, adaptive control, facial recognition, handwriting recognition, general game playing, and generative AI.

The theoretical base for contemporary neural networks was independently proposed by Alexander Bain in 1873[6] and William James in 1890.[7] Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.[8]

Artificial neural networks were originally used to model biological neural networks, starting in the 1930s under the approach of connectionism. However, starting with the invention of the perceptron, a simple artificial neural network, by Warren McCulloch and Walter Pitts in 1943,[9] followed by the implementation of one in hardware by Frank Rosenblatt in 1957,[3] artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
https://en.wikipedia.org/wiki/Neural_network#Feedforward_neural_networks
A kernel smoother is a statistical technique to estimate a real-valued function f: ℝ^p → ℝ as the weighted average of neighboring observed data. The weight is defined by the kernel, such that closer points are given higher weights. The estimated function is smooth, and the level of smoothness is set by a single parameter. Kernel smoothing is a type of weighted moving average.

Let K_{h_λ}(X0, X) be a kernel defined by

where:

Popular kernels used for smoothing include the parabolic (Epanechnikov), tricube, and Gaussian kernels.

Let Y(X): ℝ^p → ℝ be a continuous function of X. For each X0 ∈ ℝ^p, the Nadaraya–Watson kernel-weighted average (smooth Y(X) estimate) is defined by

where:

In the following sections, we describe some particular cases of kernel smoothers.

The Gaussian kernel is one of the most widely used kernels, and is expressed by the equation below. Here, b is the length scale for the input space.

The k-nearest neighbor algorithm can be used to define a k-nearest neighbor smoother as follows. For each point X0, take the m nearest neighbors and estimate the value of Y(X0) by averaging the values of these neighbors. Formally, h_m(X0) = ‖X0 − X_[m]‖, where X_[m] is the mth closest neighbor to X0, and

In this example, X is one-dimensional. For each X0, Ŷ(X0) is the average value of the 16 points closest to X0 (denoted by red).

The idea of the kernel average smoother is the following. For each data point X0, choose a constant distance size λ (kernel radius, or window width for p = 1 dimension), and compute a weighted average over all data points that are closer than λ to X0 (the points closer to X0 get higher weights).
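The Nadaraya–Watson average with a Gaussian kernel follows directly from the definition: weight each observation by its kernel distance to the query point and normalize. The sample data and the length scale b below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(x0, x, b):
    # Gaussian kernel: weight decays with squared distance, length scale b
    return np.exp(-((x - x0) ** 2) / (2 * b ** 2))

def nadaraya_watson(x0, X, Y, b):
    """Kernel-weighted average of the observations Y at the query point x0."""
    w = gaussian_kernel(x0, X, b)
    return np.sum(w * Y) / np.sum(w)

# Noisy samples of a smooth function
rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 200)
Y = np.sin(X) + 0.3 * rng.normal(size=X.size)

# Smoothed estimate on a grid of interior query points
grid = np.linspace(0.5, 5.5, 50)
Y_hat = np.array([nadaraya_watson(x0, X, Y, b=0.4) for x0 in grid])
```

Increasing b trades variance for bias: a larger length scale averages over more points, giving a smoother but flatter estimate.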
Formally, h_λ(X0) = λ = constant, and D(t) is one of the popular kernels.

For each X0 the window width is constant, and the weight of each point in the window is schematically denoted by the yellow figure in the graph. It can be seen that the estimate is smooth, but the boundary points are biased. The reason is the unequal number of points (to the right and to the left of X0) in the window when X0 is close enough to the boundary.

In the two previous sections we assumed that the underlying Y(X) function is locally constant; therefore we were able to use the weighted average for the estimate. The idea of local linear regression is to fit locally a straight line (or a hyperplane for higher dimensions), rather than a constant (horizontal line). After fitting the line, the estimate Ŷ(X0) is provided by the value of this line at the point X0. By repeating this procedure for each X0, one obtains the estimated function Ŷ(X). As in the previous section, the window width is constant, h_λ(X0) = λ = constant. Formally, local linear regression is computed by solving a weighted least squares problem. For one dimension (p = 1):

{\displaystyle {\begin{aligned}\min _{\alpha (X_{0}),\beta (X_{0})}&\sum _{i=1}^{N}K_{h_{\lambda }}(X_{0},X_{i})\left(Y(X_{i})-\alpha (X_{0})-\beta (X_{0})X_{i}\right)^{2}\\&\Downarrow \\{\hat {Y}}(X_{0})&=\alpha (X_{0})+\beta (X_{0})X_{0}\end{aligned}}}

The closed-form solution is given by:

where:

The resulting function is smooth, and the problem with the biased boundary points is reduced.
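The weighted least squares problem above has the familiar closed form (AᵀWA)⁻¹AᵀWy, refitted at every query point. A minimal sketch with an illustrative Gaussian weighting in place of a generic kernel:

```python
import numpy as np

def local_linear(x0, X, Y, lam):
    """Fit a kernel-weighted straight line around x0 and evaluate it at x0."""
    w = np.exp(-((X - x0) ** 2) / (2 * lam ** 2))   # kernel weights
    A = np.column_stack([np.ones_like(X), X])       # design matrix [1, x]
    W = np.diag(w)
    # Weighted least squares normal equations: (A^T W A) beta = A^T W Y
    alpha, beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ Y)
    return alpha + beta * x0

X = np.linspace(0, 10, 101)
Y = 2.0 * X + 1.0                      # exactly linear data
est = local_linear(5.0, X, Y, lam=1.0)
print(est)  # ~11.0: a local line recovers a global line exactly
```

On exactly linear data the estimate is exact even at the boundary x0 = 0, illustrating why local linear fits reduce the boundary bias of the plain kernel average.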
Local linear regression can be applied to a space of any dimension, though the question of what constitutes a local neighborhood becomes more complicated. It is common to use the k nearest training points to a test point to fit the local linear regression. This can lead to high variance of the fitted function. To bound the variance, the set of training points should contain the test point in their convex hull (see the Gupta et al. reference).

Instead of fitting locally linear functions, one can fit polynomial functions. For p = 1, one should minimize:

with Ŷ(X0) = α(X0) + Σ_{j=1}^{d} β_j(X0) X0^j

In the general case (p > 1), one should minimize:
https://en.wikipedia.org/wiki/Nearest_neighbor_smoothing
This list of sequence alignment software is a compilation of software tools and web portals used in pairwise sequence alignment and multiple sequence alignment. See structural alignment software for structural alignment of proteins.

*Sequence type: protein or nucleotide
**Alignment type: local or global

Please see List of alignment visualization software.
https://en.wikipedia.org/wiki/Sequence_alignment_software
The Kimball lifecycle is a methodology for developing data warehouses, developed by Ralph Kimball and a variety of colleagues. The methodology "covers a sequence of high level tasks for the effective design, development and deployment" of a data warehouse or business intelligence system.[1] It is considered a "bottom-up" approach to data warehousing, as pioneered by Ralph Kimball, in contrast to the older "top-down" approach pioneered by Bill Inmon.[2]

According to Ralph Kimball et al., the planning phase is the start of the lifecycle. In this planning phase, a project is a single iteration of the lifecycle, while a program is the broader coordination of resources. When launching a project or program, Kimball et al. suggest three focus areas.

Managing the project or program is an ongoing discipline throughout. Its purpose is to keep the project/program on course, develop a communication plan, and manage expectations.

The business requirements phase, or milestone, is about making the project team understand the business requirements. Its purpose is to establish a foundation for all the following activities in the lifecycle. Kimball et al. make it clear that it is important for the project team to talk with the business users, and team members should be prepared to focus on listening and to document the user interviews. An output of this step is the enterprise bus matrix.

The top track holds two milestones. Dimensional modeling is a process in which the business requirements are used to design dimensional models for the system. Physical design is the phase in which the database is designed; it involves the database environment as well as security.

Extract, transform, load (ETL) design and development is the design of some of the heavy procedures in the data warehouse and business intelligence system. Kimball et al.
suggest four parts to this process, which are further divided into 34 subsystems:[3]

Business intelligence application design deals with designing and selecting applications to support the business requirements. Business intelligence application development uses the design to develop and validate applications to support the business requirements.

When the three tracks are complete, they all converge in the final deployment. This phase requires planning and should include pre-deployment testing, documentation, training, and maintenance and support.

When the deployment has finished, the system needs proper maintenance to stay alive. This includes data reconciliation, execution and monitoring, and performance tuning. As the project can be seen as part of the larger iterative program, it is likely that the system will need to expand: there will be projects to add new data as well as to reach new segments of the business areas. The lifecycle then starts over again.
https://en.wikipedia.org/wiki/The_Kimball_lifecycle
In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system.

The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.

The iterated function to be studied is a map f: X → X for an arbitrary set X. The transfer operator is defined as an operator 𝓛 acting on the space of functions {Φ: X → ℂ} as

{\displaystyle ({\mathcal {L}}\Phi )(x)=\sum _{y\,\in \,f^{-1}(x)}g(y)\Phi (y)}

where g: X → ℂ is an auxiliary valuation function. When f has a Jacobian determinant |J|, then g is usually taken to be g = 1/|J|.

The above definition of the transfer operator can be shown to be the point-set limit of the measure-theoretic pushforward of g: in essence, the transfer operator is the direct image functor in the category of measurable spaces. The left adjoint of the Perron–Frobenius operator is the Koopman operator or composition operator. The general setting is provided by the Borel functional calculus.

As a general rule, the transfer operator can usually be interpreted as a (left) shift operator acting on a shift space. The most commonly studied shifts are the subshifts of finite type. The adjoint to the transfer operator can likewise usually be interpreted as a right shift. Particularly well-studied right shifts include the Jacobi operator and the Hessenberg matrix, both of which generate systems of orthogonal polynomials via a right shift.
Whereas the iteration of a function f naturally leads to a study of the orbits of points of X under iteration (the study of point dynamics), the transfer operator defines how (smooth) maps evolve under iteration. Thus, transfer operators typically appear in physics problems, such as quantum chaos and statistical mechanics, where attention is focused on the time evolution of smooth functions. In turn, this has medical applications to rational drug design, through the field of molecular dynamics.

It is often the case that the transfer operator is positive and has discrete positive real-valued eigenvalues, with the largest eigenvalue equal to one. For this reason, the transfer operator is sometimes called the Frobenius–Perron operator.

The eigenfunctions of the transfer operator are usually fractals. When the logarithm of the transfer operator corresponds to a quantum Hamiltonian, the eigenvalues will typically be very closely spaced, and thus even a very narrow and carefully selected ensemble of quantum states will encompass a large number of very different fractal eigenstates with non-zero support over the entire volume. This can be used to explain many results from classical statistical mechanics, including the irreversibility of time and the increase of entropy.

The transfer operator of the Bernoulli map b(x) = 2x − ⌊2x⌋ is exactly solvable and is a classic example of deterministic chaos; the discrete eigenvalues correspond to the Bernoulli polynomials. This operator also has a continuous spectrum consisting of the Hurwitz zeta function.

The transfer operator of the Gauss map h(x) = 1/x − ⌊1/x⌋ is called the Gauss–Kuzmin–Wirsing (GKW) operator. The theory of the GKW operator dates back to a hypothesis by Gauss on continued fractions and is closely related to the Riemann zeta function.
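For the Bernoulli map, each point x has the two preimages x/2 and (x+1)/2, and with g = 1/|J| = 1/2 the transfer operator reduces to (𝓛f)(x) = ½[f(x/2) + f((x+1)/2)]; the Bernoulli polynomials B_n are then eigenfunctions with eigenvalues 2^(−n). A quick numerical check (the sample points are arbitrary):

```python
import numpy as np

def transfer_bernoulli(f):
    """Transfer operator of b(x) = 2x mod 1: average over the two preimages."""
    return lambda x: 0.5 * (f(x / 2) + f((x + 1) / 2))

def B2(x):
    # Second Bernoulli polynomial, an eigenfunction with eigenvalue 1/4
    return x**2 - x + 1.0 / 6.0

x = np.linspace(0, 1, 11)
lhs = transfer_bernoulli(B2)(x)
assert np.allclose(lhs, 0.25 * B2(x))   # L B2 = 2^{-2} B2
print("B2 is an eigenfunction with eigenvalue 1/4")
```

The constant function B_0 = 1 is likewise fixed by 𝓛 (eigenvalue 1), which is exactly the statement that the uniform measure on [0, 1] is invariant under the Bernoulli map.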
https://en.wikipedia.org/wiki/Transfer_operator
Exception(s), The Exception(s), or exceptional may refer to:
https://en.wikipedia.org/wiki/Exception#Synchronous_exceptions
In mathematics, the concept of a generalised metric is a generalisation of that of a metric, in which the distance is not a real number but is taken from an arbitrary ordered field.

In general, when we define a metric space, the distance function is taken to be a real-valued function. The real numbers form an ordered field which is Archimedean and order complete. These metric spaces have some nice properties: in a metric space, compactness, sequential compactness and countable compactness are equivalent, and so on. These properties may not, however, hold so easily if the distance function is taken in an arbitrary ordered field instead of in ℝ.

Let (F, +, ⋅, <) be an arbitrary ordered field, and M a nonempty set; a function d: M × M → F⁺ ∪ {0} is called a metric on M if the following conditions hold:

It is not difficult to verify that the open balls B(x, δ) := {y ∈ M : d(x, y) < δ} form a basis for a suitable topology, the latter called the metric topology on M, with the metric in F.

Since F in its order topology is monotonically normal, we would expect M to be at least regular. However, under the axiom of choice, every general metric is monotonically normal, for, given x ∈ G, where G is open, there is an open ball B(x, δ) such that x ∈ B(x, δ) ⊆ G. Take μ(x, G) = B(x, δ/2) and verify the conditions for monotone normality.

Remarkably, even without choice, general metrics are monotonically normal.

Proof. Case I: F is an Archimedean field.
Now, if x ∈ G, with G open, we may take μ(x, G) := B(x, 1/2n(x, G)), where n(x, G) := min{n ∈ ℕ : B(x, 1/n) ⊆ G}, and the trick is done without choice.

Case II: F is a non-Archimedean field.

For given x ∈ G, where G is open, consider the set A(x, G) := {a ∈ F : for all n ∈ ℕ, B(x, n⋅a) ⊆ G}.

The set A(x, G) is non-empty. For, as G is open, there is an open ball B(x, k) within G. Now, as F is non-Archimedean, ℕ_F is not bounded above, hence there is some ξ ∈ F such that for all n ∈ ℕ, n⋅1 ≤ ξ. Putting a = k⋅(2ξ)⁻¹, we see that a is in A(x, G).

Now define μ(x, G) = ⋃{B(x, a) : a ∈ A(x, G)}. We will show that, with respect to this mu operator, the space is monotonically normal. Note that μ(x, G) ⊆ G.

If y is not in G (an open set containing x) and x is not in H (an open set containing y), then we show that μ(x, G) ∩ μ(y, H) is empty. If not, say z is in the intersection.
Then ∃a ∈ A(x, G): d(x, z) < a and ∃b ∈ A(y, H): d(z, y) < b.

From the above, we get that d(x, y) ≤ d(x, z) + d(z, y) < 2⋅max{a, b}, which is impossible, since this would imply that either y belongs to μ(x, G) ⊆ G or x belongs to μ(y, H) ⊆ H. This completes the proof.
https://en.wikipedia.org/wiki/Generalised_metric
A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f : X → B, where X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set (for example B = {0, 1}), whose elements are interpreted as logical values, for example 0 = false and 1 = true, i.e., a single bit of information.

In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of these uses, it is understood that the various terms refer to a mathematical object and not the corresponding semiotic sign or syntactic expression.

In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth predicate may have additional domains beyond the formal-language domain, if that is what is required to determine a final truth value.
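In programming terms, a Boolean-valued function is simply a predicate, and the characteristic (indicator) function of a subset is one concrete instance. A minimal sketch:

```python
def indicator(subset):
    """Return the characteristic function of `subset`: X -> {False, True}.

    The returned predicate maps x to True exactly when x is in the subset,
    i.e. it is a Boolean-valued function on the ambient set X.
    """
    return lambda x: x in subset

evens = indicator({0, 2, 4, 6, 8})
print(evens(4), evens(3))  # True False
```

Here the Boolean domain B is Python's {False, True}, and any other two-element set (such as {0, 1}) would serve equally well.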
https://en.wikipedia.org/wiki/Boolean-valued_function
In computer science, lazy deletion refers to a method of deleting elements from a hash table that uses open addressing. In this method, deletions are done by marking an element as deleted, rather than erasing it entirely. Deleted locations are treated as empty when inserting and as occupied during a search. The deleted locations are sometimes referred to as tombstones.[1]

The problem with this scheme is that as the number of delete/insert operations increases, the cost of a successful search increases. To improve this, when an element is searched for and found in the table, the element is relocated to the first location marked for deletion that was probed during the search. Instead of finding an element to relocate when the deletion occurs, the relocation occurs lazily during the next search.[2][3]
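A minimal open-addressing hash set with tombstones can be sketched as follows; linear probing and the fixed table size are illustrative choices, and the relocation-on-search optimization described above is omitted for brevity:

```python
EMPTY, TOMBSTONE = object(), object()

class LazyHashSet:
    """Open-addressing hash set with lazy (tombstone) deletion."""

    def __init__(self, size=17):
        self.slots = [EMPTY] * size

    def _probe(self, key):
        # Linear probing sequence starting at the key's home slot
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            yield i
            i = (i + 1) % len(self.slots)

    def add(self, key):
        for i in self._probe(key):
            # Tombstones count as empty when inserting
            if self.slots[i] is EMPTY or self.slots[i] is TOMBSTONE:
                self.slots[i] = key
                return
            if self.slots[i] == key:
                return
        raise RuntimeError("table full")

    def remove(self, key):
        for i in self._probe(key):
            if self.slots[i] is EMPTY:      # genuine gap: key absent
                return
            if self.slots[i] == key:
                self.slots[i] = TOMBSTONE   # mark, don't erase
                return

    def __contains__(self, key):
        for i in self._probe(key):
            if self.slots[i] is EMPTY:      # tombstones are passed over here
                return False
            if self.slots[i] == key:
                return True
        return False
```

Because a tombstone is treated as occupied during a search, removing a key does not break the probe chain of other keys that collided with it; a genuinely empty slot, by contrast, terminates the search.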
https://en.wikipedia.org/wiki/Lazy_deletion
A package manager or package management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer in a consistent manner.[1]

A package manager deals with packages, distributions of software and data in archive files. Packages contain metadata, such as the software's name, a description of its purpose, the version number, the vendor, a checksum (preferably a cryptographic hash function), and a list of dependencies necessary for the software to run properly. Upon installation, metadata is stored in a local package database. Package managers typically maintain a database of software dependencies and version information to prevent software mismatches and missing prerequisites. They work closely with software repositories, binary repository managers, and app stores.

Package managers are designed to eliminate the need for manual installs and updates. This can be particularly useful for large enterprises whose operating systems typically consist of hundreds or even tens of thousands of distinct software packages.[2]

An early package manager was SMIT (and its backend installp) from IBM AIX. SMIT was introduced with AIX 3.0 in 1989.[citation needed]

Early package managers, from around 1994, had no automatic dependency resolution[3] but could already drastically simplify the process of adding and removing software from a running system.[4]

By around 1995, beginning with CPAN, package managers began doing the work of downloading packages from a repository, automatically resolving their dependencies and installing them as needed, making it much easier to install, uninstall and update software on a system.[5]

A software package is an archive file containing a computer program as well as the metadata necessary for its deployment.
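At its core, automatic dependency resolution amounts to computing an install order in which every package is preceded by its dependencies, i.e. a topological sort of the dependency graph. A sketch using Python's standard library, with a made-up package set (the package names and dependencies are purely illustrative):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency metadata: package -> set of packages it requires
deps = {
    "webapp": {"python", "libssl"},
    "python": {"libssl", "zlib"},
    "libssl": set(),
    "zlib":   set(),
}

# TopologicalSorter treats each mapped set as predecessors,
# so static_order() yields dependencies before their dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)  # e.g. ['libssl', 'zlib', 'python', 'webapp']
```

Real package managers layer much more on top of this, such as version constraints, conflicts, and cycle handling (`TopologicalSorter` raises `CycleError` on circular dependencies), but the install-order skeleton is the same.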
The computer program can be in source code that has to be compiled and built first.[6] Package metadata includes the package description, package version, and dependencies (other packages that need to be installed beforehand).

Package managers are charged with the task of finding, installing, maintaining or uninstalling software packages upon the user's command. Typical functions of a package management system include:

Computer systems that rely on dynamic library linking, instead of static library linking, share executable libraries of machine instructions across packages and applications. In these systems, conflicting relationships between different packages requiring different versions of libraries result in a challenge colloquially known as "dependency hell". On Microsoft Windows systems, this is also called "DLL hell" when working with dynamically linked libraries.[7]

Modern package managers have mostly solved these problems by allowing the parallel installation of multiple versions of a library (e.g. OPENSTEP's Framework system), of a dependency of any kind (e.g. slots in Gentoo Portage), and even of packages compiled with different compiler versions (e.g. dynamic libraries built by the Glasgow Haskell Compiler, where a stable ABI does not exist), in order to enable other packages to specify which version they were linked or even installed against.

System administrators may install and maintain software using tools other than package management software. For example, a local administrator may download unpackaged source code, compile it, and install it. This may cause the state of the local system to fall out of synchronization with the state of the package manager's database. The local administrator will be required to take additional measures, such as manually managing some dependencies or integrating the changes into the package manager. There are tools available to ensure that locally compiled packages are integrated with the package management system.
For distributions based on .deb and .rpm files, as well as Slackware Linux, there is CheckInstall, and for recipe-based systems such as Gentoo Linux and hybrid systems such as Arch Linux, it is possible to write a recipe first, which then ensures that the package fits into the local package database.[citation needed]

Particularly troublesome with software upgrades are upgrades of configuration files. Since package managers, at least on Unix systems, originated as extensions of file archiving utilities, they can usually only either overwrite or retain configuration files, rather than applying rules to them. There are exceptions to this that usually apply to kernel configuration (which, if broken, will render the computer unusable after a restart). Problems can be caused if the format of configuration files changes; for instance, if the old configuration file does not explicitly disable new options that should be disabled. Some package managers, such as Debian's dpkg, allow configuration during installation. In other situations, it is desirable to install packages with the default configuration and then overwrite this configuration, for instance, in headless installations to a large number of computers. This kind of pre-configured installation is also supported by dpkg.

To give users more control over the kinds of software that they are allowing to be installed on their system (and sometimes due to legal or convenience reasons on the distributors' side), software is often downloaded from a number of software repositories.[8]

When a user interacts with the package management software to bring about an upgrade, it is customary to present the user with the list of actions to be executed (usually the list of packages to be upgraded, possibly giving the old and new version numbers), and to allow the user to either accept the upgrade in bulk or select individual packages for upgrade.
Many package managers can be configured to never upgrade certain packages, or to upgrade them only when critical vulnerabilities or instabilities are found in the previous version, as defined by the packager of the software. This process is sometimes called version pinning.

Some of the more advanced package management features offer "cascading package removal",[10] in which all packages that depend on the target package, and all packages that only the target package depends on, are also removed.

Although the commands are specific to every particular package manager, they are to a large extent translatable, as most package managers offer similar functions. The Arch Linux Pacman/Rosetta wiki offers an extensive overview.[16]

Package managers like dpkg have existed since as early as 1994.[17]

Linux distributions oriented to binary packages rely heavily on package management systems as their primary means of managing and maintaining software. Mobile operating systems such as Android (Linux-based) and iOS (Unix-based) rely almost exclusively on their respective vendors' app stores and thus use their own dedicated package management systems.

A package manager is often called an "install manager", which can lead to confusion between package managers and installers. The differences include:

Most software configuration management systems treat building software and deploying software as separate, independent steps. A build automation utility typically takes human-readable source code files already on a computer and automates the process of converting them into a binary executable package on the same or a remote computer. Later, a package manager, typically running on some other computer, downloads those pre-built binary executable packages over the internet and installs them.
However, both kinds of tools have many commonalities. A few tools, such as Maak and A-A-P, are designed to handle both building and deployment, and can be used as either a build automation utility or as a package manager, or both.[18]

App stores can also be considered application-level package managers (without the ability to install all levels of programs[19][20]). Unlike traditional package managers, app stores are designed to enable payment for the software itself (instead of for software development), and may only offer monolithic packages with no dependencies or dependency resolution.[21][20] They are usually extremely limited in their management functionality, due to a strong focus on simplification over power or emergence, and are common in commercial operating systems and locked-down "smart" devices.

Traditional package repositories also often carry only human-reviewed code. Many app stores, such as Google Play and Apple's App Store, screen apps mostly using automated tools; malware with defeat devices can pass these tests by detecting when the software is being automatically tested and delaying malicious activity.[22][23][24] There are, however, exceptions: the npm package database, for instance, relies entirely on post-publication review of its code,[25][26] while the Debian package database has an extensive human review process before any package goes into the main stable database. The XZ Utils backdoor used years of trust-building to insert a backdoor, which was nonetheless caught while in the testing database.

A universal package manager, also known as a binary repository manager, is a software tool designed to optimize the download and storage of binary files, artifacts and packages used and produced in the software development process.[27] These package managers aim to standardize the way enterprises treat all package types. They give users the ability to apply security and compliance metrics across all artifact types.
Universal package managers have been referred to as being at the center of a DevOps toolchain.[28]

Each package manager relies on the format and metadata of the packages it can manage. That is, package managers need groups of files to be bundled for the specific package manager along with appropriate metadata, such as dependencies. Often, a core set of utilities manages the basic installation from these packages, and multiple package managers use these utilities to provide additional functionality. For example, yum relies on rpm as a backend. Yum extends the functionality of the backend by adding features such as simple configuration for maintaining a network of systems. As another example, the Synaptic Package Manager provides a graphical user interface by using the Advanced Packaging Tool (apt) library, which, in turn, relies on dpkg for core functionality.

Alien is a program that converts between different Linux package formats, supporting conversion between Linux Standard Base (LSB) compliant .rpm packages, .deb, Stampede (.slp), Solaris (.pkg) and Slackware (.tgz, .txz, .tbz, .tlz) packages.

In mobile operating systems, Google Play consumes the Android application package (APK) format while the Microsoft Store uses the APPX and XAP formats. (Both Google Play and Microsoft Store have eponymous package managers.)

By the nature of free and open source software, packages under similar and compatible licenses are available for use on a number of operating systems. These packages can be combined and distributed using configurable and internally complex packaging systems to handle many permutations of software and manage version-specific dependencies and conflicts. Some packaging systems of free and open source software are also themselves released as free and open source software.
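Alien's conversions are driven by a flag naming the target format; a brief sketch (run with root privileges, and with placeholder package filenames -- the converted package may still need manual adjustment):

```shell
# Convert an RPM package to Debian's .deb format; alien writes the
# new package to the current directory without installing it.
alien --to-deb somepackage-1.0-1.x86_64.rpm

# Convert in the other direction, from .deb to .rpm.
alien --to-rpm somepackage_1.0-1_amd64.deb
```

Because the formats encode dependencies differently, dependency metadata often does not survive conversion intact, which is one reason distributions discourage mixing converted packages with native ones.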
One typical difference between package management in proprietary operating systems, such as Mac OS X and Windows, and in free and open source software, such as Linux, is that free and open source software systems permit third-party packages to also be installed and upgraded through the same mechanism, whereas the package managers of Mac OS X and Windows will only upgrade software provided by Apple and Microsoft, respectively (with the exception of some third-party drivers in Windows). The ability to continuously upgrade third-party software is typically added by adding the URL of the corresponding repository to the package manager's configuration file.

Besides system-level package managers, there are add-on package managers for operating systems with limited capabilities and for programming languages in which developers need the latest libraries. Unlike system-level package managers, application-level package managers focus on a small part of the software system. They typically reside within a directory tree that is not maintained by the system-level package manager, such as c:\cygwin or /opt/sw.[29] However, this might not be the case for package managers that deal with programming libraries, leading to a possible conflict, as both package managers may claim to "own" a file and might break upgrades.

Ian Murdock commented that package management is "the single biggest advancement Linux has brought to the industry", that it blurs the boundaries between operating system and applications, and that it makes it "easier to push new innovations [...] into the marketplace and [...] evolve the OS".[30]

There is also a conference for package manager developers known as PackagingCon. It was established in 2021 with the aim of understanding different approaches to package management.[31]
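On a Debian-based system, enabling such a third-party repository amounts to one extra configuration line; a sketch with a placeholder repository URL:

```shell
# Add the repository URL to the package manager's configuration,
# then refresh the package index so its packages become available
# through the same upgrade mechanism as everything else.
echo 'deb https://example.org/debian stable main' | \
    sudo tee /etc/apt/sources.list.d/example.list
sudo apt-get update

# Note: real repositories also require importing the vendor's
# package-signing key before apt will trust them.
```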
https://en.wikipedia.org/wiki/Package_manager